\section{Introduction and Motivation}\par Some of the major problems in modern cosmology are dark matter, dark energy, the cosmological constant, and the Hubble tension \cite{Silk:2016srn, Peracaula:2022vpx}. These serious physical challenges show that the standard model of cosmology, despite all its advantages, cannot solve many of these problems accurately \cite{Perivolaropoulos:2021jda}. Dark energy is the energy responsible for the accelerating expansion of the present universe at large scales, while dark matter, as invisible and ghost-like matter, is responsible for the interconnection and balance of galaxies, clusters and superclusters. Both dark matter and dark energy have been confirmed by various direct and indirect observational methods, yet for decades no plausible physical source has been found for this major contribution to the universe's matter and energy \cite{1937ApJ....86..217Z,Peebles:2002gy}.\\ When the Hubble parameter is measured on local scales by cosmic-ladder scaling methods, the values measured for $H_{\rm 0}$ differ from the value reported by Planck, which depends on the standard $\Lambda$CDM model and uses CMB photons. This difference is called the \textit{Hubble tension}. On the one hand, scientists are looking for more accurate scaling methods, and on the other hand, they are looking for an alternative to the standard model to resolve the Hubble tension \cite{Jang:2017dxn,Riess:2020fzl,Pesce:2020xfe,Kim:2020gai,Abadi:2020hbr,DiValentino:2021izs}.\\ Recently, we have been able to show that a possible source of dark matter and dark energy could be the surface tension on the shells of supervoids \cite{Yusofi:2019sai,Yusofi:2022hgg}. We have assumed that the supervoids dominate in the large-scale overview. Cosmic (vast/super) voids are ideally considered to be spherical bubbles. The gravitational integration of galaxies over time leads, on the one hand, to the formation of over-dense regions such as clusters, superclusters, walls, strings, and filaments. On the other hand, as superclusters merge, almost empty spaces are created between them, and we call these under-dense regions among the galactic strings and walls cosmic voids \cite{Weygaert:2011}.\\ Hydrodynamic models and simulations of the formation of the cosmic web structure show that these bubbles are also merging \cite{Weygaert:2011,Carlesi:2014kua}. The standard cosmological model ignores the statics, dynamics and evolution of the supervoids that make up the main part of the late universe. Yet the supervoids are not completely empty: they have an energy density and an evolution, and because they are bulky they are more likely to merge with each other, which makes them much more suitable candidates for influencing the cosmic scales \cite{Weygaert:2011, Higuchi:2017hdo, Yusofi:2022hgg}. Therefore, these large inhomogeneities may well play a role in the dynamics of the universe, and their evolution may affect the values of the cosmic parameters \cite{Nan:2021prt,Vigneron:2019dpj,Buchert:2018vqi,Srivastava:2007en}.\\ In the hypothesis proposed in \cite{Yusofi:2022hgg}, cosmic voids are assumed to be spherical bubbles whose walls are surrounded by galactic superclusters.
By considering the walls as the ideal separating surface between the low-density bubble-like regions and the high-density droplet-like regions, we have obtained the resulting surface tension by a dimensional and heuristic calculation (see \tablename{ I}) \cite{Yusofi:2022hgg}. Then, by equating the energy density of cosmic voids with the vacuum energy density, we have shown that the value estimated for the cosmological constant is very close to that predicted by Planck's observations and has the same order of magnitude \cite{Yusofi:2022hgg}.\\ In this paper, in Sec. II, we discuss the simultaneous coexistence of supervoids and superclusters as two evolving parts of the cosmic web. Then, in the main Sec. III, we first calculate the mass density of a cosmic void and compare its magnitude with the density of the whole universe. We will then try to answer the two following important questions:\\ \textbf{i.} Given that the average diameter of cosmic voids is usually $100 {\rm Mpc}$, can a single cosmic void be a good representative of both the smaller local scales and the larger global scale?\\ \textbf{ii.} Can the slight differences between the cosmological constants obtained from our hypothesis represent acceptable values of the Hubble constant, and could the differences in their values be a possible solution of the $H_{\rm 0}$ tension?\\ In the final section of this article, we will briefly discuss the possibility of resolving the $H_{\rm 0}$ tension and other possible outcomes. \begin{table} \caption{The surface tension $ \gamma_{\rm i} $ on the shell of cosmic voids containing disk-shaped superclusters $i= 1, 2, 3, 4 $ \cite{Yusofi:2022hgg}.} \begin{center} \begin{tabular}{c c c c} \hline i. Super&\quad $M_{\rm i}$&\quad $R_{\rm i}$&\quad \quad $\gamma_{\rm i}$ \\ \quad cluster\quad&\quad($10^{47}{\rm {kg}}$)\quad & \quad($10^{24}{\rm {m}}$) \quad&\quad $(10^{15}{\rm {J.m^{-2}}})$ \\ \noalign{\smallskip}\hline \noalign{\smallskip}\hline 1. Corona &\quad $0.20$ &\quad $1.50$ &\quad $0.25$ \\ 2. Virgo &\quad $0.03$ &\quad $0.50$ &\quad $0.34$\\ 3. Laniakea &\quad $1.00$ &\quad $2.40$ &\quad $0.50$\\ 4. Caelum &\quad $4.00$ &\quad $ 4.30$ &\quad $0.62$ \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \label{table:I} \end{table} \section{Coexistence of Supervoids with Superclusters in the Cosmic Web} \par If we consider a part of the present universe, it contains a network of cosmic voids in which several superclusters and small and large galactic objects are merging with each other. In the general view, the large scales of the universe are in the void-dominated state, while the small but dense local scales, which include the superclusters, are in the matter-dominated state. The coexistence and continuous merging of superclusters on small scales and supervoids on large scales increase the contribution of each of them to the structure of the universe, and the increase in the size of a cosmic void after merging leads to an effective repulsive force on the galaxies situated on its shell \cite{Weygaert:2011, Yusofi:2022hgg}. Under such conditions, it can be assumed that the cosmic fluid in the large-scale overview consists of merging and expanding large bubbles, and that the universe is dominated over time by larger bubbles in the accelerating expansion phase.\\ In the proposed model, dense objects, including galaxies and their clusters and superclusters, are thought of as "drops", and the voids and supervoids between them as "bubbles" \cite{Yusofi_2010}.
In a two-phase inhomogeneous mixture of drops and bubbles, the bubbles absorb other bubbles and disperse the droplets from the center to the periphery, while their own size becomes larger and their density lower than before. Physical simulations of redshifts in terms of different displacements show that local scales become denser over time, while the density of large scales decreases \cite{Weygaert:2011}.\\ \section{Global and Local Behavior of Cosmic Voids} \par In this main part of the paper, we want to show whether a single supervoid can be a good representative of both local and global scales. For this purpose, we first calculate the mass density of a supervoid and show that its magnitude is about one-tenth of the average density of the universe. Then, from the surface tension of a supervoid, we show that the resulting cosmological constant is very close to the cosmological constant measured by Planck 2018 \cite{Planck:2018vyg}. Finally, we will address the important question: is it possible that the \textit{Hubble tension} is due to slight differences in the \textit{surface tension} of the bubbles? \subsection{Mass Density of a Cosmic Void} For a perfectly empty spherical bubble with its total mass accumulated on the shell, the mass density can be calculated from the simple relation below, \begin{equation} \label{hob1} \rho_{\rm i}=\frac{3{M_{\rm {i}}}}{4\pi {\bar{r}_{\rm {v}}^3}}. \end{equation} Here $M_{\rm {i}}$ is the mass of the supercluster and $\bar{r}_{\rm {v}}$ is the average radius of a cosmic void. Taking the values of Table I for the mass and radius of the Laniakea supercluster, we obtain \begin{equation} \label{hob2} \rho_3=1.70 \times 10^{-27} {\rm {kg.m^{-3}}}. \end{equation} This is close to the average mass density of the universe \cite{Cheng:2008grc}, \textit{i.e.,} \begin{equation} \label{hob3} \rho_{0,c}=1.88 \times 10^{-26} {\rm {kg.m^{-3}}}, \end{equation} being about one order of magnitude smaller. It seems that this density deficiency is due to the assumption of a completely empty bubble. The densities of the other cosmic voids are listed in \tablename{ II}. \subsection{Cosmological Constant from the Surface Tension of a Cosmic Void} The internal pressure of a single bubble (drop) is usually greater than its external pressure, and the pressure difference with the outside is given by the Young--Laplace formula \cite{Butt:2003pci,Reichl:2016msp} \begin{equation}\label{hob4} \Delta{P} = \frac{2\gamma}{\bar{r}_v}. \end{equation} Here, $\gamma$ represents the surface tension of the bubble (drop). To calculate the surface tension of a single bubble, we use the following heuristic definition \cite{Yusofi:2019sai,Yusofi:2022hgg}, \begin{equation} \label{hob5} \gamma_{\rm i}\equiv\frac{\rm Energy}{\rm Area}={\frac{M_{\rm i}c^2}{\pi R_{\rm i}^2}}. \end{equation} According to the calculations in the previous section, the average density of the cosmic fluid is very close to the density of a single bubble. In the present void-dominated cosmic fluid we can assume that ($\rho_{\rm \Lambda} \equiv \rho_{\rm {v}}$) and ($\Delta{P} \simeq P_{\rm v}$), with the equation of state \begin{equation} \label{hob6} P_{\rm v} = w{c^2}\rho_{\rm v}. \end{equation} Therefore, combining (\ref{hob4}) and (\ref{hob6}) with $\Lambda_{\rm i}=8\pi G\rho_{\rm \Lambda}/c^2$, we reach the following relation for the cosmological constant caused by bubbles \cite{Yusofi:2022hgg}, \begin{equation} \label{hob7} \Lambda_{\rm i} =\frac{8\pi{G}}{{w{c^4}}}\frac{2\gamma_{\rm i}}{\bar r_{\rm {v}}}.
\end{equation} Substituting the necessary values \cite{Yusofi:2022hgg}, we reach the results in \tablename{ II}.\\ The cosmological constant in the latest Planck data is reported as \cite{Planck:2018vyg}, \begin{equation} \label{hob8} \Lambda_{\rm {obs}}= 1.1056 \times 10^{-52} {\rm {m^{-2}}}. \end{equation} In our model, for the Laniakea supercluster, in which the Milky Way galaxy is located, the cosmological constant is obtained as \cite{Yusofi:2022hgg}, \begin{equation} \label{hob9} \Lambda_3 = 1.2979 \times 10^{-52} {\rm {m^{-2}}}. \end{equation} As we can see in \tablename{ II}, the cosmological constant and the mass density for each of the supervoids are of the same order as the values (\ref{hob8}) and (\ref{hob3}) for the entire universe, and are very close to them. Thus, given the values obtained for $\rho_{\rm i}$ and $\Lambda_{\rm i}$, it seems that a cosmic void can be a good indicator of the global behavior of the universe. \subsection{Is the Hubble Tension from Bubble Tension?} For the slight differences between the values of the cosmological constant for the different supervoids listed in \tablename{ II}, we will calculate the corresponding Hubble constants. We will show that our void-based model reproduces both the locally measured $H_0$ values \cite{Jang:2017dxn,Riess:2020fzl,Pesce:2020xfe} and the value inferred from the cosmic microwave background (CMB) \cite{Aloni:2021eaq}. Assuming the $\Lambda$CDM-based cosmology, the Hubble constant of the late universe (model dependent) is inferred as \cite{Planck:2018vyg}, \begin{equation} \label{hob10} H_{\rm {0_{global}}} = 67.66 \pm 0.42 \quad {\rm km.s^{-1}.Mpc^{-1}}. \end{equation} But from Hubble Space Telescope (HST) observations of 70 long-period Cepheids in the Large Magellanic Cloud, the best local measurement of the Hubble constant has been estimated as \cite{Riess:2019cxk}, \begin{equation} \label{hob11} H_{\rm {0_{local}}} = 74.03 \pm 1.42 \quad {\rm km.s^{-1}.Mpc^{-1}}. \end{equation} Now, we will calculate the Hubble constant in our void-dominant model.\\ The Hubble constant is related to the cosmological constant according to the following relation, \begin{equation} \label{hob12} H_{\rm {i}}^2 =\frac{\Lambda_{\rm {i}}}{3\Omega_{\rm \Lambda}}c^2, \end{equation} so $H_{\rm i} \propto \Lambda_{\rm i}^{\frac{1}{2}}$ and we will have \begin{equation} \label{hob13} H_{\rm {i}} = H_{\rm {0}_{obs}}\left(\frac{\Lambda_{\rm i}}{\Lambda_{\rm {obs}}}\right)^{\frac{1}{2}}.\end{equation} For the observational Hubble constant $ H_{\rm {0}_{obs}}$, we have the two choices (\ref{hob10}) and (\ref{hob11}).\\ If we take $ H_{\rm {0_{obs}}}= 67.66\quad {\rm km.s^{-1}.Mpc^{-1}}$ from the Planck 2018 data \cite{Planck:2018vyg} and consider (\ref{hob8}), equation (\ref{hob13}) gives the following value for the cosmic void surrounded by the Laniakea supercluster, \begin{equation} \label{hob14} H_{\rm {3G}} = 73.31 \quad {\rm km.s^{-1}.Mpc^{-1}}. \end{equation} On the other hand, if we take $ H_{\rm {0_{obs}}}= 74.03\quad {\rm km.s^{-1}.Mpc^{-1}}$, for the cosmic void surrounded by the Virgo supercluster we obtain \begin{equation} \label{hob15} H_{\rm {2L}} = 66.68 \quad {\rm km.s^{-1}.Mpc^{-1}}. \end{equation} The predicted value (\ref{hob15}) is very close to the value obtained in \cite{Kim:2020gai}. The values of the Hubble constant for the other cosmic voids are also listed in \tablename{ II}.
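For reproducibility, the short Python sketch below (our addition) recomputes the surface tensions of Table I from (\ref{hob5}) and the Hubble constants of Table II from (\ref{hob13}). The density column of Table II is also recovered from (\ref{hob1}) under the assumption, inferred here from the tabulated numbers and not stated explicitly in this section, that $\bar{r}_{\rm v}$ coincides with the corresponding $R_{\rm i}$; we do not recompute $\Lambda_{\rm i}$, since (\ref{hob7}) additionally requires values of $w$ and $\bar{r}_{\rm v}$ that are not fixed here.

\begin{verbatim}
import math

c = 2.998e8  # speed of light (m/s)

# Supercluster data (M_i in kg, R_i in m) from Table I.
data = {"Corona":   (0.20e47, 1.50e24),
        "Virgo":    (0.03e47, 0.50e24),
        "Laniakea": (1.00e47, 2.40e24),
        "Caelum":   (4.00e47, 4.30e24)}

for name, (M, R) in data.items():
    gamma = M * c**2 / (math.pi * R**2)   # heuristic tension M c^2 / (pi R^2)
    rho = 3 * M / (4 * math.pi * R**3)    # shell-mass density, taking r_v = R_i
    print(f"{name}: gamma = {gamma:.2e} J/m^2, rho = {rho:.2e} kg/m^3")

# Hubble constants: H_i = H_obs * sqrt(Lambda_i / Lambda_obs)
Lambda_obs = 1.1056e-52   # Planck 2018 value, m^-2
Lambdas = [0.6645e-52, 0.8970e-52, 1.2979e-52, 1.6172e-52]  # Table II
for H_obs in (67.66, 74.03):   # global (Planck) and local (HST) anchors
    print([round(H_obs * (L / Lambda_obs)**0.5, 2) for L in Lambdas])
\end{verbatim}

Running it returns, e.g., $\gamma_3 \simeq 0.50\times 10^{15}\,{\rm J.m^{-2}}$, $\rho_3 \simeq 1.7\times 10^{-27}\,{\rm kg.m^{-3}}$, $H_{\rm 3G}=73.31$ and $H_{\rm 2L}=66.68$ ${\rm km.s^{-1}.Mpc^{-1}}$, matching Tables I and II.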
Given the values (\ref{hob14}) and (\ref{hob15}) obtained in the proposed bubble model, it can be concluded that the Hubble constant values in it are close to the values (\ref{hob11}) and (\ref{hob10}) measured by the local and global groups, respectively. As one can see in \tablename{ II}, the values obtained for the Hubble constant in our model cover the values reported by both local and global measurements. \\ However, given the value (\ref{hob14}) obtained for the Laniakea supercluster, the model results are closer to the local measurement (\ref{hob11}). \begin{table} \caption{Cosmological constant $\Lambda_{\rm i}$, mass density $\rho_{\rm i}$, and global and local Hubble constants ${H}_{\rm {iG}}$ and ${H}_{\rm {iL}}$ for different cosmic voids surrounded by superclusters $i= 1, 2, 3, 4 $.} \begin{center} \begin{tabular}{c c c c c} \hline\noalign{\smallskip} Cosmic&\quad\quad $\rho_{\rm i}$&\quad\quad $\Lambda_{\rm i}$& ${H}_{\rm {iG}}$\quad \quad${H}_{\rm {iL}}$\\ \quad Void&\quad($10^{-26}{\rm {kg.m^{-3}}}$) \quad&\quad $(10^{-52}{\rm {m^{-2}}})$\quad &\quad $({\rm {km.s^{-1}.Mpc^{-1}}})$ \\ \noalign{\smallskip}\hline \noalign{\smallskip}\hline 1. Corona Sc&\quad $0.14$ &\quad $0.6645$ &\quad $52.45$ \quad $57.39$ \\ 2. Virgo Sc&\quad $0.60$ &\quad $0.8970$ &\quad $60.94$ \quad $\textbf{66.68}$ \\ 3. Laniakea Sc &\quad $0.17$ &\quad $1.2979$ &\quad $\textbf{73.31}$ \quad $80.21$ \\ 4. Caelum Sc&\quad $0.12$ &\quad $1.6172$ &\quad $81.83$ \quad $89.53$\\ \noalign{\smallskip}\hline \end{tabular} \end{center} \label{table:II} \end{table} Given the high accuracy of the measurements reported by the local groups on the one hand, and the independence of these data from the model on the other, it seems that the main reason for the $H_0$ tension is related to $\Lambda$ in the standard $\Lambda$CDM model, which is assumed to be \textit{completely constant}. Since, according to our hypothesis, the surface tension values of the supervoids are slightly different, as a consequence we can have different values of $H_0$. Thus, given the values obtained for the Hubble constant $H_{\rm i}$ in \tablename{ II}, it seems that cosmic voids can be a good indicator for studies on both global and local scales.\\ Also, the $H(z)$ values listed in Table 1 on page 5 of \cite{Gomez-Valent:2018hwc}, which have been published in various journals, show that the Hubble parameter decreases as the redshift $z$ decreases. This confirms our model's expectation that over time the surface tension of the cosmic voids becomes smaller. Therefore, we now have the smallest possible value of the Hubble parameter, $H(0)$. \section{Conclusions} We have considered supervoids as expanding spherical bubbles in a void-based cosmic fluid. The total supervoid mass is situated on the shell, and the shell is formed by the disk-shaped superclusters. Then, by heuristically calculating the mass density and cosmological constant of a single supervoid, we have shown that, very interestingly, the values obtained are of the same order as the corresponding values for the entire universe.\\ On the other hand, for the Hubble tension, we have shown that the values obtained from the void-dominant universe hypothesis can represent the values obtained from both local and global data, but are more consistent with the local measurements. As we know, the data reported by global groups such as Planck depend on the $\Lambda$CDM model. Therefore, small changes in the value of $\Lambda$ can greatly affect the results of the measurements.
But on the other hand, the data reported by local groups are independent of any model and have a very high measurement accuracy. Therefore, from our point of view, the main problem in the Hubble tension originates from slight changes in the value of $\Lambda$, which in turn depends on changes in the surface tension and the size of supervoids.\\ Thus, an interesting result of this study is that a cosmic void can be a good candidate to describe the behavior of the universe on both large and local scales. As an important consequence of this research, by examining the static and dynamic behavior of cosmic voids more seriously, both theoretically and observationally, plausible solutions to the important challenges of physical cosmology on local and global scales can be offered. In future work, we will address important issues such as dark matter and the problem of vacuum energy within the framework of this hypothesis.
\section{Introduction} The relation between the structure of a group and the structure of its lattice of subgroups constitutes an important domain of research in group theory. The topic has enjoyed rapid development starting with the first half of the 20th century. Many classes of groups determined by different properties of partially ordered subsets of their subgroups (especially lattices of subgroups) have been identified. We refer to Suzuki's book \cite{12}, Schmidt's book \cite{11} or the more recent book \cite{14} by the author for more information about this theory. It is a usual technique to consider an equivalence relation $\sim$ on an algebraic structure and then to study the factor set with respect to $\sim$, partially ordered by certain ordering relations. In the case of subgroup lattices, one of the most significant examples is the poset $C(G)$ of conjugacy classes of subgroups of a group $G$ (see \cite{2,3,4} and \cite{9,10}). The current paper deals with the more general equivalence relation on the subgroup lattice of $G$ induced by isomorphism. It leads to the set Iso($G$) consisting of all equivalence classes of isomorphic subgroups of $G$, which becomes a poset under a suitable ordering relation. Its detailed study is the main goal of our paper. We investigate the finite groups $G$ for which the corresponding poset Iso($G$) is a lattice and, in particular, a chain. We also give some information about the finite groups $G_1$ and $G_2$ for which the posets Iso($G_1$) and Iso($G_2$) are isomorphic. In the following, for a finite group $G$ we will denote by $L(G)$ the subgroup lattice of $G$. Recall that $L(G)$ is a complete bounded lattice with respect to set inclusion, having initial element the trivial subgroup 1 and final element $G$, and its binary operations $\wedge, \vee$ are defined by $$H\wedge K=H\cap K,\ H\vee K=\langle H\cup K\rangle, \mbox{ for all } H,K\in L(G).$$Two groups $G_1$ and $G_2$ will be called \textit{L-isomorphic} if their subgroup lattices $L(G_1)$ and $L(G_2)$ are isomorphic. We also recall that an important modular sublattice of $L(G)$ is constituted by the normal subgroup lattice of $G$, usually denoted by $N(G)$. The paper is organized as follows. In Section 2 we present some basic properties and results on the poset Iso($G$) associated to a finite group $G$. A complete description of this poset is given for several remarkable groups. Section 3 deals with the finite groups having the same poset of isomorphic subgroups. In the final section some conclusions and further research directions are indicated. Most of our notation is standard and will usually not be repeated here. Basic definitions and results on lattices and groups can be found in \cite{1,5} and \cite{6,7,13}, respectively. For subgroup lattice concepts we refer the reader to \cite{11,12,14}. \section{The poset Iso($G$)} Let $G$ be a finite group and ${\rm Iso}(G)$ be the set of equivalence classes of subgroups of $G$ with respect to the isomorphism relation, that is $${\rm Iso}(G)=\{[H] \mid H\in L(G)\}, \mbox{ where } [H]=\{K\in L(G) \mid K\cong H\}.$$Then it is easy to see that ${\rm Iso}(G)$ can be partially ordered by defining $$[H_1]\leq [H_2] \mbox{ if and only if } K_1\subseteq K_2 \mbox{ for some } K_1\in [H_1] \mbox{ and } K_2\in [H_2].$$ We remark that $\leq\hspace{0,5mm}$ is weaker than the usual ordering relation on $C(G)$ and that the isomorphism relation is not a congruence on $L(G)$, even though in many cases the poset $({\rm Iso}(G),\leq)$ becomes a lattice.
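For concreteness, the following small Python sketch (our illustration, not part of the original exposition) computes ${\rm Iso}(S_3)$ by brute force: it enumerates all subgroups of $S_3$, groups them by isomorphism type, and prints the strict order relations; for subgroups of $S_3$, the multiset of element orders is a sufficient isomorphism invariant.

\begin{verbatim}
from itertools import permutations

# Elements of S3 as permutation tuples; composition (p*q)(i) = p[q[i]].
elems = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))

# Subgroups = subsets containing the identity and closed under composition.
subgroups = []
for mask in range(1 << len(elems)):
    S = {elems[i] for i in range(len(elems)) if mask >> i & 1}
    if (0, 1, 2) in S and all(mul(p, q) in S for p in S for q in S):
        subgroups.append(frozenset(S))

def order(p):                      # order of a permutation
    k, q = 1, p
    while q != (0, 1, 2):
        q, k = mul(p, q), k + 1
    return k

classes = {}                       # multiset of element orders -> class
for H in subgroups:
    classes.setdefault(tuple(sorted(map(order, H))), []).append(H)

# [H1] <= [H2] iff K1 is contained in K2 for some representatives.
leq = lambda c1, c2: any(K1 <= K2 for K1 in classes[c1] for K2 in classes[c2])
print(len(subgroups), "subgroups,", len(classes), "classes")
for c1 in classes:
    for c2 in classes:
        if c1 != c2 and leq(c1, c2):
            print(c1, "<", c2)
\end{verbatim}

The output lists 6 subgroups in 4 classes, namely $[1],[\mathbb{Z}_2],[\mathbb{Z}_3],[S_3]$, with $[\mathbb{Z}_2]$ and $[\mathbb{Z}_3]$ incomparable, in accordance with Example 3 below.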
We must also mention that the subposet of ${\rm Iso}(G)$ determined by all classes with a unique element is in fact the lattice ${\rm Sol}(G)$ of solitary subgroups of $G$, introduced and studied in \cite{8}. \bk First of all, we will look at the poset ${\rm Iso}(G)$ associated to some finite groups of small orders. \bk\n{\bf Examples.} \begin{itemize} \item[\rm 1.] ${\rm Iso}(\mathbb{Z}_p)$ is a chain of length 1, for any prime $p$. \item[\rm 2.] ${\rm Iso}(\mathbb{Z}_p\times\mathbb{Z}_p)\cong{\rm Iso}(\mathbb{Z}_{p^2})$ is a chain of length 2, for any prime $p$. \item[\rm 3.] ${\rm Iso}(\mathbb{Z}_6)\cong{\rm Iso}(S_3)\cong{\rm Iso}(D_{10})$ is a direct product of two chains of length 1. \item[\rm 4.] ${\rm Iso}(\mathbb{Z}_2^3)\cong{\rm Iso}(\mathbb{Z}_8)\cong{\rm Iso}(Q_8)$ is a chain of length 3. \item[\rm 5.] ${\rm Iso}(\mathbb{Z}_2\times\mathbb{Z}_4)\cong{\rm Iso}(D_8)$ is the lattice ${\rm C}_5$ (see page 5 of \cite{11}). \item[\rm 6.] ${\rm Iso}(A_4)$ is the pentagon lattice ${\rm E}_5$ (see page 5 of \cite{11}). \end{itemize} \smallskip It is well-known that a finite nilpotent group $G$ can be written as the direct product of its Sylow subgroups $$G=\xmare{i=1}{k}G_i,$$ where $|G_i|=p_i^{\alpha_i}$, for all $i=1,2,...,k$. Since the subgroups of a direct product of groups having coprime orders are also direct products (see the Corollary of (4.19), \cite{13}, I), one obtains that $$L(G)\cong\xmare{i=1}{k}L(G_i).\0(*)$$This lattice direct decomposition is often used in order to reduce many problems on $L(G)$ to the subgroup lattices of finite $p$-groups. We easily observe that $(*)$ leads to $${\rm Iso}(G)\cong\xmare{i=1}{k}{\rm Iso}(G_i),\0(**)$$that is, the decomposability of $L(G)$ implies the decomposability of ${\rm Iso}(G)$. Moreover, all the posets ${\rm Iso}(G_i)$, $i=1,2,...,k$, are indecomposable, since each group $G_i$ possesses a unique class of isomorphic subgroups of order $p_i$. Example 3 above shows that the converse implication fails: ${\rm Iso}(S_3)$ is decomposable, in contrast with $L(S_3)$. \bigskip In the following we shall focus on describing the poset ${\rm Iso}(G)$ for several important classes of finite groups $G$. We start with abelian groups, for which we already know that the study reduces to abelian $p$-groups. \bk\n{\bf Proposition 2.1.} {\it Let $G=\xmare{i=1}{k}\mathbb{Z}_{p^{\alpha_i}}$ be a finite abelian $p$-group. Then $\hspace{0,5mm}{\rm Iso}(G)$ is an indecomposable distributive lattice and $$|{\rm Iso}(G)|\leq\dd\sum_{i=0}^{\alpha}\pi(i),$$where $\alpha=\alpha_1+\alpha_2+\cdots+\alpha_k$ and $\pi(i)$ denotes the number of partitions of $\,i$, for all $i=0,1,...,\alpha$.} \bigskip \n{\bf Proof.} Two subgroups of an arbitrary order $p^m$ of $G$ are isomorphic if and only if their types determine the same partition $(m_1,m_2,...,m_k)$ of $m$ ($0\leq m_1\leq m_2\leq\cdots \leq m_k$). In this way, we can identify every class of isomorphic subgroups of $G$ with an element of the direct product $$C=\xmare{i=1}{k}C_{\alpha_i},$$where $C_{\alpha_i}$ is the chain $0<1<2<\cdots<\alpha_i$, for all $i=1,2,...,k$.
Clearly, given $[H],[K]\in {\rm Iso}(G)$ ($|H|=p^m$, $|K|=p^{m'}$) that correspond to the partitions $(m_1,m_2,...,m_k)$ and $(m'_1,m'_2,...,m'_k)$ of $m$ and $m'$, respectively, we have $${\rm inf}\{[H],[K]\}=[S] \hspace{1mm}\mbox{ and }\hspace{1mm} {\rm sup}\{[H],[K]\}=[T],$$where $[S]$ and $[T]$ are determined by the $k$-tuples $({\rm min}\{m_1,m'_1\},\hspace{1mm} {\rm min}\{m_2,m'_2\},...,\hspace{1mm}{\rm min}\{m_k,m'_k\})$ and $({\rm max}\{m_1,m'_1\},\hspace{1mm}{\rm max}\{m_2,m'_2\},...,\hspace{1mm}{\rm max}\{m_k,m'_k\})$. Hence ${\rm Iso}(G)$ forms a lattice whose meet and join are computed componentwise, so that it embeds as a sublattice into the distributive lattice $C$ and is therefore itself distributive. This completes the proof. \hfill\rule{1,5mm}{1,5mm} \bigskip Another remarkable class of finite groups for which a similar conclusion holds is constituted by the so-called {\rm ZM}-groups, that is, the finite groups all of whose Sylow subgroups are cyclic. It is well-known that two subgroups of such a group $G$ are conjugate if and only if they have the same order. Moreover, $C(G)$ is isomorphic to the lattice $L_n$ of all divisors of $n=|G|$ (see Theorem A of \cite{4}). Thus we infer the following result. \bk\n{\bf Proposition 2.2.} {\it Let $G$ be a {\rm ZM}-group of order $n$. Then the following lattice isomorphisms hold $${\rm Iso}(G)\cong C(G)\cong L_n.$$In particular, ${\rm Iso}(G)$ is a distributive lattice.} \bk\n{\bf Remarks.} \begin{itemize} \item[\rm 1.] The conclusion of Proposition 2.2 is valid for finite cyclic groups, which are in fact the simplest {\rm ZM}-groups. \item[\rm 2.] The lattice isomorphism from ${\rm Iso}(G)$ to $L_n$ in the above proposition is given by $[H]\mapsto |H|$, for all $[H]\in {\rm Iso}(G)$. For an arbitrary finite group $G$ of order $n$, this map is only isotone (that is, $[H]\leq [K]$ implies that $|H|$ divides $|K|$). \item[\rm 3.] There are finite groups $G$ such that ${\rm Iso}(G)$ is a lattice, but not a distributive or even a modular lattice. For example, in ${\rm Iso}(S_{2^n})$, $n\geq 4$, the classes determined by $S_{2^n}$, the three maximal subgroups of $S_{2^n}$ (which are isomorphic to $Q_{2^{n-1}}$, $D_{2^{n-1}}$ and $\mathbb{Z}_{2^{n-1}}$, respectively) and $\Phi(S_{2^n})$ form a diamond, and so ${\rm Iso}(S_{2^n})$ is not distributive; on the other hand, we have already seen that ${\rm Iso}(A_4)$ is the pentagon lattice and therefore it is not modular. \end{itemize} \smallskip Given a finite group $G$, some lattice-theoretical properties can be transferred from ${\rm Iso}(G)$ to $L(G)$. One of them is complementation. \bk\n{\bf Proposition 2.3.} {\it Let $G$ be a finite group. If $\hspace{0,5mm}{\rm Iso}(G)$ is a complemented lattice, then $L(G)$ is also a complemented lattice.} \bigskip \n{\bf Proof.} Suppose that ${\rm Iso}(G)$ is a complemented lattice and denote by $\wedge'$ and $\vee\,'$ its binary operations. Then for every $[H]\in {\rm Iso}(G)$ there is $[K]\in {\rm Iso}(G)$ such that $[H]\wedge' [K]=[1]$ and $[H]\vee\hspace{0,1mm}' [K]=[G]$. Since $H\wedge K$ is contained in both $H$ and $K$, we infer that $[H\wedge K]\leq [H], [K]$ and therefore $[H\wedge K]\leq [H]\wedge' [K]$. This implies $H\wedge K=1$. Similarly, one obtains $H\vee K=G$. Hence $K$ is a complement of $H$ in $L(G)$. \hfill\rule{1,5mm}{1,5mm} \bk\n{\bf Remark.} The converse of Proposition 2.3 is in general not true.
For example, $H=\langle y\rangle$ has a complement in $L(D_8)$, namely $K=\langle x^2, xy\rangle$, but $[K]$ is not a complement of $[H]$ in ${\rm Iso}(D_8)$ (more precisely, $[H]$ has no complement in ${\rm Iso}(D_8)$). \bigskip Next, we will study the poset of classes of isomorphic subgroups for finite dihedral groups. Recall that the dihedral group $D_{2n}$ $(n\ge2)$ is the symmetry group of a regular polygon with $n$ sides and it has order $2n$. The most convenient abstract description of $D_{2n}$ is obtained by using its generators: a rotation $x$ of order $n$ and a reflection $y$ of order $2$. Under these notations, we have $$D_{2n}=\langle x,y\mid x^n=y^2=1,\ yxy=x^{-1}\rangle.$$It is well-known that for every divisor $r$ of $n$, $D_{2n}$ possesses a subgroup isomorphic to $\Z_r$, namely $H^r_0=\langle x^{\frac nr}\rangle$, and $\frac nr$ subgroups isomorphic to $D_{2r}$, namely $H^r_i=\langle x^{\frac nr},x^{i-1}y\rangle,$ $i=1,2,...,\frac nr\hspace{1mm}.$ Then $$|L(D_{2n})|=\tau(n)+\sigma(n),$$where $\tau(n)$ and $\sigma(n)$ are the number and the sum of all divisors of $n$, respectively. We easily infer that $$|{\rm Iso}(D_{2n})|=\left\{\barr{lll} 2\tau(n), \mbox{ for } n \mbox{ odd}\\ &&\\ 2\tau(n)-1, \mbox{ for } n \mbox{ even},\earr\right.$$since precisely for $n$ even the class of the rotation subgroup of order 2 coincides with the class of the reflection subgroups $H^1_i\cong D_2\cong\mathbb{Z}_2$. For instance, $|{\rm Iso}(D_8)|=2\tau(4)-1=5$ and $|{\rm Iso}(D_{10})|=2\tau(5)=4$, in agreement with Examples 5 and 3 above. \smallskip We are now able to determine the positive integers $n$ such that ${\rm Iso}(D_{2n})$ forms a lattice. \bk\n{\bf Proposition 2.4.} {\it The poset $\hspace{0,5mm}{\rm Iso}(D_{2n})$ is a lattice if and only if either $n$ is odd or $n=2^k$ for some $k\in\mathbb{N}$.} \bigskip \n{\bf Proof.} Suppose first that $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ with $p_i>2$ prime, $i=1,2,...,k$, or $n=2^k$ for some $k\in\mathbb{N}$. Then, by using the above description of the subgroups of $D_{2n}$, a standard induction argument on $k$ easily shows that ${\rm Iso}(D_{2n})$ forms a lattice. Conversely, assume that ${\rm Iso}(D_{2n})$ is a lattice, but $n$ is of the form $n=2^{\alpha}\beta$, where $\alpha\geq 1$ and $\beta \neq 1$ is odd. Then $D_{2n}$ possesses the subgroups $H=\langle x^{2^{\alpha}},y\rangle\hspace{0,5mm}\cong D_{2\beta}$ and $K=\langle x^{2^{\alpha-1}}\rangle\hspace{0,5mm}\cong\mathbb{Z}_{2\beta}$. Clearly, both $H$ and $K$ contain cyclic subgroups of orders 2 and $\beta$, which proves that $[C_2]\leq [H], [K]$ and $[C_{\beta}]\leq [H], [K]$ (here $C_2$ and $C_{\beta}$ are arbitrary cyclic subgroups of $D_{2n}$ of orders 2 and $\beta$, respectively). It follows immediately that ${\rm inf}\{[H],[K]\}$ does not exist, a contradiction. \hfill\rule{1,5mm}{1,5mm} \bk\n{\bf Remarks.} \begin{itemize} \item[\rm 1.] 12 is the smallest positive integer $n$ for which there is a finite group $G$ of order $n$ such that ${\rm Iso}(G)$ is not a lattice. \item[\rm 2.] Another example of a group with the above property is the direct product $D_8\times\mathbb{Z}_4$. In this case the classes determined by the subgroups $H=\langle x^2\rangle\times\hspace{1mm}\mathbb{Z}_4$ and $K=D_8$ do not possess an infimum either. \end{itemize} \smallskip Unfortunately, we have not succeeded in describing exhaustively the class of finite groups $G$ for which the poset ${\rm Iso}(G)$ is a (distributive/modular) lattice. We remark that such a group $G$ satisfies the following interesting property: "for every two distinct prime divisors $p$ and $q$ of $|G|$, either all subgroups of order $pq$ in $G$ are cyclic or all subgroups of order $pq$ in $G$ are non-abelian" (indeed, if $G$ contained both a cyclic and a non-abelian subgroup of order $pq$, then $[\mathbb{Z}_p]$ and $[\mathbb{Z}_q]$ would be incomparable maximal lower bounds of these two classes, so their infimum would not exist).
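The count of $|{\rm Iso}(D_{2n})|$ admits a quick mechanical check (our addition; it relies only on the subgroup classification of $D_{2n}$ recalled above and on the fact that the types $\mathbb{Z}_r$ and $D_{2r}$, $r\geq 2$, are pairwise non-isomorphic):

\begin{verbatim}
def iso_count(n):
    # Subgroup types of D_2n: <x^(n/r)> ~ Z_r and H^r_i ~ D_2r, for r | n.
    divs = [r for r in range(1, n + 1) if n % r == 0]
    types = {('C', r) for r in divs}                 # cyclic rotation subgroups
    for r in divs:
        types.add(('C', 2) if r == 1 else ('D', r))  # D_2 ~ Z_2; D_4 is Klein
    return len(types)

tau = lambda n: sum(1 for r in range(1, n + 1) if n % r == 0)
for n in range(2, 13):
    assert iso_count(n) == 2 * tau(n) - (n % 2 == 0)
\end{verbatim}

For $n=2,...,12$ the assertion holds; e.g., $n=5$ gives $2\tau(5)=4$ classes, matching ${\rm Iso}(D_{10})\cong{\rm Iso}(\mathbb{Z}_6)$.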
\bigskip We end this section by characterizing all finite groups whose posets of classes of isomorphic subgroups are chains (i.e., distributive lattices of a very particular type). \bk\n{\bf Theorem 2.5.} {\it Let $G$ be a finite group. Then $\hspace{0,5mm}{\rm Iso}(G)$ is a chain if and only if $G$ is either a cyclic $p$-group, an elementary abelian $p$-group, a non-abelian $p$-group of order $p^3$ and exponent $p$ or a quaternion group of order\, {\rm 8}.} \bigskip \n{\bf Proof.} It is clear that for a cyclic $p$-group, an elementary abelian $p$-group, a non-abelian $p$-group of order $p^3$ and exponent $p$ or a quaternion group of order 8, the poset of classes of isomorphic subgroups forms a chain. Conversely, suppose that ${\rm Iso}(G)$ is a chain. Then $G$ is a $p$-group. Put $|G|=p^n$ and take a minimal normal subgroup $H$ of $G$. If $H$ is the unique subgroup of order $p$\, of $G$, then (4.4) of \cite{13}, II, shows that $G$ is either cyclic or a generalized quaternion group $Q_{2^n}$, $n\geq 3$. It is well-known that the isomorphism classes of the maximal subgroups of $Q_{2^n}$ are $Q_{2^{n-1}}$ and $\mathbb{Z}_{2^{n-1}}$, and therefore ${\rm Iso}(Q_{2^n})$ is not a chain for $n\geq 4$. Hence this case occurs only for $n=3$, that is, for the quaternion group $Q_8$. If $G$ possesses a minimal subgroup $K$ with $K\neq H$, then $HK$ is elementary abelian of order $p^2$. Since two non-isomorphic subgroups of the same order are incomparable in ${\rm Iso}(G)$, one obtains that there is no cyclic subgroup of order $p^2$ in $G$; in other words, we have $${\rm exp}(G)=p.$$ Obviously, if $G$ is abelian, then it is an elementary abelian $p$-group. In the following we will assume that $G$ is not abelian. Then $p$ is odd and $G$ contains a non-abelian subgroup of order $p^3$, say $N$ (more precisely, $N$ is isomorphic to the group $M(p^3)$ described in (4.13) of \cite{13}, II). Let $A$ be an abelian normal subgroup of maximal order of $G$ and set $|A|=p^a$. If $a\geq 3$, we infer that $A$ has a subgroup $A_1$ of order $p^3$. It follows that the classes $[N]$ and $[A_1]$ are not comparable, a contradiction. In this way we have $a\in\{1,2\}$. By Corollary 2, \cite{13}, I, page 94, we know that $2n\leq a(a+1)$, which implies $n\leq 3$. This leads to $n=3$, and hence $G=N$ is a non-abelian $p$-group of order $p^3$ and exponent $p$, which completes the proof. \hfill\rule{1,5mm}{1,5mm} \bigskip In particular, Theorem 2.5 shows that the only finite non-abelian groups $G$ with ${\rm Iso}(G)$ fully ordered are the quaternion group $Q_8$ and the non-abelian groups of order $p^3$ and exponent $p$, one for each odd prime $p$. \section{Finite groups with the same poset\\ of classes of isomorphic subgroups} In this section we study when the poset/lattice isomorphism ${\rm Iso}(G_1)\cong {\rm Iso}(G_2)$ holds for two finite groups $G_1$ and $G_2$. Obviously, a sufficient condition for this isomorphism is $G_1\cong G_2$, but it is not necessary, as the examples in Section 2 show. We also remark that the weaker condition $L(G_1)\cong L(G_2)$ does not imply that ${\rm Iso}(G_1)\cong {\rm Iso}(G_2)$ (for example, take $G_1=S_3$ and $G_2=\mathbb{Z}_3\times\mathbb{Z}_3$), and the same can be said about the converse implication. \bigskip We start with the following easy but important lemma. \bk\n{\bf Lemma 3.1.} {\it Let $G_1$ be a finite $p$-group of order $p^n$.
If $G_2$ is a finite group such that ${\rm Iso}(G_1)\cong {\rm Iso}(G_2)$, then $G_2$ is a $q$-group of order $q^n$, for some prime $q$.} \bigskip \n{\bf Proof.} The condition $|G_1|=p^n$ implies that ${\rm Iso}(G_1)$ possesses a unique atom, namely the class determined by the subgroups of order $p$. Then ${\rm Iso}(G_2)$ satisfies a similar property, and so $G_2$ is a $q$-group for some prime $q$. On the other hand, we easily infer that all maximal chains of ${\rm Iso}(G_1)$ are of length $n$. Since a poset isomorphism preserves the length of such a chain, one obtains that $|G_2|=q^n$, as desired. \hfill\rule{1,5mm}{1,5mm} \bigskip The above lemma can be extended to finite groups of arbitrary orders in the following manner. \bk\n{\bf Theorem 3.2.} {\it Let $G_1$ and $G_2$ be two finite groups such that ${\rm Iso}(G_1)\cong {\rm Iso}(G_2)$. If\, $|G_1|=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$, where $p_i$, $i=1,2,...,k$, are distinct primes, then we have $|G_2|=q_1^{\alpha_1}q_2^{\alpha_2}\cdots q_k^{\alpha_k}$ for some distinct primes $q_1,q_2,...,q_k$.} \bigskip \n{\bf Proof.} Let $f:{\rm Iso}(G_1)\longrightarrow {\rm Iso}(G_2)$ be a poset isomorphism, let $i\in\{1,2,...,k\}$ and let $S_i$ be a Sylow $p_i$-subgroup of $G_1$. If $f([S_i])=[S_i']$, then, by Lemma 3.1, we have $|S_i'|=q_i^{\alpha_i}$ for some prime $q_i$. Moreover, it is easy to see that $S_i'$ is a Sylow subgroup of $G_2$. Hence $|G_2|$ is of the form $q_1^{\alpha_1}q_2^{\alpha_2}\cdots q_k^{\alpha_k}$ with $q_1,q_2,...,q_k$ distinct primes, completing the proof. \hfill\rule{1,5mm}{1,5mm} \bigskip Proposition 2.1 and Theorem 3.2 lead to the following immediate characterization of the poset isomorphism ${\rm Iso}(G_1)\cong {\rm Iso}(G_2)$ for two finite abelian groups $G_1$ and $G_2$. \bk\n{\bf Corollary 3.3.} {\it Let $G_1\hspace{-1mm}$ and $G_2$ be two finite abelian groups of orders $p_1^{\alpha_1}\hspace{-1mm}p_2^{\alpha_2}\hspace{-1mm}\cdots\hspace{-1mm} p_k^{\alpha_k}$ and $q_1^{\beta_1}q_2^{\beta_2}\cdots q_r^{\beta_r}$, respectively. Then $\hspace{0,5mm}{\rm Iso}(G_1)\cong{\rm Iso}(G_2)$ if and only if $k=r$ and there is a permutation $\sigma$ of $\{1,2,...,k\}$ such that $\beta_i=\alpha_{\sigma(i)}$ and $\hspace{0,5mm}{\rm Iso}(S_{q_i}')\cong {\rm Iso}(S_{p_{\sigma(i)}})$, where $S_{q_i}'$ is the Sylow $q_i$-subgroup of $G_2$ and $S_{p_{\sigma(i)}}$ is the Sylow $p_{\sigma(i)}$-subgroup of $G_1$, for all $i=1,2,...,k$.} \bk\n{\bf Example.} By using Corollary 3.3, we easily infer that $${\rm Iso}(\mathbb{Z}_2\times\mathbb{Z}_6\times\mathbb{Z}_{18})\cong {\rm Iso}(\mathbb{Z}_7\times\mathbb{Z}_{6125}).$$Indeed, since $6125=5^3\cdot 7^2$, the Sylow subgroups of the two groups are $\mathbb{Z}_2^3$ and $\mathbb{Z}_3\times\mathbb{Z}_9$, respectively $\mathbb{Z}_{5^3}$ and $\mathbb{Z}_7\times\mathbb{Z}_{7^2}$; both ${\rm Iso}(\mathbb{Z}_2^3)$ and ${\rm Iso}(\mathbb{Z}_{5^3})$ are chains of length 3, while the remaining two Sylow subgroups have the same type $(1,2)$. Lemma 3.1 shows that the class of finite $p$-groups is preserved by isomorphisms between the posets of classes of isomorphic subgroups. Other important classes of finite groups satisfying the same property are solvable groups, {\rm CLT}-groups and supersolvable groups, respectively. This is due to the fact that if $f:{\rm Iso}(G_1)\longrightarrow {\rm Iso}(G_2)$ is a poset isomorphism and $G_1$ is of one of the above three types, then the strong connection between the orders of $G_1$ and $G_2$ assures the existence of Hall subgroups, of subgroups of any order, or the validity of the Jordan--Dedekind chain condition for $G_2$.
\bk\n{\bf Corollary 3.4.} {\it The classes of finite solvable groups, {\rm CLT}-groups and supersolvable groups are preserved by isomorphisms between their posets of classes of isomorphic subgroups.} \bigskip We end this section with some results related to the uniqueness of a finite group with a given poset of classes of isomorphic subgroups. \bk\n{\bf Theorem 3.5.} {\it Let $n\geq 2$ be an integer which is not square-free. Then there are at least two non-isomorphic groups $G_1$ and $G_2$ of order $n$ such that ${\rm Iso}(G_1)\cong {\rm Iso}(G_2)$.} \bigskip \n{\bf Proof.} Let $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ be the decomposition of $n$ as a product of prime factors. We will proceed by induction on $k$. Suppose first that $k=1$, that is, $n=p_1^{\alpha_1}$ with $\alpha_1\geq 2$. For $\alpha_1=2$ we take $G_1=\mathbb{Z}_{p_1^2}$ and $G_2=\mathbb{Z}_{p_1}\times\mathbb{Z}_{p_1}$. For $\alpha_1\geq 3$ we take $G_1=M(p_1^{\alpha_1})$ (see Theorem 4.1 of \cite{13}, II) and $G_2=\mathbb{Z}_{p_1^{\alpha_1-1}}\times\mathbb{Z}_{p_1}$ if $p_1\neq 2$, respectively $G_1=D_{p_1^{\alpha_1}}$ and $G_2=\mathbb{Z}_{p_1^{\alpha_1-1}}\times\mathbb{Z}_{p_1}$ if $p_1=2$. Assume now that $k\geq 2$. By the inductive hypothesis, we can choose two non-isomorphic groups $H_1$ and $H_2$ of order $p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_{k-1}^{\alpha_{k-1}}$ such that ${\rm Iso}(H_1)\cong {\rm Iso}(H_2)$. Then it is a simple exercise to see that the groups $G_1=H_1\times\mathbb{Z}_{p_k^{\alpha_k}}$ and $G_2=H_2\times\mathbb{Z}_{p_k^{\alpha_k}}$ satisfy the desired conditions, completing the proof. \hfill\rule{1,5mm}{1,5mm} \bigskip Inspired by Theorem 3.5, we came up with the following conjecture, which we have verified for several finite groups of small orders. \bk\n{\bf Conjecture.} {\it For every non-trivial finite group $G_1$ whose order is not square-free there exists a finite group $G_2$ such that $|G_1|=|G_2|$, ${\rm Iso}(G_1)\cong {\rm Iso}(G_2)$ and $G_1\ncong G_2$.} \bk\n{\bf Remark.} The above conjecture says nothing else than that the implication $$|G_1|=|G_2|\mbox{ and } {\rm Iso}(G_1)\cong {\rm Iso}(G_2)\hspace{1mm}\Longrightarrow\hspace{1mm} G_1\cong G_2$$fails in all cases, except when $|G_1|=|G_2|$ is a square-free number. \section{Conclusions and further research} All our previous results show that the poset consisting of all classes of isomorphic subgroups of a (finite) group constitutes a significant aspect of (finite) group theory. Clearly, the study started in this paper can successfully be extended to other classes of groups. It can also be generalized by studying the posets of isomorphic substructures of other algebraic structures (rings, modules, algebras, and so on). This will surely be the subject of further research. \bigskip Finally, we mention several open problems concerning this topic. \bigskip \noindent{\bf Problem 4.1.} Determine the finite groups $G$ for which the poset Iso($G$) is a lattice and study the properties of this lattice. \bigskip \noindent{\bf Problem 4.2.} What can be said about two \textit{arbitrary} finite groups $G_1$ and $G_2$ satisfying ${\rm Iso}(G_1)\cong {\rm Iso}(G_2)$? \bigskip \noindent{\bf Problem 4.3.} Given two finite groups $G_1$ and $G_2$, study the isomorphisms between the posets/lattices ${\rm Iso}(G_1)$ and ${\rm Iso}(G_2)$ induced by the isomorphisms or by the $L$-isomorphisms between $G_1$ and $G_2$.
\bigskip \noindent{\bf Problem 4.4.} The most natural generalization of the poset Iso($G$) associated to a finite group $G$ is obtained by considering $L$-isomorphisms instead of group isomorphisms in its definition: $${\rm Iso}'(G)=\{[H]' \mid H\in L(G)\}, \mbox{ where } [H]'=\{K\in L(G) \mid L(K)\cong L(H)\}.$$Investigate this new poset ${\rm Iso}'(G)$ with respect to the same ordering relation as for Iso($G$). \bigskip \noindent{\bf Problem 4.5.} Given a finite group $G$, study the posets of classes of subgroups with respect to other equivalence relations on $L(G)$ (or on other important subposets of $L(G)$). For example: \begin{itemize} \item[1.] $H\sim_1 K$ if and only if $|H|=|K|$\,; \item[2.] $H\sim_2 K$ if and only if there is $f\in {\rm Aut}(G)$ such that $f(H)=K$; \item[3.] $H\sim_3 K$ if and only if $\pi_e(H)=\pi_e(K)$ (that is, $H$ and $K$ have the same set of element orders). \end{itemize} \bigskip \noindent{\bf Problem 4.6.} The concept of \textit{solitary quotient} of a finite group has been defined in \cite{15} as the dual of the concept of \textit{solitary subgroup}. Following the same technique, we can construct a "dual" of the set Iso($G$), namely $${\rm QIso}(G)=\{[H] \mid H\in N(G)\}, \mbox{ where } [H]=\{K\in N(G) \mid G/K\cong G/H\}.$$Endow this set with a suitable ordering relation and study similar problems. \bigskip \bigskip\noindent{\bf Acknowledgements.} The author is grateful to the reviewer for the remarks which improved the previous version of the paper.
\section{Introduction}\label{sec:intro} \subsection{Background} A fundamental problem in robotics is to find the minimum-time path from a start pose to a goal pose while considering several constraints on vehicles, such as bounded curvature~\cite{DM12}\cite{FS04}, bounded velocity~\cite{BM02}\cite{LK08} and bounded acceleration~\cite{RP94}\cite{bestaoui1989line}. In particular, bounded curvature implies that the vehicle's turning is subject to a non-zero minimum turning radius corresponding to its speed and maximum turn rate. Dubins~\cite{D57}\cite{shkel2001classification} used a geometrical approach to show that, in the absence of obstacles, the shortest path for a curvature-constrained vehicle between a pair of poses must be one of the following six path types (also known as the Dubins curves): $LSL$, $RSR$, $LSR$, $RSL$, $LRL$ and $RLR$, where $L$ ($R$) refers to a left (right) turn with the maximum curvature, and $S$ indicates a straight line segment. Since each path type is composed of three segments, it is uniquely determined by three path parameters, which describe the angles of the circular arcs and the length of the straight line segment. Recently, the authors proposed the T$^\star$ algorithm~\cite{SGW19}, which extended the Dubins approach to variable-speed vehicles in obstacle-rich environments for time-optimal risk-aware motion planning. However, when environmental currents (e.g., wind or ocean currents) are present, the vehicle trajectory can be significantly distorted~\cite{MG19}, so that the minimum-time trajectory differs from the minimum-distance trajectory. \begin{figure}[t] \centering \subfloat[$2\pi$-arc paths in the inertial frame (IF) and the current frame (CF).]{ \includegraphics[width=0.99\columnwidth]{frontpage_fig1-eps-converted-to.pdf}\label{fig:frontpage_fig1}} \\ \subfloat[$4\pi$-arc paths in the inertial frame (IF) and the current frame (CF).]{ \includegraphics[width=.99\columnwidth]{frontpage_fig2-eps-converted-to.pdf}\label{fig:frontpage_fig2}} \vspace{-1pt} \caption{The minimum-time $2\pi$-arc paths vs. $4\pi$-arc paths. The current vector $(-0.5, 0)$, the start pose $(0,0,0)$ and the goal pose $(-2.3, 2.8, \pi/2)$.}\label{fig:frontpage_fig} \vspace{-9pt} \end{figure} Along this line, the existing methods to compute the minimum-time trajectory for Dubins vehicles in the presence of environmental currents can be categorized into two types: (1) solutions in the inertial frame (IF)~\cite{TW09} and (2) solutions in the current frame (CF)~\cite{MSH05}\cite{BT13}. The current frame is the inertial frame that moves at the speed and direction of the current. Fig.~\ref{fig:frontpage_fig1} shows the minimum-time Dubins path in both the IF and the CF. Due to the effect of the current, the optimal Dubins path in the CF results in a distorted trochoidal path in the IF; therefore, the solutions in the IF have complex expressions~\cite{TW09}. A major advantage of using the CF is that the effect of the current on the vehicle trajectory is completely encompassed by the motion of the reference frame; hence the path planning problem can be simplified to a moving-target interception problem using Dubins paths~\cite{MSH05}\cite{BT13}\cite{meyer2015dubins}\cite{ding2019curvature}. While the details are discussed later, Fig.~\ref{fig:frontpage_fig2} shows the optimal paths obtained by our method in the CF and the IF.
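To fix ideas, the following minimal Python sketch (our illustration; it covers only the standard current-free case, and all names are our own) computes the three parameters of an $L^{\alpha}S^{\beta}L^{\gamma}$ path between two poses: the centers of the two left-turn circles lie a distance $r$ to the left of each pose, and the straight segment runs parallel to the line joining them.

\begin{verbatim}
import math

def lsl_params(start, goal, r):
    """Arc angles (alpha, gamma) and straight-segment length beta
    of an LSL path from pose start to pose goal, without current."""
    (x0, y0, th0), (x1, y1, th1) = start, goal
    # Left turning-circle centers: distance r, 90 degrees left of heading.
    c0 = (x0 - r * math.sin(th0), y0 + r * math.cos(th0))
    c1 = (x1 - r * math.sin(th1), y1 + r * math.cos(th1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    beta = math.hypot(dx, dy)               # straight-segment length
    phi = math.atan2(dy, dx)                # heading along the segment
    alpha = (phi - th0) % (2 * math.pi)     # first left arc
    gamma = (th1 - phi) % (2 * math.pi)     # second left arc
    return alpha, beta, gamma

alpha, beta, gamma = lsl_params((0, 0, 0), (5, 5, math.pi / 2), r=1.0)
print(alpha, beta, gamma, 1.0 * (alpha + gamma) + beta)  # path length
\end{verbatim}

The $4\pi$-arc extension introduced in Section~\ref{ourapproach} widens the admissible ranges of $\alpha$ and $\gamma$ from $[0,2\pi)$ to $[0,4\pi)$; in the current frame, the goal circle additionally translates with the virtual target, which is what the analytical solution developed later must account for.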
\vspace{-12pt} \subsection{The Real-time Challenge} Although the above methods can produce the minimum-time trajectory for Dubins vehicles in the presence of static currents, their real-time application is limited by their computational complexity. As shown in~\cite{TW09}\cite{BT13}, the existing approaches require solving for all six Dubins path types to find the minimum-time trajectory. Out of these six path types, only the $LSL$ and $RSR$ paths have analytical solutions, while the remaining four path types require solving a root-finding problem involving transcendental equations, which demands significant computational effort. However, in dynamic situations (e.g., changing currents, adaptive exploration~\cite{SGH13}\cite{GRP09} and target tracking~\cite{HGW19}), it is critical to obtain a real-time solution for fast replanning, which is the focus of this paper. \begin{figure}[t] \centering \includegraphics[width=0.90\columnwidth]{frontpage_computationtime_log-eps-converted-to.pdf} \caption{Mean computation times for $LSL$ and $RSR$ paths as compared to all six Dubins paths, over $1000$ randomly selected start and goal poses in a steady current environment, on a $2.4$~GHz CPU computer with $8$~GB RAM.}\label{fig:frontpage_computationtime} \vspace{-6pt} \end{figure} To motivate this further, we measured the computation time required to obtain the minimum-time path from all six Dubins path types, as shown in Fig.~\ref{fig:frontpage_computationtime}, and compared it to the computation time required to get the minimum-time path from only the $LSL$ and $RSR$ path types. These computation times were obtained by averaging over $1000$ randomly selected start and goal poses in an environment with steady currents. The simulations were run in MATLAB on a computer with a $2.4$~GHz CPU and $8$~GB RAM. It is seen that using only the $LSL$ and $RSR$ paths takes $\sim6.4\times 10^{-4}$~s to get a solution. In contrast, using all six path types takes several orders of magnitude more time to solve the transcendental equations. Furthermore, for practical applications, these numbers can become significantly larger on less powerful on-board processors. Moreover, these computation times depend on the non-linear solvers used. In addition, implementing such optimization solvers on on-board processors is challenging as compared to a system of equations with analytical solutions. \textit{Example}: The potential implications of these computation times are illustrated with an example. Consider an underwater vehicle moving at $2.5$~m/s in an environment with a time-varying current with a speed of $2$~m/s. Now, suppose the current changes direction towards that of the vehicle motion; then a new path needs to be computed. Suppose that it takes $\sim8.72$~s for the on-board processor to get a solution using all six path types. Then, the vehicle could drift by a distance of $ 8.72 \cdot (2+2.5)=39.24$~m before it could compute a new path. In comparison, if it uses only the $LSL$ and $RSR$ path types, this drift would be as little as $6.4\times 10^{-4}\cdot (2+2.5)= 0.0029$~m. Thus, computation time plays a crucial role in real-time path planning in dynamic environments. \vspace{-6pt} \subsection{Our Approach} \label{ourapproach} Based on the above discussion, we propose a rapid (real-time) analytical solution, as described below. \vspace{6pt} \subsubsection{\textbf{Proposed solution using $4\pi$-arc $LSL$ and $RSR$ paths}} We propose a solution in the CF using only the $LSL$ and $RSR$ path types.
However, the limitation of using only this subset of path types is the lack of full reachability, i.e., these paths cannot reach every goal pose in the presence of currents. To overcome this limitation, we propose a simple yet powerful technique. Instead of using the regular $LSL$ and $RSR$ paths, where the arc angles lie in the range $[0, 2\pi)$, we propose to extend the arc range to $[0, 4\pi)$~\cite{mittal2019real}. Accordingly, we define the concepts of $2\pi$-arc and $4\pi$-arc paths below, where the parameters $\alpha$ ($\gamma$) and $\beta$ refer to the turning angle of the first (second) arc and the length of the straight line segment, respectively. \begin{defn}[\textbf{$2\pi$-arc Path}]\label{defn:conventional_path} An $L^\alpha S^\beta L^\gamma$ or $R^\alpha S^\beta R^\gamma$ path is called a \textit{$2\pi$-arc} path, if $\alpha \in [0,2\pi)$ and $\gamma \in [0,2\pi)$. \end{defn} \begin{defn}[\textbf{$4\pi$-arc Path}]\label{defn:unconventional_path} An $L^\alpha S^\beta L^\gamma$ or $R^\alpha S^\beta R^\gamma$ path is called a \textit{$4\pi$-arc} path, if $\alpha \in [0, 4\pi)$ and $\gamma \in [0, 4\pi)$. \end{defn} \begin{rem} The six Dubins path types use the $2\pi$-arcs. \end{rem} \begin{rem}It is shown that the $4\pi$-arc $LSL$ and $RSR$ paths provide full reachability along with reduced total time costs as compared to the $2\pi$-arc $LSL$ and $RSR$ paths. \end{rem} \textit{Example}: Figs.~\ref{fig:frontpage_fig1} and \ref{fig:frontpage_fig2} show the minimum-time $2\pi$-arc and $4\pi$-arc paths, respectively, in both the IF and the CF. Fig.~\ref{fig:frontpage_fig1} shows the optimal $2\pi$-arc path, which is an $RSR$ path with a total time cost of $20.91$~s. In comparison, Fig.~\ref{fig:frontpage_fig2} shows the optimal $4\pi$-arc path, which is an $LSL$ path with $\gamma = 2.263\pi > 2\pi$ and a total time cost of $10.51$~s. Intuitively, this happens because, instead of traveling against the current, the vehicle spends more time on arcs, which allows the current to help it reach the goal in less time. \vspace{6pt} \subsubsection{\textbf{Theoretical analysis of $4\pi$-arc $LSL$ and $RSR$ paths}} We present a rigorous theoretical analysis of the properties of $4\pi$-arc $LSL$ and $RSR$ paths. First, we develop a comprehensive procedure for the reachability analysis of the $2\pi$-arc $LSL$ and $RSR$ paths. We present the conditions for full reachability using these two path types, with support from Lemmas~\ref{lem:swipe}$-$\ref{lem:slopeproperty3}. The derivation of these conditions and the proofs of the supporting lemmas are provided in Appendices~\ref{app:reachability} and~\ref{app:lemma_proofs}, respectively. Next, it is numerically validated that the $2\pi$-arc $LSL$ and $RSR$ paths fail to satisfy the reachability conditions for all goal poses and current velocities, i.e., they do not provide full reachability. Thus, we present Theorem~\ref{claim1}, which provides a guarantee of full reachability using $4\pi$-arc $LSL$ and $RSR$ paths. Further, it is established through Theorem~\ref{claim2} and Corollary~\ref{claim2_cor} that the computational complexity of the $2\pi$-arc and $4\pi$-arc path solutions is the same. Along with providing full reachability, another important benefit of the $4\pi$-arc paths is their ability to generate faster (i.e., reduced time cost) paths in comparison to the $2\pi$-arc paths, which is highlighted in Theorem~\ref{claim3}.
Finally, Theorem~\ref{rem:over_4pi} is presented to prove that $\alpha,\gamma\in[0, 4\pi)$ is sufficient for optimality using the $LSL$ and $RSR$ path types, and thus a further increase of the range is not needed. For validation of our approach, extensive Monte Carlo simulations are performed to compare the performance of the Dubins solutions and the proposed $4\pi$-arc path solutions. \vspace{6pt} \subsubsection{\textbf{Comparison of $4\pi$-arc $LSL$ and $RSR$ paths with Dubins}} The solution obtained from the $4\pi$-arc $LSL$ and $RSR$ paths might be sub-optimal for certain goal poses as compared to the one obtained from the six Dubins path types; however, the longer convergence time of the Dubins path solution might render it unsuitable for real-time applications. For offline applications in static current environments, one can use the Dubins path types to compute the minimum-time path. In this regard, Section~\ref{app:dubins_comparison} provides a detailed comparison of the solution quality (i.e., travel time cost) of the $4\pi$-arc $LSL$ and $RSR$ solutions and the Dubins solutions. This analysis indicates that the advantage of the Dubins solutions over the $4\pi$-arc $LSL$ and $RSR$ solutions in terms of travel time costs is not significant. Furthermore, upon adding the computation time costs, the advantage of the Dubins solutions is further reduced. On the other hand, for time-critical real-time applications (e.g., target tracking, planning under moving obstacles, and changing currents), the $4\pi$-arc paths provide rapid and reliable solutions without causing any vehicle drift. In contrast, the high computation times of the Dubins solutions can cause vehicle drifts, thereby resulting in longer sub-optimal trajectories which sometimes do not even converge to the goal pose. Section~\ref{changingcurrents} presents a comparative analysis in the presence of dynamic currents, which highlights the benefits of the solutions obtained from the $4\pi$-arc $LSL$ and $RSR$ paths over the ones obtained from the six Dubins paths. \vspace{-6pt} \subsection{Our Contributions} The paper makes the following novel contributions: \begin{itemize} \item It provides an analytical solution of the path planning problem for Dubins vehicles under environmental currents, where the solution is based on the novel concept of $4\pi$-arc $LSL$ and $RSR$ paths and can be computed in real-time. In this regard, the paper presents the following: \begin{itemize} \item A detailed analytical method to construct the reachability graphs of $LSL$ and $RSR$ paths. \item A detailed derivation of the conditions under which $2\pi$-arc $LSL$ and $RSR$ paths provide full reachability. \item A mathematical proof of full reachability of the $4\pi$-arc $LSL$ and $RSR$ paths under all conditions, unlike the corresponding $2\pi$-arc paths (\textbf{Theorem~\ref{claim1}}). \item A mathematical proof that a solution using $4\pi$-arc $LSL$ and $RSR$ paths can be obtained with the same computational workload as that needed for $2\pi$-arc paths (\textbf{Theorem~\ref{claim2} and Corollary~\ref{claim2_cor}}). \item A mathematical proof that $4\pi$-arc $LSL$ and $RSR$ paths provide reduced travel time costs as compared to the corresponding $2\pi$-arc paths (\textbf{Theorem~\ref{claim3}}). \end{itemize} \item Theoretical properties of $4\pi$-arc $LSL$ and $RSR$ paths are rigorously established and evaluated in comparison to the Dubins solutions by extensive Monte Carlo simulations.
\end{itemize} \vspace{-6pt} \subsection{Organization} The rest of the paper is organized as follows. Section~\ref{sec:litreview} reviews the existing literature. Section~\ref{sec:problemandsolution} presents the path planning problem and its analytical solution. Section~\ref{sec:reachabilityanalysis} presents a detailed analytical procedure for the reachability analysis of the $2\pi$-arc $LSL$ and $RSR$ paths. Section~\ref{sec:main_contents} presents the theoretical properties of $4\pi$-arc paths and shows their advantages over the $2\pi$-arc paths. Section~\ref{sec:results} presents the comparative evaluation results. Finally, the paper is concluded in Section~\ref{sec:conclusion} with recommendations for future work. Appendices~\ref{app:reachability} and~\ref{app:lemma_proofs} provide the proofs of the reachability conditions and the supporting lemmas. \vspace{3pt} \section{Literature Review}\label{sec:litreview} Recently, several papers~\cite{zeng2016comparison} have addressed the path planning problem in the presence of currents. Garau et al.~\cite{GBARP09}\cite{GAO05} studied the minimum-time path planning problem in marine environments with spatial current variability, where the time cost was defined as the sum of step-wise costs, each specified by the traveling distance divided by the vehicle speed in the presence of ocean currents. However, the drawback of their design is that infeasible paths are penalized rather than prohibited. Petres et al.~\cite{PPPPEL07} presented the FM$^\star$ algorithm to find the minimum-time path for underwater vehicles, where the time cost is defined over the inner product of the distance function and the current field; however, their cost function still penalizes rather than restricts infeasible paths. In this regard, Soulignac et al.~\cite{STR09} proposed a time cost function that projects the speed vector onto both axes as opposed to taking its norm as in~\cite{GAO05}. Accordingly, their method is restricted to feasible paths. In addition, energy-based cost functions~\cite{ACO04}\cite{ZIOM08} have also been used for planning in the presence of ocean currents. However, the above-mentioned methods ignore any kinematic motion constraints for vehicles. Along this line, Techy and Woolsey~\cite{TW09} addressed the minimum-time path planning problem for a curvature-constrained vehicle in constant wind, based on the fact that circular arcs are distorted by the wind into trochoidal curves in the inertial frame. They derived analytical solutions for the $LSL$ and $RSR$ candidate paths, while for the other path types, $LSR$, $RSL$, $LRL$ and $RLR$, one must solve certain transcendental equations to obtain solutions. However, as we show in Fig.~\ref{fig:frontpage_computationtime}, the root-finding problem for transcendental equations can be computationally expensive. In contrast, McGee et al.~\cite{MSH05} studied the minimum-time path planning problem in the current frame. They first used Pontryagin's Minimum Principle to demonstrate that the optimal path is comprised of straight line segments and curves of maximum turn rate. Then, they introduced the concept of a ``virtual target'' which starts at the goal state but moves in the opposite direction as the wind. In this setup, the minimum-time problem is simplified into a target interception problem, where the objective is to find the earliest interception point in the current frame so that the Dubins path can meet the virtual target in minimum time. 
However, one must repeatedly check the validity of possible interception points, which can be arbitrarily expensive to compute if the actual interception point lies far from the initial search point. In this regard, Bakolas et al.~\cite{BT13} directly solved for the interception point in the current frame by introducing an extra parameter of interception time. They also showed that when the wind speed is less than the vehicle speed, the vehicle has full reachability, i.e., the optimal path always exists for any given goal pose. However, their solution methodology still involves solving for the roots of multiple transcendental equations, which could lead to a heavy computational burden, thus prohibiting its use in real-time applications. Some researchers have used the Nonlinear Trajectory Generation (NTG) algorithm~\cite{inanc2005}, based on spline curves, to obtain the optimal trajectory of a glider with kinematic constraints in the presence of dynamically varying ocean currents. This algorithm relies on a Sequential Quadratic Programming (SQP) approach to solve the nonlinear programming problem, which might lead to sub-optimal solutions and high computation times. In comparison, this paper proposes a novel method which provides a rapid analytical solution to the path planning problem under currents with guaranteed full reachability. \vspace{0pt} \section{Problem Description and Solution}\label{sec:problemandsolution} This section presents the minimum-time path planning problem for Dubins vehicles and its analytical solution. \vspace{0pt} \subsection{Problem Description}\label{sec:problem} Consider a vehicle moving at a velocity ${\bf v} =(v\cos{\theta}, v\sin{\theta})$, where $v\in \mathbb{R}^+$ is its speed and $\theta\in [0, 2\pi)$ is its heading. A steady current is assumed to be present in the environment with velocity ${\bf v}_w = (v_w\cos{\theta_w}, v_w\sin{\theta_w})\equiv (w_x, w_y) $, where $v_w \in \mathbb{R}^+$ is its speed and $\theta_w \in [0, 2\pi)$ is its direction. The current speed is assumed to be slower than the vehicle speed, i.e., $v_w < v$. Then, the motion of the vehicle can be described as: \vspace{-3pt} \begin{equation} \begin{cases} \dot{x}(t) &= v \cdot \cos{\theta(t)} + w_x \\ \dot{y}(t) &= v \cdot \sin{\theta(t)} + w_y \\ \dot{\theta}(t) &= u(t) \label{eq:vehicle_model_currents} \end{cases}, \end{equation} where $\mathbf{p} = (x,y,\theta) \in SE(2)$ is the vehicle pose and $u$ indicates its turn rate. By choosing a proper unit, the vehicle speed can be normalized to $v = 1$. The turn rate $u$ is symmetric and bounded, s.t., $u \in [-u_{\max}, u_{\max}]$, where $u_{\max} \in \mathbb{R}^+$ is the maximum turn rate and the $+$/$-$ sign indicates a left/right turn. These constraints imply that the vehicle is subject to a minimum turning radius of $r= 1/u_{\max}$ (for $v=1$). Then, for a vehicle operating in a current environment, as described in~(\ref{eq:vehicle_model_currents}), the objective is to find the minimum-time path from a start pose $\mathbf{p}_{start} = (x_0, y_0, \theta_0)$ to a goal pose $\mathbf{p}_{goal} = (x_f, y_f, \theta_f)$. The state-of-the-art solutions~\cite{TW09}\cite{MSH05}\cite{BT13} to this problem require solving for all six Dubins path types to find the minimum-time path. However, as shown in (34) and (39) of~\cite{BT13}, in order to obtain the path types $LSR$, $RSL$, $LRL$ and $RLR$, one must solve a root-finding problem involving transcendental equations for numerical solutions. 
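(As an aside, simulating the model~(\ref{eq:vehicle_model_currents}) itself is computationally trivial once a turn-rate profile is available; the expensive part of planning is solving for the path parameters. A minimal forward-Euler sketch in Python is given below, where the step size and the current values are illustrative choices of ours and not taken from the experiments of this paper.)
\begin{verbatim}
import math

def step(pose, u, v=1.0, wx=0.2, wy=0.1, dt=0.01):
    """One forward-Euler step of the vehicle kinematics under a
    constant current (wx, wy); all numerical values are illustrative."""
    x, y, th = pose
    x += (v * math.cos(th) + wx) * dt    # x-dot = v cos(theta) + w_x
    y += (v * math.sin(th) + wy) * dt    # y-dot = v sin(theta) + w_y
    th = (th + u * dt) % (2 * math.pi)   # theta-dot = u, wrapped to [0, 2*pi)
    return (x, y, th)

# Example: a left turn at the maximum rate u_max = 1 (so r = 1).
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = step(pose, u=1.0)
\end{verbatim}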
Solving such transcendental equations inevitably requires significant computational resources and thus can seriously restrict the usage of these methods in real-time applications. In this regard, in order to achieve a real-time solution, we address the above problem using only two path types which have direct analytical solutions. These are $L^{\alpha}S^{\beta}L^{\gamma}$ and $R^{\alpha}S^{\beta}R^{\gamma}$, where $\alpha$ and $\gamma$ are the turning angles of the first and second arc segments, respectively; and $\beta \geq 0$ denotes the length of the straight line segment. Thus, the solution for each path type is uniquely determined by the 3-tuple $\{\alpha, \beta, \gamma\}$ of path parameters. Since these parameters can be solved for analytically, the solution is obtained very fast (in real time). However, due to using only a subset of the Dubins path types, there exist goal poses for which neither the $LSL$ nor the $RSR$ path type can provide a feasible solution, i.e., $LSL$ and $RSR$ paths do not provide full reachability. To address this issue, we extend the feasible ranges of $\alpha$ and $\gamma$ from $[0,2\pi)$ to $[0,4\pi)$. It is shown later that the extended $LSL$ and $RSR$ path types guarantee full reachability and can provide solutions with even lower time costs. \vspace{-6pt} \subsection{Solutions for the $LSL$ and $RSR$ Paths}\label{sec:method} This section derives the analytical solutions for the parameters of the $LSL$ and $RSR$ path types using the CF, which moves with the same speed and direction as the current. In the CF, the goal moves in the opposite direction with $(-w_x, -w_y)$. Thus, the problem is simplified to a moving-target interception problem. Therefore, the objective is to find the minimum interception time to meet the moving goal using the Dubins $LSL$ and $RSR$ paths. Without loss of generality, we choose the start pose $(x_0, y_0, \theta_0) = (0,0,0)$. \vspace{6pt} \subsubsection{$L^{\alpha}S^{\beta}L^{\gamma}$ Path}\label{sec:LSL} As seen in Fig.~\ref{fig:LSL_derivation}, in order to reach the goal $(x_f,y_f,\theta_f)$ in the CF, the following boundary constraints must be satisfied for an $LSL$ path~\cite{BT13}: \begin{equation}\label{eq:LSL_conditions} \begin{cases} x_f - w_x T &= r \sin\theta_f + \beta \cos\alpha \\ y_f - w_y T &= r (1-\cos\theta_f) + \beta \sin\alpha \\ T &= \big(r (\alpha + \gamma) + \beta \big)/v \\ \alpha + \gamma &= 2 k \pi + \theta_f \end{cases}, \end{equation} where $v = 1$ and $T \in \mathbb{R}^+$ is the total travel time. \begin{figure}[t] \flushleft \hspace*{-1em} \subfloat[$L^\alpha S^\beta L^\gamma$ path]{ \includegraphics[width=.520\columnwidth]{LSL_derivation-eps-converted-to.pdf}\label{fig:LSL_derivation}} \hspace*{-1.2em} \subfloat[$R^\alpha S^\beta R^\gamma$ path]{ \includegraphics[width=.520\columnwidth]{RSR_derivation-eps-converted-to.pdf}\label{fig:RSR_derivation}} \caption{Geometric illustration for $LSL$ and $RSR$ paths.} \label{fig:LSL_RSR_derivation} \vspace{-12pt} \end{figure} In addition, we introduce $k\in \mathbb{Z}$ to control the feasible ranges of $\alpha$ and $\gamma$. Specifically, for a $2\pi$-arc $LSL$ path, since $\theta_f \in [0, 2\pi)$ and $\alpha, \gamma \in [0, 2\pi)$, one has $k \in \{0, 1\}$. In contrast, for a $4\pi$-arc $LSL$ path, since $\alpha, \gamma \in [0, 4\pi)$, one has $k \in \{0,1,2,3\}$. Note: We show later that we need only $k \in \{0,1\}$ to find a feasible minimum-time $4\pi$-arc $LSL$ path. 
Now, for a given $k$, define $A^k$ and $B^k$ as follows: \begin{equation}\label{eq:LSL_AB} \begin{cases} A^k &= x_f - r \sin \theta_f - w_x r (2k\pi + \theta_f) \\ B^k &= y_f -r (1- \cos \theta_f) - w_y r(2k\pi + \theta_f)\\ \end{cases}, \end{equation} which are constants that can be computed given the current velocity, and the start and goal poses. Then, using (\ref{eq:LSL_conditions}) and (\ref{eq:LSL_AB}), we get: \vspace{-3pt} \begin{equation}\label{eq:LSL_AB_condition} \begin{cases} A^k &= \beta \cos \alpha + w_x \beta \\ B^k &= \beta \sin \alpha + w_y \beta \\ \end{cases}. \end{equation} Based on (\ref{eq:LSL_AB_condition}), we can compute $\beta$ by solving the quadratic equation $\big(A^k -w_x \beta \big)^2 + \big( B^k -w_y\beta \big)^2 = \beta^2$, such that \begin{equation}\label{eq:LSL_beta} \beta = \frac{\pm\sqrt{(A^k w_x + B^k w_y)^2 + ({A^k}^2 + {B^k}^2)(1-v_w^2)} - (A^k w_x + B^k w_y)}{1-v_w^2}. \end{equation} It is seen from (\ref{eq:LSL_beta}) that when $v_w < 1$, $\beta$ has valid solutions. Then, $\alpha$ can be computed as \vspace{-3pt} \begin{equation} \alpha = \atantwo \big( B^k - \beta w_y, A^k -\beta w_x \big) (\textrm{mod} \ \kappa), \label{eq:LSL_alpha} \end{equation} where $\kappa = 2\pi$ for $2\pi$-arc paths, and $\kappa = 4\pi$ for $4\pi$-arc paths. Thereafter, $\gamma$ is computed as $\gamma = 2k\pi + \theta_f - \alpha$ (mod $\kappa$). \vspace{10pt} \subsubsection{$R^{\alpha}S^{\beta}R^{\gamma}$ Path}\label{sec:RSR} As seen in Fig.~\ref{fig:RSR_derivation}, the following boundary constraints must be satisfied for an $RSR$ path: \vspace{-3pt} \begin{equation}\label{eq:RSR_conditions} \begin{cases} x_f - w_x T &= -r \sin\theta_f + \beta \cos\alpha\\ y_f - w_y T &= -r (1-\cos\theta_f) - \beta \sin\alpha \\ T &= \big( r (\alpha + \gamma) + \beta \big) /v \\ -\alpha - \gamma & = 2 k \pi + \theta_f \end{cases}. \end{equation} For a $2\pi$-arc $RSR$ path, since $\theta_f \in [0, 2\pi)$ and $\alpha, \gamma \in [0, 2\pi)$, one has $k \in \{-1,-2\}$; while for a $4\pi$-arc $RSR$ path, because $\alpha, \gamma \in [0,4\pi)$, one has $k \in \{-1, -2, -3, -4\}$. Note: We show later that we need only $k \in \{-1,-2\}$ to find a feasible minimum-time $4\pi$-arc $RSR$ path. Now, define \vspace{-3pt} \begin{equation}\label{eq:RSR_AB} \begin{cases} A^k &= x_f + r \sin \theta_f + w_x r(2k\pi + \theta_f) \\ B^k &= y_f + r (1- \cos \theta_f) + w_y r(2k\pi + \theta_f) \\ \end{cases}, \end{equation} and using (\ref{eq:RSR_conditions}) and (\ref{eq:RSR_AB}), we get: \vspace{-3pt} \begin{equation}\label{eq:RSR_AB_condition} \begin{cases} A^k &= \beta \cos \alpha + w_x \beta \\ B^k &= -\beta \sin \alpha + w_y \beta \\ \end{cases}. \end{equation} Then, $\beta$ is solved using $\big(A^k - w_x\beta \big)^2 + \big( B^k - w_y\beta \big)^2 = \beta^2$, which results in the same expression as (\ref{eq:LSL_beta}). Similarly, when $v_w <1$, $\beta$ has valid solutions. Then, $\alpha$ can be computed as \begin{equation} \alpha = \atantwo(-B^k + \beta w_y, A^k -\beta w_x) \ (\text{mod} \ \kappa), \label{eq:RSR_alpha} \end{equation} and $\gamma$ is computed as $\gamma = - 2k\pi -\theta_f - \alpha$ (mod $\kappa$). \vspace{10pt} \subsection{Feasible Ranges of Path Parameters}\label{sec:tight_bounds} According to Defn.~\ref{defn:conventional_path} and Defn.~\ref{defn:unconventional_path}, the parameters $\alpha$ and $\gamma$ are defined over $[0, 2\pi)$ and $[0, 4\pi)$ for $2\pi$-arc paths and $4\pi$-arc paths, respectively. 
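Before analyzing these feasible ranges further, it is instructive to see the closed-form procedure of Section~\ref{sec:method} in executable form. The following Python sketch is a minimal illustration of (\ref{eq:LSL_AB})$-$(\ref{eq:LSL_alpha}) and is not the reference implementation; the function name and the feasibility bookkeeping are ours.
\begin{verbatim}
import math

def lsl_solutions(xf, yf, thf, wx, wy, r=1.0, kappa=4 * math.pi):
    """Candidate L^a S^b L^g solutions; kappa = 2*pi or 4*pi selects
    2pi-arc or 4pi-arc paths. Returns feasible (T, alpha, beta, gamma)
    tuples, sorted by the time cost T."""
    vw2 = wx * wx + wy * wy               # current speed squared, assumed < 1
    sols = []
    for k in (0, 1):                      # k in {0,1} suffices (see the Note above)
        phi = 2 * k * math.pi + thf       # required value of alpha + gamma
        A = xf - r * math.sin(thf) - wx * r * phi
        B = yf - r * (1 - math.cos(thf)) - wy * r * phi
        d = A * wx + B * wy
        # nonnegative root of the quadratic for beta (the other root is <= 0)
        beta = (math.sqrt(d * d + (A * A + B * B) * (1 - vw2)) - d) / (1 - vw2)
        a0 = math.atan2(B - beta * wy, A - beta * wx) % (2 * math.pi)
        for alpha in (a0, a0 + 2 * math.pi):   # both mod-kappa branches of alpha
            gamma = phi - alpha
            if 0 <= alpha < kappa and 0 <= gamma < kappa:
                sols.append((r * phi + beta, alpha, beta, gamma))  # T = r(a+g)+b
    return sorted(sols)
\end{verbatim}
The $RSR$ case follows analogously from (\ref{eq:RSR_AB})$-$(\ref{eq:RSR_alpha}) with $k \in \{-1,-2\}$, and the minimum-time path is the faster of the dominant $LSL$ and $RSR$ candidates.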
Given the direction $\theta_f \in [0, 2\pi)$ of the goal pose, we can obtain tighter feasible ranges for $\alpha$ and $\gamma$. Table~\ref{table:range_conventional} shows the feasible ranges of the path parameters for both $2\pi$-arc and $4\pi$-arc paths. An example is provided below. \vspace{6pt} \textit{Example}: Consider a $4\pi$-arc $LSL$ path, where $\alpha \in [0, 4\pi)$ and $\gamma \in [0, 4\pi)$. There are four cases to study: \begin{itemize} \item $k = 0$ (i.e., $\alpha + \gamma = \theta_f < 2\pi$): Now, $\gamma \geq 0$ $\implies$ $\alpha \leq \theta_f$. Similarly, $\alpha \geq 0$ $\implies$ $\gamma \leq \theta_f$. Thus, the feasible range for both $\alpha$ and $\gamma$ is $[0,\theta_f]$. \item $k = 1$ (i.e., $\alpha + \gamma = 2\pi + \theta_f < 4\pi$): Again, $\gamma \geq 0$ $\implies$ $\alpha \leq 2\pi + \theta_f$. Similarly, $\alpha \geq 0$ $\implies$ $\gamma \leq 2\pi + \theta_f$. Thus, the feasible range for both $\alpha$ and $\gamma$ is $[0, 2\pi + \theta_f]$. \item $k = 2$ (i.e., $\alpha + \gamma = 4\pi + \theta_f < 6\pi$): Here, $\gamma < 4\pi$ $\implies$ $\alpha > \theta_f$. Similarly, $\alpha < 4\pi$ $\implies$ $\gamma > \theta_f$. Thus, the feasible range for both $\alpha$ and $\gamma$ is $(\theta_f, 4\pi)$. \item $k = 3$ (i.e., $\alpha + \gamma = 6\pi + \theta_f < 8\pi$): Here, $\gamma < 4\pi$ $\implies$ $\alpha > 2\pi + \theta_f$. Similarly, $\alpha < 4\pi$ $\implies$ $\gamma > 2\pi + \theta_f$. Thus, the feasible range for both $\alpha$ and $\gamma$ is $(2\pi + \theta_f, 4\pi)$. \end{itemize} Similarly, we can obtain the feasible ranges of the path parameters for the $4\pi$-arc $RSR$ path type and for the $2\pi$-arc $LSL$ and $RSR$ paths. { \begin{table}[t!] \centering \footnotesize \caption{Feasible parameter ranges for $2\pi$-arc and $4\pi$-arc paths}\label{table:range_conventional} \begin{tabular}{c|cc|c|cc} \hline \multicolumn{6}{c}{$2\pi$-arc Paths ($\alpha,\gamma$ ranges are up to mod $2\pi$)}\\ \hline \multicolumn{3}{c|}{$LSL$ Path Type} & \multicolumn{3}{c}{$RSR$ Path Type} \\ \hline $k$ & $\alpha$ and $\gamma$ & $\beta$ & $k$ & $\alpha$ and $\gamma$ & $\beta$ \\ \hline $0$ & $[0, \theta_f]$ & $[0, \infty)$ & $-1$ & $[0, 2\pi-\theta_f]$ & $[0, \infty)$ \\ \hline $1$ & $(\theta_f, 2\pi)$ & $[0, \infty)$ & $-2$ & $(2\pi-\theta_f, 2\pi)$ & $[0, \infty)$ \\ \hline \multicolumn{6}{c}{$4\pi$-arc Paths ($\alpha,\gamma$ ranges are up to mod $4\pi$)}\\ \hline \multicolumn{3}{c|}{$LSL$ Path Type} & \multicolumn{3}{c}{$RSR$ Path Type} \\ \hline $k$ & $\alpha$ and $\gamma$ & $\beta$ & $k$ & $\alpha$ and $\gamma$ & $\beta$ \\ \hline $0$ & $[0, \theta_f]$ & $[0, \infty)$ & $-1$ & $[0, 2\pi-\theta_f]$ & $[0, \infty)$ \\ \hline $1$ & $[0, 2\pi + \theta_f]$ & $[0, \infty)$ & $-2$ & $[0, 4\pi -\theta_f]$ & $[0, \infty)$ \\ \hline $2$ & $(\theta_f, 4\pi)$ & $[0, \infty)$ & $-3$ & $(2\pi-\theta_f, 4\pi)$ & $[0, \infty)$ \\ \hline $3$ & $(2\pi+\theta_f, 4\pi)$ & $[0, \infty)$ & $-4$ & $(4\pi-\theta_f, 4\pi)$ & $[0, \infty)$ \\ \hline \end{tabular} \end{table} } \vspace{6pt} \section{Reachability Analysis of $2\pi$-arc Paths}\label{sec:reachabilityanalysis} This section derives the analytical expressions for generating the reachability graphs of the $2\pi$-arc $LSL$ and $RSR$ path types and for finding the conditions of full reachability. \subsection{Construction of Reachability Graphs} \label{sec:reachability_graphs} First, we show that for a given $\alpha$, the reachable goal points $(x_f,y_f)$ lie on a ray. Then, we show that by varying $\alpha$, this ray rotates to form the reachability graph. 
\vspace{10pt} $\bullet$ \textit{\textbf{$2\pi$-arc $LSL$ Paths:}} Let us denote \begin{subequations}\label{eq:p_q_LSL} \begin{align} p^k_{LSL} & \equiv r\sin{\theta_f} + w_{x}r(2k\pi + \theta_f), \\ q^k_{LSL} & \equiv r(1-\cos{\theta_f}) + w_{y}r(2k\pi + \theta_f), \end{align} \end{subequations} which are constants for $k \in \{0,1\}$ given $\theta_f, w_x$ and $w_y$. Further, let us denote \begin{subequations}\label{eq:def_a_c} \begin{align} a(\alpha)\equiv \sin{\alpha} + w_y,\\ c(\alpha) \equiv \cos{\alpha} + w_x. \end{align} \end{subequations} Then, using (\ref{eq:LSL_AB}), (\ref{eq:LSL_AB_condition}), (\ref{eq:p_q_LSL}) and (\ref{eq:def_a_c}), we get: \begin{subequations}\label{eq:LSL_beta_two_new} \begin{align} x_f &= p^k_{LSL} + \beta \cdot c(\alpha), \label{eq:LSL_beta_1} \\ y_f &= q^k_{LSL} + \beta \cdot a(\alpha). \label{eq:LSL_beta_2} \end{align} \end{subequations} By performing $a(\alpha)\cdot$(\ref{eq:LSL_beta_1})$- c(\alpha)\cdot$(\ref{eq:LSL_beta_2}), (\ref{eq:LSL_beta_two_new}) is equivalent to the following: \vspace{-6pt} \begin{empheq}[box=\widefbox]{align}\label{eq:LSL_reachability} a(\alpha) x_f - c(\alpha) y_f - \big(a(\alpha) p^k_{LSL} - c(\alpha) q^k_{LSL} \big) = 0, \nonumber \\ \text{s.t.: } x_f \geq p_{LSL}^k, y_f \geq q_{LSL}^k \text{, if } a(\alpha) \geq 0, c(\alpha) \geq 0, \nonumber\\ x_f < p_{LSL}^k, y_f \geq q_{LSL}^k \text{, if } a(\alpha) \geq 0, c(\alpha) < 0, \\ x_f < p_{LSL}^k, y_f < q_{LSL}^k \text{, if } a(\alpha) < 0, c(\alpha) < 0, \nonumber \\ x_f \geq p_{LSL}^k, y_f < q_{LSL}^k \text{, if } a(\alpha) < 0, c(\alpha) \geq 0. \nonumber \end{empheq} \vspace{6pt} The constraints in (\ref{eq:LSL_reachability}) are obtained by using the feasible range of $\beta \geq 0$ in (\ref{eq:LSL_beta_1}) and (\ref{eq:LSL_beta_2}). As shown in Fig.~\ref{fig:quadrants}, these constraints define the quadrants of the coordinate frame with center at $\left(p^k_{LSL},q^k_{LSL}\right)$. For a given $\alpha$, (\ref{eq:LSL_reachability}) represents a reachability ray and the goal $(x_f,y_f)$ is reachable if it lies on such a ray. The rotation of (\ref{eq:LSL_reachability}), i.e., the angle it makes with the $x$-axis measured in the counterclockwise direction, is given as \begin{empheq}[box=\widefbox]{align}\label{eq:LSL_slope} \omega^{k}_{LSL}(\alpha)=\atantwo \big(a(\alpha),c(\alpha)\big) \Mod{2\pi}, \ k \in \{0,1\}. \end{empheq} \begin{figure}[t] \centering \includegraphics[width=0.80\columnwidth]{quadrants_new-eps-converted-to.pdf} \caption{Reachability region of the $LSL$ path type obtained by anticlockwise rotation of~(\ref{eq:LSL_reachability}) about the center of rotation $(p^{k}_{LSL},q^{k}_{LSL})$.} \label{fig:quadrants} \vspace{-6pt} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{reachability_2pi_new-eps-converted-to.pdf} \caption{An example showing the construction of the reachability graph for the $2\pi$-arc $LSL$ and $RSR$ path types. (a) MaRA for $LSL$ with $k=0$, (b) MiRA for $LSL$ with $k=1$, (c) union of MaRA and MiRA for the $LSL$ path, (d) MiRA for $RSR$ with $k=-1$, (e) MaRA for $RSR$ with $k=-2$, (f) union of MaRA and MiRA for the $RSR$ path, (g) complete reachability graph obtained by taking the union of both $LSL$ and $RSR$ path types.}\label{fig:reachability_example} \end{figure*} \vspace{6pt} $\bullet$ \textit{\textbf{$2\pi$-arc $RSR$ Paths:}} Let us denote \begin{subequations}\label{eq:p_q_RSR} \begin{align} p^k_{RSR} &\equiv -r\sin{\theta_f} - w_x r (2k\pi + \theta_f), \\ q^k_{RSR} &\equiv -r(1-\cos{\theta_f}) - w_y r (2k\pi + \theta_f). 
\end{align} \end{subequations} which are constants for $k\in \{-1,-2\}$ given $\theta_f, w_x$ and $w_y$. Further, let us denote \begin{equation}\label{eq:def_b} b(\alpha) \equiv \sin{\alpha} - w_y. \end{equation} Then, using (\ref{eq:RSR_AB}), (\ref{eq:RSR_AB_condition}), (\ref{eq:p_q_RSR}) and (\ref{eq:def_b}), we get: \begin{subequations}\label{eq:RSR_beta_two_new} \begin{align} x_f &= p^k_{RSR} + \beta \cdot c(\alpha), \label{eq:RSR_beta_1} \\ y_f &= q^k_{RSR} - \beta \cdot b(\alpha). \label{eq:RSR_beta_2} \end{align} \end{subequations} By performing $b(\alpha) \cdot$(\ref{eq:RSR_beta_1})$+ c(\alpha) \cdot$(\ref{eq:RSR_beta_2}), (\ref{eq:RSR_beta_two_new}) is equivalent to the following: \vspace{-8pt} \begin{empheq}[box=\widefbox]{align}\label{eq:RSR_reachability} b(\alpha) x_f + c(\alpha) y_f - \big(b(\alpha) p^k_{RSR} + c(\alpha) q^k_{RSR} \big) = 0, \nonumber \\ \text{s.t.: } x_f \geq p_{RSR}^k, y_f \geq q_{RSR}^k \text{, if } b(\alpha) \leq 0, c(\alpha) \geq 0, \nonumber\\ x_f < p_{RSR}^k, y_f \geq q_{RSR}^k \text{, if } b(\alpha) \leq 0, c(\alpha) < 0, \\ x_f < p_{RSR}^k, y_f < q_{RSR}^k \text{, if } b(\alpha) > 0, c(\alpha) < 0, \nonumber \\ x_f \geq p_{RSR}^k, y_f < q_{RSR}^k \text{, if } b(\alpha) > 0, c(\alpha) \geq 0. \nonumber \end{empheq} \begin{figure}[t] \centering \subfloat[3D reachable space for different parameters]{ \includegraphics[width=0.70\columnwidth]{proposition1_3D_new-eps-converted-to.pdf}\label{fig:proposition1_3D_1}} \\ \subfloat[$v_w = 0.25$]{ \includegraphics[width=.5\columnwidth]{proposition1_3D_2-eps-converted-to.pdf}\label{fig:proposition1_3D_2}} \subfloat[$v_w = 0.75$]{ \includegraphics[width=.5\columnwidth]{proposition1_3D_4-eps-converted-to.pdf}\label{fig:proposition1_3D_4}} \caption{The parameter space of $\theta_f \in [0, 2\pi)$, $\theta_w \in [0,2\pi)$ and $v_w \in (0,1)$ where full reachability is achieved.}\label{fig:proposition1_3D} \end{figure} \vspace{6pt} The constraints in (\ref{eq:RSR_reachability}) are obtained by using the feasible range of $\beta \geq 0$ in (\ref{eq:RSR_beta_1}) and (\ref{eq:RSR_beta_2}). Again, these constraints define the quadrants of the coordinate frame with center at $\left(p^k_{RSR},q^k_{RSR}\right)$. For any given $\alpha$, (\ref{eq:RSR_reachability}) represents a reachability ray, and the goal $(x_f,y_f)$ is reachable if it lies on such a ray. The rotation of (\ref{eq:RSR_reachability}) is given as \begin{empheq}[box=\widefbox]{align}\label{eq:RSR_slope} \omega^{k}_{RSR}(\alpha)=\atantwo \big(-b(\alpha),c(\alpha)\big) \Mod{2\pi}, \ k \in \{-1,-2\}. \end{empheq} \vspace{12pt} Now, we show a lemma that helps in constructing the reachability graphs using (\ref{eq:LSL_reachability}) and (\ref{eq:RSR_reachability}). \vspace{0pt} \begin{lem} \label{lem:swipe} As $\alpha$ increases from $\alpha^k_{inf}$ to $\alpha^k_{sup}$, for the: \begin{itemize} \item $LSL$ path type: ray (\ref{eq:LSL_reachability}) rotates anticlockwise about the center $\left(p_{LSL}^k,q_{LSL}^k\right)$, $\forall k \in \{0,1\}$. \vspace{3pt} \item $RSR$ path type: ray (\ref{eq:RSR_reachability}) rotates clockwise about the center $\left(p_{RSR}^k,q_{RSR}^k\right)$, $\forall k \in \{-1,-2\}$. \end{itemize} \end{lem} \begin{proof} See Appendix~\ref{app:lemma1}. 
\end{proof} Lemma~\ref{lem:swipe} implies that the reachable area for $LSL$ paths is obtained by rotating (\ref{eq:LSL_reachability}) about the center $\left(p_{LSL}^k,q_{LSL}^k\right)$, from $\omega^{k}_{LSL}(\alpha_{inf})$ to $\omega^{k}_{LSL}(\alpha_{sup})$, where $\alpha^k_{inf}$ and $\alpha^k_{sup}$ are the bounds of $\alpha$ (see Table~\ref{table:range_conventional}) for a given $k$. Fig.~\ref{fig:quadrants} shows the reachable area for $LSL$ paths obtained by this rotation. Note that there is a different reachable area for each $k$. Similarly, the reachable region for $RSR$ paths is obtained by rotating (\ref{eq:RSR_reachability}) from $\omega^{k}_{RSR}(\alpha_{inf})$ to $\omega^{k}_{RSR}(\alpha_{sup})$ for both of its $k$ values. \vspace{6pt} \begin{rem} Note that for simplicity of notation, we omit the superscript of $\alpha$ whenever it is used in the $\omega$ function, where it assumes the superscript of $\omega$. \end{rem} \vspace{6pt} For further explanation, we introduce the concepts of \textit{Major Reachable Area} (MaRA) and \textit{Minor Reachable Area} (MiRA). \begin{defn}[\textbf{MaRA}] For an $LSL$ ($RSR$) path type, MaRA is the larger of the reachable areas spanned by $k = 0$ or $1$ ($k = -1$ or $-2$). \end{defn} \begin{defn}[\textbf{MiRA}] For an $LSL$ ($RSR$) path type, MiRA is the smaller of the reachable areas spanned by $k = 0$ or $1$ ($k = -1$ or $-2$). \end{defn} \vspace{6pt} \textit{Example}: Fig.~\ref{fig:reachability_example} shows an example of the construction of the reachability graph for the $2\pi$-arc $LSL$ and $RSR$ path types. Here, the environment has a current of speed $v_w = 0.5$~m/s and direction $\theta_w = \pi/3$. The goal pose has the heading angle $\theta_f = 7\pi/4$, while its position $(x_f, y_f)$ is varied within $[-10, 10]$~m. Figs.~\ref{fig:reachability_example}a and~\ref{fig:reachability_example}b show the MaRA ($k=0$) and MiRA ($k=1$) of the $LSL$ paths, respectively, which are obtained by rotating the ray (\ref{eq:LSL_reachability}) by varying $\alpha$ from $\alpha_{inf}^k$ to $\alpha_{sup}^k$. The corresponding centers of rotation $(p_{LSL}^0,q_{LSL}^0) = (0.67, 2.67)$ and $(p_{LSL}^1,q_{LSL}^1) = (2.24, 5.39)$ are also shown. Fig.~\ref{fig:reachability_example}c shows the total reachable area of the $LSL$ paths obtained by combining the MaRA and MiRA from Figs.~\ref{fig:reachability_example}a and \ref{fig:reachability_example}b, respectively. Clearly, the $LSL$ paths do not provide full reachability. Similarly, Figs.~\ref{fig:reachability_example}d and \ref{fig:reachability_example}e show the MiRA ($k=-1$) and MaRA ($k=-2$) of the $RSR$ paths, respectively, which are obtained by rotating the ray (\ref{eq:RSR_reachability}) by varying $\alpha$ from $\alpha_{inf}^k$ to $\alpha_{sup}^k$. The corresponding centers of rotation $(p_{RSR}^{-1},q_{RSR}^{-1})= (0.90, 0.05)$ and $(p_{RSR}^{-2},q_{RSR}^{-2})= (2.47, 2.77)$ are also shown. Again, Fig.~\ref{fig:reachability_example}f shows the total reachable area of the $RSR$ paths obtained by combining the MaRA and MiRA from Figs.~\ref{fig:reachability_example}d and \ref{fig:reachability_example}e, respectively. As seen, the $RSR$ paths also do not provide full reachability. Finally, Fig.~\ref{fig:reachability_example}g shows the complete reachability graph using both $LSL$ and $RSR$ path types, which is obtained by combining Figs.~\ref{fig:reachability_example}c and \ref{fig:reachability_example}f. 
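Such reachability graphs can also be reproduced numerically by sweeping $\alpha$ over its feasible range and shading the rays of (\ref{eq:LSL_reachability}) at the rotations (\ref{eq:LSL_slope}). A compact NumPy sketch for the $LSL$ case is given below; the grid construction, sampling density, and tolerance are our own choices for illustration.
\begin{verbatim}
import numpy as np

def lsl_reachable_mask(X, Y, thf, wx, wy, k, alphas, r=1.0, tol=0.02):
    """Boolean mask over the goal grids X, Y marking points that lie on
    some LSL reachability ray for the given k (illustrative sketch)."""
    phi = 2 * k * np.pi + thf
    p = r * np.sin(thf) + wx * r * phi        # center of rotation p_LSL^k
    q = r * (1 - np.cos(thf)) + wy * r * phi  # center of rotation q_LSL^k
    ang = np.arctan2(Y - q, X - p)            # direction of each goal w.r.t. center
    mask = np.zeros_like(X, dtype=bool)
    for a in alphas:                          # sampled feasible turning angles
        w = np.arctan2(np.sin(a) + wy, np.cos(a) + wx)   # ray rotation omega
        d = (ang - w + np.pi) % (2 * np.pi) - np.pi      # wrapped angle error
        mask |= np.abs(d) < tol               # goal lies (approximately) on the ray
    return mask
\end{verbatim}
Taking the union of these masks over $k \in \{0,1\}$, together with the analogous $RSR$ masks over $k \in \{-1,-2\}$, reproduces a discretized version of the composite graph in Fig.~\ref{fig:reachability_example}g.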
As seen in Fig.~\ref{fig:reachability_example}g, there is still some region that is unreachable; thus, both $LSL$ and $RSR$ path types together also do not provide full reachability. \begin{figure}[t] \centering \includegraphics[width=0.85\columnwidth]{prop1_example_new-eps-converted-to.pdf} \caption{The optimal $4\pi$-arc paths in the IF and CF, while there is no feasible solution for $2\pi$-arc paths. The start pose $(x_0,y_0,\theta_0) = (0,0,0)$ and the goal pose $(x_f,y_f,\theta_f) = (6, 3, 7\pi/4)$. The optimal $4\pi$-arc path parameters are: $\alpha = 0.116\pi$, $\beta = 2.976$, $\gamma = 2.135\pi$.}\label{fig:prop1_example} \vspace{-6pt} \end{figure} \vspace{-6pt} \subsection{Full Reachability Conditions for the $2\pi$-arc Path Types} \label{sec:full_reachability_conditions} After acquiring the analytical expressions for generating the reachability graphs of the $2\pi$-arc $LSL$ and $RSR$ path types, we now investigate the conditions under which these paths provide full reachability. Note that full reachability is achieved if the entire space is covered by at least one of the following combinations: \begin{enumerate} \item Union of MaRA and MiRA of $LSL$, and/or \item Union of MaRA and MiRA of $RSR$, and/or \item Union of MaRA of $LSL$ and MiRA of $RSR$, and/or \item Union of MaRA of $RSR$ and MiRA of $LSL$. \end{enumerate} \begin{rem} We show by Lemma~\ref{lem:slopeproperty3} in Appendix~\ref{app:reachability} that these four cases are sufficient for reachability analysis. \end{rem} For continuity of reading, the derivations of the full reachability conditions for the above four cases are presented in Appendix~\ref{app:reachability} and the results are summarized in Table~\ref{table:full_reachability}. If all of the conditions in Table~\ref{table:full_reachability} are violated at some goal pose, then that pose is unreachable by $2\pi$-arc paths. Next, we visually verify the unreachable regions using a numerical validation. \vspace{6pt} \textbf{Numerical Validation}: The reachability conditions for $2\pi$-arc paths are shown in the last column of Table~\ref{table:full_reachability} in Appendix~\ref{app:reachability}. These reachability conditions only depend on the parameters $\theta_f$, $\theta_w$ and $v_w$. Thus, we construct a 3D reachability graph by varying $\theta_f \in [0, 2\pi)$ and $\theta_w \in [0, 2\pi)$ in steps of $\pi/100$, and $v_w \in (0,1)$ in steps of $0.1$. For any 3D parametric point, if at least one of the full reachability conditions is satisfied, then that point is colored, and the color varies with respect to $v_w$, as shown in Fig.~\ref{fig:proposition1_3D_1}. In contrast, the white area indicates the parametric space where all the reachability conditions are violated, i.e., providing no feasible solutions. This validation illustrates that full reachability is not achieved by $2\pi$-arc $LSL$ and $RSR$ paths. Figs.~\ref{fig:proposition1_3D_2} and~\ref{fig:proposition1_3D_4} show the cross sections of Fig.~\ref{fig:proposition1_3D_1} at $v_w = 0.25$~m/s and $v_w = 0.75$~m/s, respectively. It is seen that a higher $v_w$ leads to a smaller reachable space. Fig.~\ref{fig:prop1_example} shows a specific example where a $2\pi$-arc path does not exist, but a $4\pi$-arc path does. The start pose $(x_0,y_0,\theta_0) = (0,0,0)$, the goal pose $(x_f,y_f,\theta_f) = (6, 3, 7\pi/4)$, and the current moves at speed $v_w = 0.5$~m/s in the direction of $\theta_w = \pi/3$. 
It is seen that the turning angle of the second turn in the optimal $4\pi$-arc path has $\gamma = 2.135\pi > 2\pi$, which drives the vehicle to circle around at the end so that it can meet the exact goal heading with the help of the external current. \vspace{0pt} \section{Theoretical Properties of $4\pi$-arc Paths}\label{sec:main_contents} The previous section established that $2\pi$-arc $LSL$ and $RSR$ paths do not guarantee full reachability. This section presents the theoretical properties of $4\pi$-arc paths which highlight their advantages over $2\pi$-arc paths in terms of: 1) full reachability, and 2) lower time costs, while requiring similar computational complexity. First, we present the concept of a dominant path type and show an example to motivate the above properties. \begin{defn} [\textbf{Dominant Path Type}] For a given goal pose, a path type $LSL$ ($RSR$) is said to be dominant over $RSR$ ($LSL$) if it achieves a lower time cost to reach that goal pose. \end{defn} \vspace{6pt} \textit{Example}: Figs.~\ref{fig:claim1_2pi} and \ref{fig:claim1_4pi} present the reachability plots of $2\pi$-arc and $4\pi$-arc paths, respectively. These are generated for an environment which has a current of speed $v_w = 0.5$~m/s and heading angle $\theta_w = \pi/3$. The coordinates of the goal pose $(x_f,y_f)$ are varied within $[-10,10]$~m. The two subplots of each figure correspond to two different goal pose directions $\theta_f \in \{5\pi/4, 7\pi/4\}$. A region is color-coded cyan (orange) if an $LSL$ ($RSR$) path exists and is dominant over the $RSR$ ($LSL$) path type. The white color indicates that no feasible solution exists for either path type and the region is unreachable. As seen in Fig.~\ref{fig:claim1_2pi}(2), for $\theta_f = 7\pi/4$, there exists a region which is unreachable for $2\pi$-arc paths. This implies that for any goal pose inside this region, no solutions exist for $\alpha$ and $\gamma$ within their feasible ranges defined in Table~\ref{table:range_conventional}. In contrast, as seen in Fig.~\ref{fig:claim1_4pi}(2), $4\pi$-arc paths achieve full reachability. \begin{figure}[t] \centering \subfloat[Reachability graphs of the $2\pi$-arc paths for $\theta_f=5\pi/4$ and $7\pi/4$.]{ \includegraphics[width=0.90\columnwidth]{claim1_2pi-eps-converted-to.pdf}\label{fig:claim1_2pi}} \\ \vspace{-5pt} \subfloat[Reachability graphs of the $4\pi$-arc paths for $\theta_f=5\pi/4$ and $7\pi/4$.]{ \includegraphics[width=0.90\columnwidth]{claim1_4pi-eps-converted-to.pdf}\label{fig:claim1_4pi}} \vspace{0pt} \caption{An example of reachability graphs for the $2\pi$-arc and $4\pi$-arc paths. The dominant path type, $LSL$ (cyan) or $RSR$ (orange), is shown in the corresponding area. White indicates an unreachable area.} \label{fig:claim1_both}\vspace{-6pt} \end{figure} Furthermore, the dominant path type (i.e., $LSL$ or $RSR$) for the same region could be different when using the $2\pi$-arc paths and $4\pi$-arc paths, as seen in Figs.~\ref{fig:claim1_2pi}(1) and \ref{fig:claim1_4pi}(1) corresponding to $\theta_f = 5\pi/4$. Since $4\pi$-arc solutions already include the $2\pi$-arc solutions, the above observation implies that there exist goal poses for which $4\pi$-arc paths can achieve even lower time costs as compared to the $2\pi$-arc paths. \vspace{6pt} \textbf{Roadmap of this Section:} In the following subsections, we present four theorems to highlight the theoretical properties of $4\pi$-arc $LSL$ and $RSR$ paths and compare them with the corresponding $2\pi$-arc paths. 
First, Theorem~\ref{claim1} proves that both the $LSL$ and $RSR$ $4\pi$-arc paths provide full reachability, unlike the $2\pi$-arc paths. Then, Theorem~\ref{claim2} and Corollary~\ref{claim2_cor} show that the computation workload required to get a solution using the $4\pi$-arc paths is the same as that using the $2\pi$-arc paths. Next, Theorem~\ref{claim3} compares the optimality of $4\pi$-arc and $2\pi$-arc path solutions and shows that the optimal trajectory provided by $4\pi$-arc paths is either shorter in time or the same as that provided by $2\pi$-arc paths. Finally, Theorem~\ref{rem:over_4pi} proves that $\alpha,\gamma\in[0,4\pi)$ is sufficient for optimality and increasing the range of these arc segments beyond $4\pi$ does not lead to a shorter-time path. \vspace{0pt} \subsection{Full Reachability of $4\pi$-arc Paths}\label{4pifullreach} The following theorem relates to the reachability of the $4\pi$-arc solutions for the $LSL$ and $RSR$ path types. \begin{thm}[\textbf{Full reachability of $4\pi$-arc paths}]\label{claim1} The $4\pi$-arc $LSL$ and $RSR$ paths individually provide full reachability. \end{thm} \begin{proof} Full reachability implies the existence of a solution for any goal pose. We prove this for the $LSL$ and $RSR$ paths below. \begin{itemize} \item \textit{$4\pi$-arc $LSL$ paths}: Consider $k = 1$. From Table~\ref{table:range_conventional}, $\alpha_{inf} = 0$ and $\alpha_{sup} = 2\pi + \theta_f > 2\pi$. Using Lemma~\ref{lem:swipe}, we construct the reachable space for $k = 1$ by rotating the ray (\ref{eq:LSL_reachability}) around $(p_{LSL}^1, q_{LSL}^1)$ by varying $\alpha$ from $0$ to $2\pi + \theta_f$. In this process, the ray (\ref{eq:LSL_reachability}) swipes in the anticlockwise direction from $\omega_{LSL}^1(0)$ to $\omega_{LSL}^1(2\pi + \theta_f)$. However, when $\alpha$ reaches $2\pi < 2\pi + \theta_f$, the rotation of the ray (\ref{eq:LSL_reachability}) becomes $\omega_{LSL}^1(2\pi) = \omega_{LSL}^1(0) = \atantwo(w_y, 1+w_x) \Mod{2\pi}$, which implies that the ray comes back to the start again and continues swiping thereafter. This means that for $k = 1$, the whole space is covered and full reachability is obtained. Now consider $k = 2$. From Table~\ref{table:range_conventional}, $\alpha_{inf} = \theta_f$ and $\alpha_{sup} = 4\pi$. Following the same process as for the $k = 1$ case, one can see that the swiped area for $k = 2$ also covers the whole space and full reachability is obtained. In summary, $4\pi$-arc $LSL$ paths guarantee full reachability. (Note: for $k = 0$ and $3$, the swiped area does not cover the whole space, hence they do not provide full reachability.) \vspace{6pt} \item \textit{$4\pi$-arc $RSR$ paths}: Consider $k = -2$. From Table~\ref{table:range_conventional}, $\alpha_{inf} = 0$ and $\alpha_{sup} = 4\pi - \theta_f > 2\pi$. Using Lemma~\ref{lem:swipe}, as $\alpha$ grows, the ray (\ref{eq:RSR_reachability}) rotates around $(p_{RSR}^{-2}, q_{RSR}^{-2})$ in the clockwise direction from $\omega_{RSR}^{-2}(0)$ to $\omega_{RSR}^{-2}(4\pi - \theta_f)$. During this process, when $\alpha$ reaches $2\pi < 4\pi - \theta_f$, the rotation of the ray (\ref{eq:RSR_reachability}) becomes $\omega_{RSR}^{-2}(2\pi) = \omega_{RSR}^{-2}(0) = \atantwo(w_y, 1 + w_x) \Mod{2\pi}$, which implies that it comes back to the start again and continues swiping thereafter. This means that for $k = -2$, the whole space is covered and full reachability is obtained. Now consider $k = -3$. From Table~\ref{table:range_conventional}, $\alpha_{inf} = 2\pi-\theta_f$ and $\alpha_{sup} = 4\pi$. 
Following the same process as for the $k = -2$ case, one can see that the swiped area for $k = -3$ also covers the whole space and full reachability is obtained. In summary, $4\pi$-arc $RSR$ paths guarantee full reachability. (Note: for $k = -1$ and $-4$, the swiped area does not cover the whole space, hence they do not provide full reachability.) \end{itemize} Hence proved. \end{proof} \vspace{-12pt} \subsection{Time Costs of $4\pi$-arc $LSL$ and $RSR$ Paths}\label{4pitimecost} Now, we analyse the time costs of $4\pi$-arc $LSL$ and $RSR$ paths and compare them to the corresponding $2\pi$-arc paths. Based on (\ref{eq:LSL_conditions}) and substituting $v = 1$, the time cost for an $LSL$ path type is given as \begin{equation} T = r(\alpha + \gamma) + \beta = 2k\pi r + r\theta_f + \beta. \end{equation} Similarly, based on (\ref{eq:RSR_conditions}), the time cost for an $RSR$ path type is given as \begin{equation} T = r(\alpha + \gamma) + \beta = -2k\pi r - r\theta_f + \beta. \end{equation} From this point on, let us denote $T_k$ and $\beta_k$ as the values of $T$ and $\beta$ for a given $k$, i.e., $T_k = 2k\pi r + r\theta_f + \beta_k$ for an $LSL$ path and $T_k = -2k\pi r - r\theta_f + \beta_k$ for an $RSR$ path. \vspace{12pt} \begin{thm}\label{claim2} The following are true: \begin{itemize} \item $T_0<T_{1}<T_2<T_{3}$, \ for $4\pi$-arc $LSL$ paths. \item $T_{-1}<T_{-2}<T_{-3}<T_{-4}$, \ for $4\pi$-arc $RSR$ paths. \end{itemize} \end{thm} \begin{proof} Let us denote $\Delta T_k$ as the difference in time cost $T_k$ between two consecutive $k$ values, i.e., for $LSL$ path type, \begin{equation}\label{eq:LSL_delta_T} \Delta T_k \triangleq T_{k+1} - T_k = 2\pi r + \beta_{k+1} - \beta_k, \ k = 0, 1, 2, \end{equation} and for $RSR$ path type, \begin{equation}\label{eq:RSR_delta_T} \Delta T_k \triangleq T_{k-1} - T_k = 2\pi r + \beta_{k-1} - \beta_k, \ k = -1, -2, -3. \end{equation} Consider $4\pi$-arc $LSL$ paths. To prove the theorem, we show that $\Delta T_k > 0, \forall k=0,1,2$. Fig.~\ref{fig:claim2_LSL} shows the feasible $4\pi$-arc $LSL$ paths in the CF, corresponding to $k$ (shown in solid blue) and $k+1$ (shown in solid red), to reach the goal pose $(x_f, y_f, \theta_f)$. These paths have the time costs $T_k$ and $T_{k+1}$, respectively. While these two paths share the same start pose, due to different travel times, the corresponding goal poses in the CF become $G_k = (x_f - w_x T_k, y_f - w_y T_k, \theta_f)$ and $G_{k+1} = (x_f - w_x T_{k+1}, y_f - w_y T_{k+1}, \theta_f)$, where $\norm{G_{k+1} - G_k} = \sqrt{w_x^2 \Delta T_k^2 + w_y^2 \Delta T_k^2} = v_w \abs{\Delta T_k}$. Since an $LSL$ path is comprised of an $\alpha$ arc, a straight line and a $\gamma$ arc, one can equivalently combine the two arcs followed by the straight line to reach the same goal pose, as shown by the dotted line paths in Fig.~\ref{fig:claim2_LSL}, corresponding to $k$ (shown in dotted blue) and $k+1$ (shown in dotted red). According to (\ref{eq:LSL_conditions}), $\alpha + \gamma = 2k \pi + \theta_f$, so if $k$ is increased by $1$, it adds a full $2\pi$ rotation to this combined $\alpha$ and $\gamma$ arc. This implies that after combining these arcs, the blue and red dotted straight lines share the same start point $O_k \in \mathbb{R}^2$. Note that the solid straight lines are parallel to the corresponding dotted straight lines, with lengths $\beta_k$ and $\beta_{k+1}$, respectively. 
\begin{figure}[!t] \centering \subfloat[$LSL$ path type]{ \includegraphics[width=.48\columnwidth]{thm_proof_LSL-eps-converted-to.pdf}\label{fig:claim2_LSL}} \subfloat[$RSR$ path type]{ \includegraphics[width=.5\columnwidth]{thm_proof_RSR-eps-converted-to.pdf}\label{fig:claim2_RSR} }\\ \caption{Illustrative figures to show $\Delta T_k > 0, \forall k$ in Theorem~\ref{claim2}.}\label{fig:claim2}\vspace{-6pt} \end{figure} Now consider the triangle formed by $O_k, G_k$ and $G_{k+1}$, shown by the shaded region in Fig.~\ref{fig:claim2_LSL}, where $\norm{O_k - G_{k}} = \beta_k$ and $\norm{O_k - G_{k+1}} = \beta_{k+1}$. Next, we consider three cases: \vspace{6pt} \begin{enumerate} \item $\Delta T_k > 0$: In this case, $\norm{G_{k+1} - G_k} = v_w \Delta T_k$. Using the triangle inequalities, we get $\abs{\beta_{k+1} - \beta_k} < v_w \Delta T_k$. By (\ref{eq:LSL_delta_T}), $\beta_{k+1} - \beta_k = \Delta T_k - 2\pi r$. Hence, $\abs{\Delta T_k - 2\pi r} < v_w \Delta T_k$ $\implies$ $\frac{2\pi r}{1+v_w} < \Delta T_k < \frac{2\pi r}{1-v_w}$. Note that if $O_k, G_k$ and $G_{k+1}$ fall on one line, then $\abs{\beta_{k+1} - \beta_k} = v_w \Delta T_k$, in which case $\Delta T_k = \frac{2\pi r}{1+v_w}$ or $\frac{2\pi r}{1-v_w}$. Therefore, the feasible range of $\Delta T_k$ is \vspace{-3pt} \begin{equation}\label{eq:Delta_Tk_range} \boxed{\Delta T_k \in \bigg[\frac{2\pi r}{1+v_w}, \frac{2\pi r}{1-v_w} \bigg].} \end{equation} For example, with $r = 1$ and $v_w = 0.5$, each unit increase in $k$ adds between $4\pi/3 \approx 4.19$ and $4\pi \approx 12.57$ time units to the path. \vspace{6pt} \item $\Delta T_k < 0$: In this case, $\norm{G_{k+1} - G_k} = -v_w \Delta T_k$. Then, based on the triangle inequalities, $\abs{\beta_{k+1} - \beta_k} < -v_w \Delta T_k$. Again substituting $\beta_{k+1} - \beta_k = \Delta T_k - 2\pi r$ from (\ref{eq:LSL_delta_T}), we get $\frac{2\pi r}{1-v_w} < \Delta T_k < \frac{2\pi r}{1+v_w}$. However, since $0 < v_w < 1$, this inequality is invalid. Thus, $\Delta T_k < 0$ is impossible. \vspace{6pt} \item $\Delta T_k = 0$: In this case, $\norm{G_{k+1} - G_k} = 0$. Then, $\abs{\beta_{k+1} - \beta_k} = 0$ $\implies$ $\Delta T_k - 2\pi r = 0$ $\implies$ $\Delta T_k = 2\pi r$, which is a contradiction; hence $\Delta T_k = 0$ is impossible. \end{enumerate} \vspace{6pt} Thus, $\Delta T_k > 0, \forall k$, and its bounds are given in (\ref{eq:Delta_Tk_range}). Similarly, for $4\pi$-arc $RSR$ paths, the bounds of $\Delta T_k$ can be derived using Fig.~\ref{fig:claim2_RSR}, leading to the same bounds; the derivation is omitted here. Hence proved. \end{proof} The following corollary shows that in order to obtain the minimum-time solutions using $4\pi$-arc paths, it is sufficient to use $k \in \{0, 1\}$ for the $LSL$ path type and $k \in \{-1, -2\}$ for the $RSR$ path type; the remaining $k$ values are not needed. \vspace{6pt} \begin{cor}\label{claim2_cor} A minimum-time solution for the $4\pi$-arc paths can be obtained by using \begin{itemize} \item $k \in \{0,1\}$ for $LSL$ paths and \item $k \in \{-1,-2\}$ for $RSR$ paths. \end{itemize} \end{cor} \begin{proof} Theorem~\ref{claim2} implies that based on time costs, the preferred solutions follow the order $k=0,1,2,3$ for $LSL$ paths and $k=-1,-2,-3,-4$ for $RSR$ paths. Theorem~\ref{claim1} suggests that for $LSL$ paths, $k = 0$ solutions do not provide full reachability; however, full reachability can be achieved by $k=1$ solutions. Similarly, for $RSR$ paths, $k = -1$ solutions do not provide full reachability; however, full reachability can be achieved by $k=-2$ solutions. 
Thus, in order to get full reachability and to obtain minimum-time paths, one must solve only for $k \in \{0,1\}$ for $LSL$ paths, and $k \in \{-1,-2\}$ for $RSR$ paths. Hence proved. \end{proof} \begin{rem}Corollary~\ref{claim2_cor} implies that the computation workload required to get a solution using the $4\pi$-arc paths is the same as that using the $2\pi$-arc paths. \end{rem} \begin{cor}\label{claim2_cor2} A minimum-time $4\pi$-arc $LSL$ or $RSR$ solution must satisfy $\alpha + \gamma < 4\pi$. \end{cor} \begin{proof} Using Corollary~\ref{claim2_cor} and the fact that $\theta_f < 2\pi$, substituting $k=1$ into (\ref{eq:LSL_conditions}) and $k=-2$ into (\ref{eq:RSR_conditions}) directly yields the result. Hence proved. \end{proof} \begin{rem}\label{rem:parameters} As seen from Table~\ref{table:range_conventional}, the feasible ranges of the parameters $\alpha$ and $\gamma$ for the $4\pi$-arc $LSL$ ($RSR$) paths for $k = 0$ ($k = -1$) are the same as those of the corresponding $2\pi$-arc paths. However, for $k = 1$ ($k = -2$), the parameter ranges for $4\pi$-arc $LSL$ ($RSR$) paths form supersets of the corresponding ranges of the $2\pi$-arc paths. \end{rem} \vspace{6pt} \begin{thm}\label{claim3} The time costs of $4\pi$-arc path solutions are lower than or the same as those of the $2\pi$-arc path solutions. \end{thm} \begin{proof} First, consider the case when both $2\pi$-arc $LSL$ and $RSR$ solutions exist for a given goal pose. Remark~\ref{rem:parameters} indicates that any valid $2\pi$-arc path solution is also a valid $4\pi$-arc path solution. Hence, in this case the time cost of the $4\pi$-arc path solution is the same as that of the $2\pi$-arc path solution. Second, consider the case when neither of the $2\pi$-arc $LSL$ and $RSR$ solutions exists for a given goal pose. In this case, Theorem~\ref{claim1} guarantees that $4\pi$-arc $LSL$ and $RSR$ solutions exist for that goal pose. Third, consider the case when only one of the $2\pi$-arc $LSL$ or $RSR$ path solutions exists for a given goal pose, i.e., the other path type does not provide a solution. Thus, the dominant solution is the only existing path type. However, from Theorem~\ref{claim1}, for $4\pi$-arc paths both $LSL$ and $RSR$ paths exist and the dominant solution is selected from these two path types with the minimum time cost. Thus, due to the existence of an extra solution provided by the $4\pi$-arc paths, the time cost of the dominant path could be lower than or the same as that of the single solution provided by the $2\pi$-arc paths. The example below validates this case. Hence proved. \end{proof} \begin{figure}[t] \centering \subfloat[Cost map of $2\pi$-arc paths.]{ \includegraphics[width=0.215\textwidth]{example_parta-eps-converted-to.pdf}\label{fig:example_2pi}} \subfloat[Cost map of $4\pi$-arc paths.]{ \includegraphics[width=0.245\textwidth]{example_partb-eps-converted-to.pdf}\label{fig:example_4pi}} \\ \subfloat[The $2\pi$-arc and $4\pi$-arc path solutions in the IF and CF. The start pose $(x_0, y_0,\theta_0)=(0,0,0)$ and the goal pose $(x_f, y_f, \theta_f)=(-1, 4, \pi/4)$. 
The optimal $2\pi$-arc path has: $\alpha=1.890\pi, \beta=12.691$ and $\gamma=1.860\pi$; and the optimal $4\pi$-arc path has: $\alpha=0.206\pi, \beta=6.143$ and $\gamma=2.044\pi$.]{ \includegraphics[width=0.48\textwidth]{example_partc-eps-converted-to.pdf}\label{fig:example_paths}} \caption{An example to illustrate the result of Theorem~\ref{claim3} that the $4\pi$-arc paths provide faster solutions than the $2\pi$-arc paths.} \vspace{-6pt} \label{fig:claim3_example1} \end{figure} \vspace{0pt} \textit{Example}: We show an example where the $4\pi$-arc paths provide faster (i.e., lower time cost) solutions as compared to the $2\pi$-arc paths. We first construct the time cost map for a fixed set of $\theta_f$, $v_w$ and $\theta_w$, where each $(x_f, y_f)$ is assigned the time cost of the dominant path between the $LSL$ and $RSR$ paths. Fig.~\ref{fig:claim3_example1} shows the example generated for an environment with a current of $v_w = 0.5$~m/s and $\theta_w = \pi$. For constructing the time cost map, the goal poses are varied within $x_f, y_f \in [-10,10]$~m with a fixed heading angle $\theta_f = \pi/4$. Figs.~\ref{fig:example_2pi} and \ref{fig:example_4pi} show the time cost maps for $2\pi$-arc paths and $4\pi$-arc paths, respectively. The color code indicates the value of the time cost. Clearly, there exist many goal poses where $4\pi$-arc paths provide significantly lower time costs. Next, we pick a goal pose where $4\pi$-arc paths provide a lower time cost, say $(x_f,y_f,\theta_f) = (-1, 4, \pi/4)$. Then, we draw the optimal $2\pi$-arc and $4\pi$-arc paths in the IF and the CF, as shown in Fig.~\ref{fig:example_paths}. The $2\pi$-arc path follows the $RSR$ path type, and requires a total time cost of $24.47$~s. In comparison, the $4\pi$-arc path follows the $LSL$ path type and the total time cost is reduced to $13.21$~s. This is because on the $2\pi$-arc path, the vehicle has to travel a longer straight-line segment that is almost opposite to the direction of the current; hence its actual speed in the inertial frame becomes slower. On the other hand, the $4\pi$-arc path first makes a small left turn, followed by a much shorter straight-line segment; then, it starts circling for over $2\pi$ while letting the current help it reach the goal. \vspace{6pt} \begin{thm}\label{rem:over_4pi} The time cost $T$ cannot be reduced further by extending the ranges of the arc segments ($\alpha$ and $\gamma$) beyond $4\pi$. \end{thm} \begin{proof} Suppose the ranges of $\alpha$ and $\gamma$ are defined over $[0, 2n\pi)$, where $n > 2$ and $n \in \mathbb{N}^+$. Then, using the same procedure as described in Section~\ref{sec:method}, we get a larger set of feasible values of $k$, s.t. for $LSL$ paths, $k \in \{0,1,\ldots,2n-1\}$, and for $RSR$ paths, $k \in \{-1, -2,\ldots, -2n\}$. Then, one can derive the feasible ranges for $\alpha$ and $\gamma$. Consider a $2n\pi$-arc $LSL$ path, where $\alpha \in [0, 2n\pi)$ and $\gamma \in [0, 2n\pi)$. We examine only the cases $k=0$ and $k=1$, which turn out to be sufficient. \begin{itemize} \item $k = 0$ (i.e., $\alpha + \gamma = \theta_f < 2\pi$): Now, $\gamma \geq 0$ $\implies$ $\alpha \leq \theta_f$. Similarly, $\alpha \geq 0$ $\implies$ $\gamma \leq \theta_f$. Thus, the feasible range for both $\alpha$ and $\gamma$ is $[0,\theta_f]$. \item $k = 1$ (i.e., $\alpha + \gamma = 2\pi + \theta_f < 4\pi$): Again, $\gamma \geq 0$ $\implies$ $\alpha \leq 2\pi + \theta_f$. Similarly, $\alpha \geq 0$ $\implies$ $\gamma \leq 2\pi + \theta_f$. 
Thus, the feasible range for both $\alpha$ and $\gamma$ is $[0, 2\pi + \theta_f]$. \end{itemize} The above analysis indicates that for $2n\pi$-arc $LSL$ paths, if $n > 2$, the feasible ranges of $\alpha$ and $\gamma$ for $k = 0, 1$ are the same as the corresponding ones for $4\pi$-arc $LSL$ paths, as presented in Table~\ref{table:range_conventional}. Similarly, one can verify that for $2n\pi$-arc $RSR$ paths, if $n > 2$, the feasible ranges of $\alpha$ and $\gamma$ for $k = -1, -2$ are also the same as the corresponding ones for $4\pi$-arc $RSR$ paths. Since the feasible ranges of $\alpha$ and $\gamma$ for $2n\pi$-arcs are the same as those for $4\pi$-arc paths, by Theorem~\ref{claim1} full reachability is achieved using $k = 0, 1$ for $LSL$ paths and $k = -1, -2$ for $RSR$ paths. Further, by Theorem~\ref{claim2}, $\Delta T_k > 0, \forall k$. Therefore, for $n > 2$, we only need to search over $k = 0,1$ for $LSL$ paths and $k = -1,-2$ for $RSR$ paths to get the minimum-time path. This implies that the time cost $T$ is not reduced by extending the feasible ranges of $\alpha$ and $\gamma$ beyond $4\pi$. Hence proved. \end{proof} \vspace{0pt} \section{Results and Discussion}\label{sec:results} This section presents the results of the proposed approach, which uses the $4\pi$-arc $LSL$ and $RSR$ paths, in comparison to the Dubins approach, which uses the six $2\pi$-arc paths. We discuss the performance of these two approaches first in an environment with a static current and then in an environment with a dynamically changing current. We conduct Monte Carlo simulations as needed for statistical performance evaluation. The simulations were done on a computer with a $2.4$~GHz processor and $8$~GB RAM. In order to obtain a solution using the Dubins approach, the transcendental functions are solved using the function \textit{fsolve} in MATLAB. On average, the Dubins approach took $\sim8.72$~s to get a solution with $100$ initial guesses, while the $4\pi$-arc paths approach took only $\sim0.64$~ms, which is orders of magnitude faster than the Dubins computation. \subsection{Comparison of $4\pi$-arc $LSL$ and $RSR$ solutions with Dubins solutions in a static current environment} \label{app:dubins_comparison} \vspace{6pt} First, we considered an environment with a static current where the planning is done offline. This comparative study is presented using two metrics: a) the solution quality (i.e., the travel time cost) and b) the total time cost (i.e., the offline computation time cost plus the travel time cost). \textit{Simulation Setup}: The start pose is fixed at $(x_0, y_0, \theta_0)=(0,0,0)$. Then, $80$ different goal positions are distributed uniformly on the boundaries of concentric squares at distances $R \in \{5, 10, 50, 100, 200\}$~m around the origin. For each goal position, $6$ different heading angles $\theta_f \in \{ \frac{m\pi}{3}, m = 0,\ldots, 5\}$ are considered. This leads to a total of $480$ goal poses. The vehicle and current speeds are taken to be $v=1$~m/s and $v_w=0.5$~m/s, respectively, where $6$ different current heading angles $\theta_w \in \{ \frac{m\pi}{3}, m = 0,\ldots, 5\}$ are considered, thus leading to a total number of $2880$ runs. For each run, the travel time cost and computation time cost are obtained for the two approaches. Fig.~\ref{fig:comparison_Dubins} shows the savings obtained with the proposed $4\pi$-arc path solutions as compared to the Dubins solutions. 
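For reference, the goal-position grid of this setup can be generated as in the Python sketch below. This is a minimal reconstruction assuming an equal split of the $80$ positions across the five squares ($16$ points per square); the exact placement used in our simulations may differ.
\begin{verbatim}
import numpy as np

def square_boundary_goals(radii=(5, 10, 50, 100, 200), per_square=16):
    """Goal positions spread uniformly along the boundaries of concentric
    squares (16 points x 5 squares = 80 positions; our reconstruction)."""
    goals = []
    for R in radii:
        for t in np.linspace(0, 8 * R, per_square, endpoint=False):
            side, u = int(t // (2 * R)), t % (2 * R)  # side length 2R, perimeter 8R
            if side == 0:   goals.append(( R - u,  R))    # top edge
            elif side == 1: goals.append((-R,  R - u))    # left edge
            elif side == 2: goals.append((-R + u, -R))    # bottom edge
            else:           goals.append(( R, -R + u))    # right edge
    return goals

headings = [m * np.pi / 3 for m in range(6)]   # 6 goal headings -> 480 goal poses
\end{verbatim}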
Fig.~\ref{fig:comparison_Dubins_travel} shows the savings in travel time, computed as $T_{Dubins} - T_{4\pi}$, where $T_{Dubins}$ and $T_{4\pi}$ refer to the travel time costs of Dubins paths and $4\pi$-arc paths, respectively. As seen in the figure, in more than $50\%$ of the cases, the travel time costs of $4\pi$-arc path solutions match those of the Dubins solutions. Although the performance of Dubins paths is better than that of the $4\pi$-arc paths in the remaining cases, the difference in travel time cost is not significant. Fig.~\ref{fig:comparison_Dubins_total} shows the total time cost obtained by adding the computation time costs taken by the two approaches to their respective travel time costs. It is seen that in more than $90\%$ of the cases the total time of the $4\pi$-arc solutions is lower than that of the Dubins solutions; thus, $4\pi$-arc solutions yield a superior performance once the computation times are considered. Based on these trends, it is observed that although Dubins solutions are suitable for applications requiring offline planning, they do not provide a significant advantage over the $4\pi$-arc $LSL$ and $RSR$ solutions in terms of travel time costs. Furthermore, when computation times are added, the Dubins solutions provide worse total time costs in a significant majority of cases. Moreover, as discussed in Section~\ref{changingcurrents}, for applications requiring online planning in dynamic current environments, the high computation times of Dubins solutions cause significant vehicle drifts, thus resulting in longer sub-optimal trajectories which sometimes do not even converge to the goal pose. In such situations, $4\pi$-arc paths lead to fast and reliable solutions with negligible drifts, allowing the vehicle to reach the goal pose precisely in a shorter time. \begin{figure}[t] \centering \subfloat[Savings in travel time: $T_{Dubins} - T_{4\pi}$.]{ \includegraphics[width=0.50\columnwidth]{comparison_Dubins_travel-eps-converted-to.pdf}\label{fig:comparison_Dubins_travel}} \subfloat[Savings in total time after including computation time.]{ \includegraphics[width=0.50\columnwidth]{comparison_Dubins_total-eps-converted-to.pdf}\label{fig:comparison_Dubins_total}} \caption{Time savings of the $4\pi$-arc solutions w.r.t. the Dubins solutions over $2880$ different simulation runs in a static current environment.} \label{fig:comparison_Dubins} \end{figure} \begin{figure*}[!t] \centering \subfloat[An example of path replanning under changing current. Start pose $(x_0,y_0,\theta_0)=(0,0,0)$ and goal pose $(x_f,y_f,\theta_f)=(5, 8.5, 3\pi/4)$. Initially, the current has $v_w = 0.5$~m/s and $\theta_w = \pi$, which changed at time $3.2$~s to a new current with $v_w = 0.75$~m/s and $\theta_w = 3\pi/2$. The radius of the precision circle is $1$~m.]{ \includegraphics[width=1\textwidth]{fig11_parta-eps-converted-to.pdf}\label{fig:replan_results}}\\ \subfloat[An example to show the effect of the net velocity on the vehicle drift. Start pose $(x_0,y_0,\theta_0)=(0,0,0)$ and goal pose $(x_f,y_f,\theta_f)=(5, 8.5, 3\pi/4)$. 
Initially, the current has $v_w = 0.5$~m/s and $\theta_w = 3\pi/2$, which changed at time $3.72$~s to a new current.]{ \includegraphics[width=1\textwidth]{fig11_partb-eps-converted-to.pdf}\label{fig:drift_results}}\\ \vspace{3pt} \includegraphics[width=1\textwidth]{fig11_legend_new-eps-converted-to.pdf}\label{fig:legend_results} \caption{Illustrative examples of replanning under changing current, and the effect of $v_{net}$ on the vehicle drift.} \label{fig:traj_results} \end{figure*} \subsection{Effect of a Change in Current}\label{sec:res_realtime} During path execution, a change in the current's speed or heading could cause the vehicle to deviate from its original path if left uncorrected. Hence, it is necessary to replan online upon detection of a change in current. However, as explained in Section~\ref{sec:intro}, using the Dubins solution to regenerate the path to reach the goal pose requires a considerable amount of computation time to solve the transcendental functions, during which the vehicle can drift noticeably. In particular, the vehicle drifts along the direction of the net velocity of the vehicle and the current at that moment. To account for such drifts, the replanning is done by using a predicted position of the vehicle after the drift as the new start pose. This predicted position is computed by adding a translation (i.e., the product of the average computation time of $\sim8.72$~s and the net velocity) to the vehicle pose. Note that the predicted position is needed only for the Dubins solution, while it is unnecessary for the $4\pi$-arc path solution due to its negligible computation time. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{fig12-eps-converted-to.pdf}\\ \vspace{3pt} \includegraphics[width=1\textwidth]{fig11_legend-eps-converted-to.pdf} \caption{An example to show the effect of precision on planning time. Start pose $(x_0,y_0,\theta_0)=(0,0,0)$ and goal pose $(x_f,y_f,\theta_f)=(2, 8, \pi/2)$. Initially, the current has $v_w = 0.75$~m/s and $\theta_w = 0$, which changed at time $3.72$~s to a new current with $v_w = 0.65$~m/s and $\theta_w = \pi$.} \label{fig:prec_results} \end{figure*} The vehicle is considered to be successful in reaching the goal if it: 1) arrives within a precision circle of radius $1$~m centered at the goal, and 2) achieves a heading within $\theta_f\pm5^{\circ}$. Fig.~\ref{fig:replan_results} shows an illustrative example of the effect of current on replanning and the resulting total travel times using both approaches. Fig.~\ref{fig:replan_results}(1) shows the initially planned path using the Dubins approach from the start pose $(x_0, y_0, \theta_0) = (0,0,0)$ to the goal pose $(x_f, y_f, \theta_f) = (5,8.5,3\pi/4)$. The environment was considered to have an initial current of speed $v_w=0.5$~m/s and direction $\theta_w=\pi$. After the vehicle traveled for $3.2$~s and reached a point $A$, the current speed changed to $v_w=0.75$~m/s and its direction changed to $\theta_w=3\pi/2$, which forced the vehicle to replan a new path \textit{in situ}. Fig.~\ref{fig:replan_results}(2) shows the replanning process using the Dubins approach. During replanning, the vehicle drifts along the net velocity $\mathbf{v}_{net} = \mathbf{v} + \mathbf{v}_w$, where $\mathbf{v} = (v\cos{\theta}, v\sin{\theta})$ and $\mathbf{v}_w = (v_w \cos{\theta_w}, v_w \sin{\theta_w})$. The vehicle drift is shown by the green dashed line in the figure.
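A minimal sketch of this drift prediction, assuming straight-line drift at constant $\mathbf{v}_{net}$ and an unchanged heading during the computation window (the function and variable names below are ours, for illustration only):

\begin{verbatim}
import math

def predict_drifted_pose(x, y, theta, v, v_w, theta_w, t_compute=8.72):
    # Net velocity v_net = v + v_w (vehicle velocity plus current).
    vx = v * math.cos(theta) + v_w * math.cos(theta_w)
    vy = v * math.sin(theta) + v_w * math.sin(theta_w)
    # Predicted position (the point B-hat): translate the pose by
    # the drift accumulated over the average replanning time.
    return (x + vx * t_compute, y + vy * t_compute, theta)
\end{verbatim}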
The points $B$ and $\hat{B}$ denote the actual and the predicted position of the vehicle after replanning is over, respectively. Due to the difference between the predicted and the actual position, instead of executing the replanned path from the predicted position $\hat{B}$, marked by the blue dotted line, the vehicle actually traveled from point $B$, marked by the solid blue line. The vehicle then converged to the goal with its end-point lying inside the precision circle with an acceptable heading error. The total time taken by the vehicle to reach the goal is obtained by adding the initial execution time of $\sim3.2$~s before the change of current, the replanning time of $\sim8$~s, and the execution time of $\sim40.08$~s along the replanned path, which leads to a total travel time of $\sim51.28$~s. In comparison, Fig.~\ref{fig:replan_results}(3) shows the replanning process using the $4\pi$-arc $LSL$ and $RSR$ paths approach. Due to the negligible computation time, the points $A$, $B$ and $\hat{B}$ coincided, thus resulting in a much faster total travel time of $\sim33.57$~s. Also, the goal pose was achieved more accurately compared to the Dubins solution. This example clearly highlights the benefits of the proposed rapid solution using the $4\pi$-arc paths over the Dubins approach. \vspace{-6pt} \subsection{Effect of $\mathbf{v}_{net}$} During replanning, the vehicle drifts along the direction of $\mathbf{v}_{net}$, with a magnitude equal to $v_{net}\in\mathbb{R}^+$ times the computation time. To examine the effect of $\mathbf{v}_{net}$ on the vehicle drift, we tested three scenarios over a range of $v_{net}$, and the results are shown in Fig.~\ref{fig:drift_results}(1)$-$(3). The start pose, the goal pose and the initial environmental current are set to be the same as those in Section~\ref{sec:res_realtime}; the replanning occurs due to a change of current after $3.2$~s, when the vehicle has reached point $A$. As seen in Fig.~\ref{fig:drift_results}(1)$-$(3), the $4\pi$-arc path solution generates trajectories with negligible drifts, while the Dubins solution results in significant vehicle drifts of lengths $0.875$~m for low $v_{net}=0.112$~m/s, $3.56$~m for medium $v_{net}=0.432$~m/s and $7.39$~m for high $v_{net}=0.924$~m/s. In all cases, the high computation time incurred by the Dubins solution leads to a higher overall execution time. In particular, even for the scenario with low $v_{net}$ shown in Fig.~\ref{fig:drift_results}(1), where the drift is very close to the vehicle's initial state and within its turning radius, $4\pi$-arc paths provide a faster solution than the Dubins solution because of the high computation time of the latter. \begin{figure*}[!t] \centering \subfloat[Savings for naval application]{ \includegraphics[width=0.40\textwidth]{res_time_saving_AUV1-eps-converted-to.pdf}\label{fig:naval_results}} \subfloat[Savings for aerial application]{ \includegraphics[width=0.40\textwidth]{res_time_saving_UAV-eps-converted-to.pdf}\label{fig:aerial_results}} \caption{Monte Carlo simulation results: Time savings of the $4\pi$-arc solutions w.r.t. the Dubins solutions.} \label{fig:mc_results} \vspace{-6pt} \end{figure*} \subsection{Effect of the Size of Precision Circle} Next, we study the effect of the size of the precision circle, centered at the goal, on the total travel time using the two approaches. The vehicle is assumed to keep replanning until it converges inside the precision circle with an acceptable heading error.
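The convergence test itself is simple; a minimal sketch under the stated success criteria (names are ours, for illustration only):

\begin{verbatim}
import math

def reached_goal(x, y, theta, xf, yf, theta_f,
                 r_prec=1.0, tol=math.radians(5)):
    # Success: end point inside the precision circle AND heading
    # error within the tolerance (error wrapped to [-pi, pi)).
    inside = math.hypot(x - xf, y - yf) <= r_prec
    err = (theta - theta_f + math.pi) % (2 * math.pi) - math.pi
    return inside and abs(err) <= tol
\end{verbatim}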
Fig.~\ref{fig:prec_results} shows the results obtained by varying the radius of the precision circle as: $1.5$~m, $1$~m and $0.5$~m. The start pose is $(x_0, y_0, \theta_0) = (0,0,0)$ and the goal pose is $(x_f, y_f, \theta_f) = (2,8,\pi/2)$. The environment was considered to have an initial current of speed $v_w=0.75$~m/s and direction $\theta_w=0$, which changed to $v_w=0.65$~m/s and $\theta_w=\pi$ at time $3.72$~s. As seen in Fig.~\ref{fig:prec_results}, after the change of current, the Dubins approach faces serious difficulty in converging to the goal, requiring several replannings as the precision radius decreases, while the $4\pi$-arc approach converged easily every time with a single replanning. Specifically, for precision radii of $1.5$~m, $1$~m and $0.5$~m, the Dubins approach required $2$, $3$ and $4$ replannings before convergence to the goal; accordingly, the total travel times to reach the goal were $77.62$~s, $110.14$~s and $146.02$~s, respectively. As expected, the total travel time of the $4\pi$-arc solution was $21.23$~s, which is much smaller than that of the Dubins solution, and it was unaffected by the shrinking precision radius. This is due to the significantly lower replanning time of the $4\pi$-arc paths, which allows them to reach the goal with high accuracy in shorter times. \vspace{-6pt} \subsection{Comparison of $4\pi$-arc $LSL$ and $RSR$ solutions with Dubins solutions in a dynamic current environment} \label{changingcurrents} Now, we present a comparative evaluation of the $4\pi$-arc $LSL$ and $RSR$ solutions with Dubins solutions in a dynamic current environment. The performance of the two approaches is evaluated statistically using Monte Carlo simulations which cover a wide range of environmental conditions, considering realistic vehicle properties and sensing capabilities. The simulation setup is described as follows. \vspace{6pt} \textit{Sampled Goal Poses}: The start pose is fixed at $(x_0, y_0, \theta_0)=(0,0,0)$. Then, six different goal positions are chosen, located at a distance of $R=100$~m from the origin. For each goal position, six different heading angles $\theta_f \in \{ \frac{m\pi}{3}, m = 0,\ldots 5\}$ are considered, which leads to a total number of $36$ start and goal pose pairs. Due to noise (discussed later), $10$ Monte Carlo simulation runs were conducted for each goal pose, thus leading to a total number of $360$ runs. \vspace{6pt} \textit{Changing Environment}: To validate the effectiveness of the proposed method, the current with speed $v_w$ is set to change its direction with a random heading angle $\theta_w \in \{\frac{m\pi}{6}, m = 0,\ldots, 11\}$. This change happens after a random time interval $T_0 \in \{30, 45, 60\}$~s. Specifically, for each simulation run, the current heading $\theta_w$ and its time period $T_0$ are randomly generated from their corresponding sets. Then, after $T_0$, the updated current heading $\theta_w$ and its time period $T_0$ are randomly chosen again and the process is repeated. Thus, the vehicle has to replan its path based on the updated $\theta_w$ every time the current changes. Since the measurements of $\theta_w$ include noise (discussed later), the vehicle estimates its value using a Maximum Likelihood Estimator (MLE)~\cite{BLK04}, which utilizes measurements of $\theta_w$ within a period of $T_1=12$~s.
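For small Gaussian heading noise, the MLE of a constant $\theta_w$ over the window reduces approximately to the circular mean of the buffered measurements; the following sketch illustrates this under that assumption (it is not necessarily the exact estimator of~\cite{BLK04}):

\begin{verbatim}
import numpy as np

def estimate_current_heading(theta_meas):
    # theta_meas: noisy samples of theta_w collected over the
    # window T1; the circular mean avoids wrap-around near +-pi.
    return np.arctan2(np.mean(np.sin(theta_meas)),
                      np.mean(np.cos(theta_meas)))
\end{verbatim}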
\vspace{6pt} \textit{Termination Conditions}: The vehicle is assumed to successfully reach the goal pose if: (1) it arrives within a precision circle of radius $1.5$~m centered at the goal, and (2) its heading falls within $\theta_f \pm 5^{\circ}$. However, if the vehicle cannot converge to the goal pose in $T_{\max}=1000$~s, then the solution is considered non-convergent. \vspace{6pt} \textit{Performance Metric:} The performance of the proposed $4\pi$-arc solution is evaluated in comparison to the Dubins solution based on the percentage of savings in the total travel time: \begin{equation} \mathrm{Savings}\,(\%)= \frac{T_{Dubins} - T_{4\pi}}{T_{Dubins}}\cdot 100, \end{equation} where $T_{Dubins}$ and $T_{4\pi}$ denote the total time cost using the Dubins solution and the proposed $4\pi$-arc solution, respectively. \vspace{6pt} \textit{Applications:} Since sensing capabilities can vary significantly for different vehicles and in different operation environments, we evaluated the performance for two different applications: 1) naval (unmanned underwater vehicles (UUVs)) and 2) aerial (unmanned aerial vehicles (UAVs)). \vspace{6pt} \subsubsection{Naval Application} Consider a typical UUV that travels at a speed of $v = 2.5$~m/s. The ocean environment is assumed to have currents that move at a speed of $v_w = 2$~m/s with an initial heading of $\theta_w = 0$. Regarding the sensing systems, the ocean current speed and heading are usually measured using an Acoustic Doppler Current Profiler (ADCP)~\cite{ADCP} with a sampling rate of $1$~Hz. On the other hand, the location and heading of the UUV can be measured using a Long Baseline (LBL) localization system~\cite{PSSL14} and a compass, respectively. The sensor uncertainties are modeled using Additive White Gaussian Noise (AWGN) with parameters listed in Table~\ref{table:noise}. \begin{table}[b!] \centering \caption{Parameters used in the Monte Carlo simulations}\label{table:noise} \begin{tabular}{lll} \toprule {Application} & {Naval} & {Aerial}\\ \midrule \eqparbox{Col}{Vehicle speed} & \eqparbox{Col}{$v = 2.5$~m/s} & \eqparbox{Col}{$v = 10$~m/s}\\ \addlinespace[0.1cm] \eqparbox{Col}{External current} & \eqparbox{Col}{Ocean currents \\ $v_w = 2$~m/s} & \eqparbox{Col}{Wind \\ $v_w = 8$~m/s}\\ \addlinespace[0.2cm] \eqparbox{Col}{Noise in vehicle\\state measurement} & \eqparbox{Col}{$\sigma_{GPS} = 0.3$~m \\ $\sigma_{compass} = 0.5^{\circ}$} & \eqparbox{Col}{$\sigma_{GPS} = 0.01$~m\\ $\sigma_{compass} = 0.5^{\circ}$}\\ \addlinespace[0.2cm] \eqparbox{Col}{Noise in current state\\ measurement} & \eqparbox{Col}{$\sigma_{v_w} = 0.75\%\cdot{v_w}$ \\ $\sigma_{\theta_w} = 0.67^{\circ}$} & \eqparbox{Col}{$\sigma_{v_w} = 1.25\%\cdot{v_w}$ \\ $\sigma_{\theta_w} = 4^{\circ}$}\\ \addlinespace[0.1cm] \bottomrule \end{tabular} \end{table} Fig.~\ref{fig:naval_results} shows the distribution of percentage savings in time for the $4\pi$-arc path solutions in comparison to the corresponding Dubins solutions over all Monte Carlo runs. While the $4\pi$-arc path solutions always converged, Dubins solutions could not converge within the precision circle in $T_{\max}$ time for $6.11\%$ of the runs. As explained in Section~\ref{sec:res_realtime}, this happens mainly due to their significantly high computation times during replanning, which forces repeated replanning due to errors caused by the vehicle drift. For the remaining runs where both methods converged, the proposed $4\pi$-arc path solutions achieved an average of $57.62\%$ time savings, thus showing their superiority over Dubins solutions in a dynamic naval environment.
This implies that the $4\pi$-arc path solutions can guide the UUV to successfully reach the goal pose in significantly less time as compared to the Dubins solutions. Furthermore, we note that only a very small fraction of all test cases result in negative time savings, which presumably occurs when the vehicle drift carries the vehicle directly toward the goal. \vspace{6pt} \subsubsection{Aerial Application} Consider a typical UAV that travels at a speed of $v = 10$~m/s. The environment is assumed to have wind that moves at a speed of $v_w = 8$~m/s with an initial heading $\theta_w = 0$. As for the sensing systems, the wind profile can be measured using the FT 205 Acoustic Resonance Wind Sensor~\cite{FT205}, which has a sampling rate of $10$~Hz. For localization of the UAV, a Real-Time Kinematic (RTK) GPS is used~\cite{RTK}. The sensor uncertainties are modeled using AWGN, with parameters listed in Table~\ref{table:noise}. Fig.~\ref{fig:aerial_results} shows the distribution of percentage savings in time for the $4\pi$-arc path solutions in comparison to the corresponding Dubins solutions over all Monte Carlo runs. While the $4\pi$-arc path solutions always converged, Dubins solutions could not converge within the precision circle in $T_{\max}$ time for $56.11\%$ of the runs. This number is higher than in the naval application due to the much higher uncertainties in current state measurements using wind sensors. The significantly increased number of non-converging runs shows the poor performance of the Dubins approach in severe environments, thus highlighting the benefits of the $4\pi$-arc path solutions. For the remaining runs where both methods converged, the proposed $4\pi$-arc path solutions achieved an average of $68.47\%$ time savings, thus showing their superiority over the Dubins solutions in a dynamic aerial environment. Furthermore, we note that only a very small fraction of all test runs result in negative time savings, while a significant majority have faster $4\pi$-arc path solutions. \vspace{0pt} \section{{Summary and Future Work}}\label{sec:conclusion} \vspace{0pt} \subsection{Summary} The paper presents a rapid (real-time) solution to the minimum-time path planning problem for Dubins vehicles in the presence of environmental currents. The standard Dubins solution is obtained by solving for six path types ($LSL, RSR, LSR, RSL, LRL, RLR$); however, due to the presence of currents, four of these path types require solving root-finding problems involving transcendental functions. Thus, the existing Dubins solution results in high computation times which are not suitable for real-time applications. Therefore, to obtain a real-time solution, this paper proposed a novel approach which utilizes only the $LSL$ and $RSR$ path types from the Dubins solution set, since these have direct analytical solutions; however, with the standard $2\pi$ arcs, they lack full reachability. In this regard, the paper established the following properties for $LSL$ and $RSR$ paths: \begin{enumerate} \item Full reachability is guaranteed by extending their arc ranges from $2\pi$ to $4\pi$; \item $4\pi$-arc paths yield superior or same performance in terms of time costs as compared to the corresponding $2\pi$-arc paths; \item $4\pi$-arc paths require the same computational load to obtain a solution as needed for $2\pi$-arc paths. \end{enumerate} Based on the above, it is established that for real-time applications, the planner should consider the $4\pi$-arc $LSL$ and $RSR$ path solutions, while $2\pi$-arc solutions are not needed.
Furthermore, the performance of the proposed approach was evaluated against the Dubins solution with all six path types. For this purpose, two applications were considered: i) naval and ii) aerial, where extensive Monte Carlo simulations were conducted for statistical analysis under stochastic uncertainties in dynamically changing environments. The results showed that the $4\pi$-arc solutions converged to the goal pose in all runs, as opposed to the Dubins solutions, which failed to converge in a significant portion of runs. For the cases where Dubins solutions converged, the $4\pi$-arc solutions yielded superior performance and achieved significantly lower time costs to reach the goal poses with high precision. \subsection{Future Work} Future research will consider the following challenging problems for Dubins vehicles: 1) minimum-time path planning under spatio-temporally varying currents, 2) complete coverage in unknown environments~\cite{SG18, SG19}, and 3) the Dubins orienteering problem in dynamic environments~\cite{PFVS17}. \vspace{0pt}
\section{Introduction} All graphs considered in this paper are undirected, connected and simple. Let $G$ be a graph with vertex set $V(G)=\{v_1,v_2,\ldots,v_n\}$ and edge set $E(G)$. The \emph{distance} between $v_i$ and $v_j$, denoted by $d_G(v_i,\,v_j)$ (or $d_{ij}$), is the length of a shortest path from $v_i$ to $v_j$. The \emph{distance matrix} of $G$, denoted by $D(G)$, is the $n\times n$ real symmetric matrix whose $(i,\,j)$-entry is $d_G(v_i,\,v_j)$ $(\mbox{or }d_{ij})$, so we can order the eigenvalues of $D(G)$ as $$\lambda_1(D(G))\geq \lambda_2(D(G))\geq \cdots\geq \lambda_n(D(G)).$$ By the Perron-Frobenius theorem, $\lambda_1(D(G))$ is always positive (unless $G$ is trivial) and $\lambda_1(D(G)) \ge |\lambda_{i}(D(G))|$ for $i=2,3,\ldots, n$, and we call $\lambda_1(D(G))$ the \emph{distance spectral radius}. The study of distance eigenvalues can be traced back to 1971, when Graham and Pollack \cite{Graham} described a relationship between the number of negative distance eigenvalues and the addressing problem in data communication systems. In the same paper, they proved the very interesting and insightful result that the determinant of the distance matrix of a tree with order $n$ is $(-1)^{n-1}(n-1)2^{n-2}$, which is independent of the structure of the tree. Since then, the study of distance eigenvalues of a graph has become a research subject of enormous interest, and this topic has received growing attention over recent years; for some of the latest results, see \cite{AH1,Lin,lhq,lzz,xj}. For more results on the distance matrix and its spectral properties, we refer the reader to the excellent survey \cite{AH}. A \emph{matching} $M$ of a graph $G$ is a set of pairwise nonadjacent edges. The maximum number of edges of a matching in $G$ is called the \emph{matching number} of $G$, denoted by $\alpha(G)$. A vertex incident with an edge in $M$ is said to be \emph{saturated} by $M$. A \emph{perfect matching} is a matching that saturates all vertices of $G$. Obviously, a graph with a perfect matching has an even number of vertices and $\alpha(G)=\frac{|V(G)|}{2}$. In the 1990s, studies of the connections between eigenvalues and the matching number of a graph were mainly concerned with trees. Chang \cite{an} obtained an upper bound and a tight lower bound for the second largest eigenvalue of an $n$-vertex tree with a given matching number. Later, Hou and Li \cite{hou} gave some upper bounds for the spectral radius of a tree in terms of its order and matching number. Brouwer and Haemers \cite{Brouwer} extended this problem to general graphs and showed that if $\mu_1(G) \le 2\mu_{n-1}(G)$, then $G$ contains a perfect matching, where $\mu_1(G)$ and $\mu_{n-1}(G)$ are the largest and second smallest Laplacian eigenvalues of $G$, respectively. Feng, Yu and Zhang \cite{feng} investigated the maximal spectral radius of graphs with a given matching number and order. In the past decade, some interesting, high-quality results on the matching number of a graph and its distance spectral radius have been obtained. Ili\'{c} \cite{llic} characterized $n$-vertex trees with a given matching number which minimize the distance spectral radius. Liu \cite{liu} characterized the graphs with minimum distance spectral radius among connected graphs on $n$ vertices with fixed matching number. Zhang \cite{zhang} and Lu and Luo \cite{luluo} characterized unicyclic graphs with a perfect matching and with a given matching number which minimize the distance spectral radius, respectively.
Very recently, O \cite{suil} proved a lower bound on the spectral radius of an $n$-vertex graph that guarantees the existence of a perfect matching. Along this line, we consider this problem with respect to the distance spectral radius in this paper. We denote by $ G\cup H $ the \emph{disjoint union} of two graphs $G$ and $ H $, which is the graph with $V(G\cup H)=V(G)\cup V(H)$ and $E(G\cup H)=E(G)\cup E(H)$; in particular, if $G\cong H$, we write $2G= G\cup H $ for short. Denote by $G\vee H $ the \emph{join} of two graphs $ G $ and $ H $, which is the graph such that $ V(G \vee H) = V(G) \cup V(H) $ and $ E(G \vee H) = E(G)\cup E(H) \cup \{vu: u \in V(G)\ \mbox{and}\ v \in V(H)\}. $ The graph $S_{n,k}$ is obtained from a copy of $K_k$ by adding $n-k$ vertices, each of which has neighborhood $V(K_k)$, i.e., $S_{n,k}\cong K_k\vee(n-k)K_1$. In the following, we first give a distance spectral radius condition which guarantees that a graph has a perfect matching. \begin{figure}[t] \setlength{\unitlength}{0.9pt} \begin{center} \begin{picture}(158.1,83.4) \qbezier(81.2,42.8)(81.2,26.0)(69.3,14.1)\qbezier(69.3,14.1)(57.4,2.2)(40.6,2.2)\qbezier(40.6,2.2)(23.8,2.2)(11.9,14.1)\qbezier(11.9,14.1)(0.0,26.0)(0.0,42.8)\qbezier(0.0,42.8)(0.0,59.6)(11.9,71.5)\qbezier(11.9,71.5)(23.8,83.4)(40.6,83.4)\qbezier(40.6,83.4)(57.4,83.4)(69.3,71.5)\qbezier(69.3,71.5)(81.2,59.6)(81.2,42.8) \put(129.1,45.0){\circle*{5}} \qbezier(56.6,71.8)(92.8,58.4)(129.1,45.0) \qbezier(129.1,45.0)(93.9,30.5)(58.7,16.0) \put(158.1,74.0){\circle*{5}} \qbezier(129.1,45.0)(143.6,59.5)(158.1,74.0) \put(158.1,16.0){\circle*{5}} \qbezier(129.1,45.0)(143.6,30.5)(158.1,16.0) \put(27.6,49.3){\makebox(0,0)[tl]{$K_{n-3}$}} \put(79.8,0.0){\makebox(0,0)[tl]{$G^*$}} \put(90.5,50.7){\circle*{2}} \put(90.5,37.1){\circle*{2}} \put(90.5,44.2){\circle*{2}} \end{picture} \end{center} \caption{The extremal graph $G^*$ of Theorem \ref{pm}.} \label{gstar} \end{figure} \begin{thm}\label{pm} Let $G$ be a connected graph of order $n$, where $n\ge 4$ is an even integer. \begin{enumerate}[(i)] \item For $ n \le 10$, if ${\lambda }_{1} (D\left(G\right))\le {\lambda }_{1} (D(S_{n,{\frac{n}{2}}-1}))$, then $G$ contains a perfect matching unless $G\cong S_{n,{\frac{n}{2}-1}}$. \item For $n\ge 12$, if ${\lambda }_{1} (D\left(G\right))\le {\lambda }_{1} (D(G^*))$, then $G$ contains a perfect matching unless $G\cong G^*$, where $G^*\cong K_1\vee (K_{n-3}\cup2K_1)$ (see Fig. \ref{gstar}). \end{enumerate} \end{thm} Consider a connected bipartite graph with two parts of sizes $ n_1 $ and $ n_2 $; we say that it is balanced if $ n_1 = n_2 $. Note that if a connected bipartite graph is not balanced, then it has no perfect matching. Let $B_{n-1 ,n-2}$ be the graph obtained from $K_{n,n-2}$ by attaching two pendent vertices to a vertex in the $n$-vertex part (see Fig. \ref{bstar}). Then we have the following result.
\begin{figure}[t] \setlength{\unitlength}{1.2pt} \begin{center} \begin{picture}(150.1,92.1) \put(77.6,22.5){\oval(87.0,29.7)}\put(136.3,76.9){\circle*{4}} \put(101.5,76.9){\circle*{4}} \put(65.3,76.9){\circle*{4}} \put(47.9,21.8){\circle*{4}} \put(101.5,21.8){\circle*{4}} \put(65.3,21.8){\circle*{4}} \put(76.1,21.8){\circle*{2}} \put(91.4,21.8){\circle*{2}} \put(83.4,21.8){\circle*{2}} \put(75.4,76.9){\circle*{2}} \put(92.1,76.9){\circle*{2}} \put(83.4,76.9){\circle*{2}} \qbezier(136.3,76.9)(92.1,49.3)(47.9,21.8) \qbezier(65.3,76.9)(65.3,49.3)(65.3,21.8) \qbezier(101.5,76.9)(101.5,49.3)(101.5,21.8) \qbezier(136.3,76.9)(100.8,49.3)(65.3,21.8) \qbezier(136.3,76.9)(118.9,49.3)(101.5,21.8) \qbezier(65.3,76.9)(56.6,49.3)(47.9,21.8) \qbezier(65.3,76.9)(83.4,49.3)(101.5,21.8) \qbezier(101.5,76.9)(74.7,49.3)(47.9,21.8) \qbezier(101.5,76.9)(83.4,49.3)(65.3,21.8) \put(150.1,22.5){\circle*{4}} \qbezier(136.3,76.9)(143.2,49.7)(150.1,22.5) \put(136.3,22.5){\circle*{4}} \qbezier(136.3,76.9)(136.3,49.7)(136.3,22.5) \put(-40,85.0){\makebox(0,0)[tl]{$(n-1)$ vertices}} \put(-40,27.0){\makebox(0,0)[tl]{$(n-2)$ vertices}} \put(70.0,-5.0){\makebox(0,0)[tl]{$B_{n-1,n-2}$}} \put(79.0,76.9){\oval(84.1,30.5)}\put(47.9,76.9){\circle*{4}} \qbezier(47.9,76.9)(47.9,49.3)(47.9,21.8) \qbezier(47.9,76.9)(56.6,49.3)(65.3,21.8) \qbezier(47.9,76.9)(74.7,49.3)(101.5,21.8) \end{picture} \end{center} \caption{The extremal graph $B_{n-1,n-2}$ of Theorem \ref{bppm}.} \label{bstar} \end{figure} \begin{thm}\label{bppm} Let $G$ be a connected balanced bipartite graph of order $ 2n $, where $ n\ge 3 $ is an integer. If $\lambda_{1}(D(G))\le \lambda_{1}(D(B_{n-1,n-2})) $, then $G$ has a perfect matching unless $G\cong B_{n-1,n-2}$ (see Fig. \ref{bstar}). \end{thm} \section{Proofs} For $S\subseteq V(G) $, the subgraph induced by $V(G)-S$ is denoted by $G-S$. A component is called an odd (even) component if its number of vertices is odd (even), and we let $o(G)$ denote the number of odd components of $ G $.
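In practice, $o(G-S)$ is straightforward to compute; the following small sketch (an illustration only, not part of the proofs; it assumes the Python library networkx) counts odd components and tests Tutte's condition on a simple example:

\begin{verbatim}
import networkx as nx

def odd_components(G, S):
    # o(G - S): number of components of G - S with odd order.
    H = G.copy()
    H.remove_nodes_from(S)
    return sum(1 for c in nx.connected_components(H) if len(c) % 2)

# The star K_{1,3} has no perfect matching: deleting the center
# leaves three odd components, so o(G - S) > |S| for S = {center}.
G = nx.star_graph(3)            # center 0, leaves 1, 2, 3
print(odd_components(G, {0}))   # prints 3, while |S| = 1
\end{verbatim}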
\begin{figure}[t] \setlength{\unitlength}{1pt} \begin{center} \begin{picture}(258.1,115.3) \qbezier(175.6,56.6)(175.6,46.9)(168.8,40.0)\qbezier(168.8,40.0)(161.9,33.2)(152.3,33.2)\qbezier(152.3,33.2)(142.6,33.2)(135.7,40.0)\qbezier(135.7,40.0)(128.9,46.9)(128.9,56.5)\qbezier(128.9,56.5)(128.9,66.2)(135.7,73.1)\qbezier(135.7,73.1)(142.6,79.9)(152.2,79.9)\qbezier(152.2,79.9)(161.9,79.9)(168.8,73.1)\qbezier(168.8,73.1)(175.6,66.2)(175.6,56.6) \qbezier(114.6,64.5)(114.6,58.8)(108.5,54.8)\qbezier(108.5,54.8)(102.4,50.8)(93.9,50.8)\qbezier(93.9,50.8)(85.3,50.8)(79.3,54.8)\qbezier(79.3,54.8)(73.2,58.8)(73.2,64.5)\qbezier(73.2,64.5)(73.2,70.2)(79.3,74.3)\qbezier(79.3,74.3)(85.3,78.3)(93.9,78.3)\qbezier(93.9,78.3)(102.4,78.3)(108.5,74.3)\qbezier(108.5,74.3)(114.6,70.2)(114.6,64.5) \qbezier(230.6,101.5)(230.6,95.8)(224.3,91.8)\qbezier(224.3,91.8)(218.0,87.7)(209.2,87.7)\qbezier(209.2,87.7)(200.3,87.7)(194.0,91.8)\qbezier(194.0,91.8)(187.8,95.8)(187.8,101.5)\qbezier(187.8,101.5)(187.8,107.2)(194.0,111.2)\qbezier(194.0,111.2)(200.3,115.3)(209.2,115.3)\qbezier(209.2,115.3)(218.0,115.3)(224.3,111.2)\qbezier(224.3,111.2)(230.6,107.2)(230.6,101.5) \qbezier(230.6,65.6)(230.6,59.8)(224.3,55.6)\qbezier(224.3,55.6)(218.0,51.5)(209.2,51.5)\qbezier(209.2,51.5)(200.3,51.5)(194.0,55.6)\qbezier(194.0,55.6)(187.8,59.8)(187.8,65.6)\qbezier(187.8,65.6)(187.8,71.5)(194.0,75.6)\qbezier(194.0,75.6)(200.3,79.8)(209.2,79.8)\qbezier(209.2,79.8)(218.0,79.8)(224.3,75.6)\qbezier(224.3,75.6)(230.6,71.5)(230.6,65.6) \qbezier(114.6,100.4)(114.6,94.6)(108.5,90.4)\qbezier(108.5,90.4)(102.4,86.3)(93.9,86.3)\qbezier(93.9,86.3)(85.3,86.3)(79.3,90.4)\qbezier(79.3,90.4)(73.2,94.6)(73.2,100.4)\qbezier(73.2,100.4)(73.2,106.3)(79.3,110.4)\qbezier(79.3,110.4)(85.3,114.6)(93.9,114.6)\qbezier(93.9,114.6)(102.4,114.6)(108.5,110.4)\qbezier(108.5,110.4)(114.6,106.3)(114.6,100.4) \qbezier(114.6,14.1)(114.6,8.3)(108.5,4.1)\qbezier(108.5,4.1)(102.4,0.0)(93.9,0.0)\qbezier(93.9,0.0)(85.3,0.0)(79.3,4.1)\qbezier(79.3,4.1)(73.2,8.3)(73.2,14.1)\qbezier(73.2,14.1)(73.2,20.0)(79.3,24.1)\qbezier(79.3,24.1)(85.3,28.3)(93.9,28.3)\qbezier(93.9,28.3)(102.4,28.3)(108.5,24.1)\qbezier(108.5,24.1)(114.6,20.0)(114.6,14.1) \qbezier(232.0,14.1)(232.0,8.3)(225.7,4.1)\qbezier(225.7,4.1)(219.5,0.0)(210.6,0.0)\qbezier(210.6,0.0)(201.8,0.0)(195.5,4.1)\qbezier(195.5,4.1)(189.2,8.3)(189.2,14.1)\qbezier(189.2,14.1)(189.2,20.0)(195.5,24.1)\qbezier(195.5,24.1)(201.8,28.3)(210.6,28.3)\qbezier(210.6,28.3)(219.5,28.3)(225.7,24.1)\qbezier(225.7,24.1)(232.0,20.0)(232.0,14.1) \qbezier(107.3,99.3)(122.9,83.7)(138.5,68.2) \qbezier(106.6,64.5)(120.7,61.6)(134.9,58.7) \qbezier(104.4,14.5)(121.1,31.2)(137.8,47.9) \qbezier(197.2,101.5)(180.9,85.2)(164.6,68.9) \qbezier(195.8,64.5)(182.3,61.3)(168.9,58.0) \qbezier(197.9,13.8)(181.3,30.5)(164.6,47.1) \put(95.0,45.7){\circle*{2}} \put(95.0,32.6){\circle*{2}} \put(95.0,39.2){\circle*{2}} \put(210.3,46.4){\circle*{2}} \put(210.3,34.1){\circle*{2}} \put(210.3,40.6){\circle*{2}} \put(87.7,105.1){\makebox(0,0)[tl]{$G_1$}} \put(87.7,68.9){\makebox(0,0)[tl]{$G_2$}} \put(87.7,18.9){\makebox(0,0)[tl]{$G_q$}} \put(202.3,106.6){\makebox(0,0)[tl]{$R_1$}} \put(202.3,69.6){\makebox(0,0)[tl]{$R_2$}} \put(205.2,18.9){\makebox(0,0)[tl]{$R_k$}} \put(149.2,61.1){\makebox(0,0)[tl]{$S$}} \put(147.2,-7.7){\makebox(0,0)[tl]{$G$}} \put(27.6,63.8){\makebox(0,0)[tl]{odd}} \put(0.0,49.3){\makebox(0,0)[tl]{components}} \put(245.1,66.7){\makebox(0,0)[tl]{even}} \put(229.8,54.4){\makebox(0,0)[tl]{components}} \end{picture} \end{center} \caption{The graph in Tutte's 
theorem.} \label{ttgraph} \end{figure} \begin{lemma}[Tutte's theorem \cite{Tutte}]\label{tt} A graph $ G $ has a perfect matching (see Fig.\ref{ttgraph}) if and only if\par \centering $ o(G- S) \le |S| $ for each $ S \subseteq V(G) $. \end{lemma} Let $W(G)=\sum_{i<j}d_{ij}$ be the \emph{Wiener index} of a connected graph $G$ with order $n$. Note that $\lambda_1(D(G))=\max \limits_{\textbf{x}\in {\mathbb{R}}^{n}}\frac{\textbf{x}^{t}D(G)\textbf{x}}{\textbf{x}^{t}\textbf{x}}$. Then we have $$\lambda_1(D(G))=\max \limits_{\textbf{x}\in {\mathbb{R}}^{n}}\frac{\textbf{x}^{t}D(G)\textbf{x}}{\textbf{x}^{t}\textbf{x}}\geq \frac{\mathbf{1}^{t}D\mathbf{1}}{\mathbf{1}^{t}\mathbf{1}}= \frac{2W(G)}{n},$$ where $\mathbf{1}=(1,1,\ldots,1)^{t}$. Now we give the proof of Theorem \ref{pm}. {\flushleft \textbf{Proof of Theorem \ref{pm}.}} By way of contradiction, assume that $G$ attains the minimum distance spectral radius but has no perfect matching. By Lemma \ref{tt}, there exists $ S \subset V (G) $ such that $ q -|S| > 0 $, where $ o(G-S) = q $ and all components of $ G-S $ are odd; otherwise, we can move one vertex from each even component into the set $S$, whereby the number of odd components and the number of vertices in $S$ increase by the same amount, so that $q$ remains larger than $|S|$ and all components of $G-S $ are odd. Note that $G$ is connected, which implies that $S$ is not empty. Since $ n $ is even, $|S|$ and $q$ have the same parity, so $q-|S| \ge 2$. Let $ G'$ be the graph obtained from $ G $ by joining $ S $ and $ G-S $ and by adding edges in $ S $ and in all components of $ G-S $ so that all components of $ G-S $ and $ G[S] $ are cliques. It is clear that ${G}^{\prime }\cong {K}_{s}\vee \left({K}_{n_1}\cup K_{n_2}\cup \cdots \cup {K}_{n_q}\right)$, where $ |S|=s$, ${n}_{1}\ge {n}_{2}\ge \dots \ge {n}_{q}\ge 1$ and ${n}_{1}+{n}_{2}+ \dots + {n}_{q}= n-s$. Note that $G'\cong G$; otherwise, by the Perron--Frobenius theorem, we would have ${\lambda }_{1}\left(D\left(G'\right)\right)<{\lambda }_{1}\left(D\left(G\right)\right)$, a contradiction. Setting ${n}_{2}={n}_{3}= \dots = {n}_{q}= 1$ yields the graph ${G}^{\prime \prime }\cong {K}_{s}\vee \left({K}_{n-s-\left(q-1\right)}\cup (q-1)K_1\right)$. \begin{claim} ${\lambda }_{1}\left(D\left(G'\right)\right)\ge {\lambda }_{1}\left(D\left(G''\right)\right)$ with equality if and only if $G' \cong G''$. \end{claim} If $n_1=1$, then $n_1={n}_{2}={n}_{3}= \dots = {n}_{q}= 1$ and $G' \cong G''$. Now we consider $n_1\ge 3$. Denote the vertex set of $G''$ by $V(G'')=V(K_{s})\cup V(K_{n-s-(q-1)}) \cup V((q-1)K_{1})$. Suppose that $X$ is the Perron vector of $D(G'')$, and let $x(v)$ denote the entry of $X$ corresponding to the vertex $v\in V(G'')$. By symmetry, it is easy to see that all vertices of $V(K_{s})$ (resp. $V(K_{n-s-(q-1)})$ and $V((q-1)K_{1})$) have the same entries in $X$. Thus we can suppose $x(u)=a$ for any $u\in V((q-1)K_{1})$, $x(v)=b$ for any $v\in V(K_{n-s-(q-1)})$ and $x(w)=c$ for any $w\in V(K_{s})$. Then \[ \left\{ \begin{array}{l} \lambda_1(D(G''))a=sc+2\left(n-s-q+1\right)b+2\left(q-2\right)a,\\[3mm] \lambda_1(D(G''))b=sc+\left(n-s-q\right)b+2\left(q-1\right)a,\\[3mm] \lambda_1(D(G''))c=(s-1)c+\left(n-s-q+1\right)b+\left(q-1\right)a. \end{array} \right.
\] Thus, $$a=\left[1+\frac{n-s-q}{\lambda_1(D(G''))+2}\right]b.$$ It follows that \begin{eqnarray*} &&{\lambda }_{1}\left(D\left(G'\right)\right)- {\lambda }_{1}\left(D\left(G''\right)\right)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\[2mm] &\ge& {X}^{t}\left(D\left({G}^{\prime }\right)-D\left({G}^{\prime \prime }\right)\right)X\\ &=&n_1\sum _{k=2}^{q}\left({n}_{k}-1\right){b}^{2} +\left({n}_{2}-1\right)\left[\left(n-s-{n}_{2}-\left(q-2\right)\right){b}^{2}-2ab\right]\\ &+&\left({n}_{3}-1\right)\left[\left(n-s-{n}_{3}-\left(q-2\right)\right){b}^{2}-2ab\right]+\left({n}_{4}-1\right)\left[\left(n-s-{n}_{4}-\left(q-2\right)\right){b}^{2}-2ab\right]\\ &+&\cdots +\left({n}_{q}-1\right)\left[\left(n-s-{n}_{q}-\left(q-2\right)\right){b}^{2}-2ab\right]. \end{eqnarray*} Since $ n_1\ge 3 $ and $ n_2\ge n_3 \ge \cdots\ge n_q\ge 1 $, in order to show ${\lambda }_{1}\left(D\left(G'\right)\right)- {\lambda }_{1}\left(D\left(G''\right)\right)>0$, we only need to prove $\left(n-s-{n}_{2}-\left(q-2\right)\right){b}^{2}-2ab>0$. Note that $K_{n-q+1}$ is a subgraph of $G''$. Then ${\lambda }_{1}\left(D\left(G''\right)\right) >{\lambda }_{1}\left(D\left(K_{n-q+1}\right)\right)=n-q,$ which implies \begin{eqnarray*} &&\left(n-s-{n}_{2}-\left(q-2\right)\right){b}^{2}-2ab~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ &=&b^2\left(n-s-n_2-q-\frac{2n-2s-2q}{\lambda_1(D(G''))+2}\right)\\[3mm] &>&b^2\left(n-s-n_2-q-\frac{2n-2s-2q}{n-q+2}\right)\\[3mm] &=&b^2\left(n-s-n_2-q-2+\frac{2s+4}{n-q+2}\right)\\[3mm] &>&{b}^{2}\left(n-s-{n}_{2}-q-2\right)\\[3mm] &=&{b}^{2}(n_1+n_2+\cdots+n_{q-1}+n_q-n_2-q-2)\\[3mm] &=&{b}^{2}\left(\sum _{i=1, i\ne 2}^{q}\left({n}_{i}-1\right)-1\right)\\&>& 0. \end{eqnarray*} So Claim 1 holds. Hence $G\cong G''$, for otherwise we reach a contradiction. We know that $q-|S|\ge 2$. Let $\widetilde{G}\cong {K}_{s}\vee \left({K}_{n-2s-1}\cup \left(s+1\right){K}_{1}\right)$. We compare the distance spectral radii of $G''$ and $\widetilde{G}$ in the following. \begin{claim} ${\lambda }_{1}\left(D\left(G''\right)\right)\ge {\lambda }_{1}(D(\widetilde{G}))$ with equality if and only if $G'' \cong \widetilde{G}$. \end{claim} Recall ${G}^{\prime \prime }\cong {K}_{s}\vee \left({K}_{n-s-\left(q-1\right)}\cup (q-1)K_1\right)$. If $q=s+2$, then $\widetilde{G} \cong G''$. Now we consider $q\ge s+4$. Denote the vertex set of $\widetilde{G}$ by $V(\widetilde{G})=V(K_{s})\cup V(K_{n-2s-1}) \cup V((s+1)K_{1})$. Suppose that $Y$ is the Perron vector of $D(\widetilde{G})$, and let $Y(v)$ denote the entry of $Y$ corresponding to the vertex $v\in V(\widetilde{G})$. By symmetry, it is easy to see that all vertices of $V(K_{s})$ (resp. $V(K_{n-2s-1})$ and $V((s+1)K_{1})$) have the same entries in $Y$. Thus we can suppose $Y(u)=y_1$ for any $u\in V((s+1)K_{1})$, $Y(v)=y_2$ for any $v\in V(K_{n-2s-1})$ and $Y(w)=y_3$ for any $w\in V(K_{s})$. Let $n_1'=n-s-(q-1)\ge 1$, so that ${G}^{\prime \prime }\cong {K}_{s}\vee \left({K}_{n_1'}\cup (q-1)K_1\right)$. Then \begin{eqnarray*} &&{\lambda }_{1}\left(D\left(G''\right)\right)- {\lambda }_{1}(D(\widetilde{G}))~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ &\ge& {Y}^{t}\left(D\left({G''}\right)-D(\widetilde{G})\right)Y\\ &=&n_1'(q-s-2)y_2^2+(n_1'+q-s-3)(q-s-2)y_2^2\\ &=&y_2^2\left[{q}^{2}+\left(2{n}_{1}^{\prime }-2s-5\right)q+{s}^{2}+5s-2{n}_{1}^{\prime }s-4{n}_{1}^{\prime }+6\right].
\end{eqnarray*} Since $$\frac{-\left(2{n}_{1}^{\prime }-2s-5\right)}{2}=-{n}_{1}^{\prime }+s+\frac{5}{2}<s+4,$$ we obtain \begin{eqnarray*} &&y_2^2\left[{q}^{2}+\left(2{n}_{1}^{\prime }-2s-5\right)q+{s}^{2}+5s-2{n}_{1}^{\prime }s-4{n}_{1}^{\prime }+6\right]~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ &\ge &y_2^2\left[{(s+4)}^{2}+\left(2{n}_{1}^{\prime }-2s-5\right)(s+4)+{s}^{2}+5s-2{n}_{1}^{\prime }s-4{n}_{1}^{\prime }+6\right]\\ &=&y_2^2(4{n}_{1}^{\prime }+2)\\&>& 0. \end{eqnarray*} So Claim 2 holds. Hence $G\cong \widetilde{G}$, for otherwise we reach a contradiction. Let $ G^* \cong K_1\vee (K_{n-3}\cup 2K_1)$. Finally, we show that $G^*$ attains the minimum distance spectral radius and has no perfect matching in most situations. \begin{claim} If $ n\ge 2s+4 $, then ${\lambda }_{1}(D(\widetilde{G}))\ge {\lambda }_{1}\left(D\left(G^*\right)\right)$ with equality if and only if $\widetilde{G} \cong G^*$. \end{claim} Recall $\widetilde{G}\cong {K}_{s}\vee \left({K}_{n-2s-1}\cup \left(s+1\right){K}_{1}\right)$. If $s=1$, then $G^*\cong \widetilde{G}$. Now we suppose $s\ge 2$ so that $n\ge 2s+4\ge 8$. Then the quotient matrix of the partition $\{V(K_{n-2s-1}),V(K_s),V((s+1)K_1)\}$ of $\widetilde{G}$ is $$\left(\begin{array}{ccc} n-2s-2 & s & 2(s+1)\\ \ &\ &\ \\ n-2s-1 & s-1 & s+1\\ \ &\ &\ \\ 2(n-2s-1) & s & 2s \end{array}\right),$$ and the characteristic polynomial of this matrix is \begin{eqnarray*} f(x)={x}^{3}-\left(s+n-3\right){x}^{2}-\left(2sn-5{s}^{2}+5n-6s-6\right)x+n{s}^{2}-2{s}^{3}-sn+2{s}^{2}-4n+6s+4.&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{eqnarray*} We know that $ {\lambda }_{1}(D(\widetilde{G}))$ is the largest root of $ f(x)=0 $. Since $ {\lambda }_{1}\left(D\left(G^*\right)\right)= \theta(n) $ (written simply as $\theta$) is the largest root of the equation $q(x)={x}^{3}-\left(n-2\right){x}^{2}-\left(7n-17\right)x-4n+10=0$, we have \begin{eqnarray*} h(\theta)&=&f\left(\theta \right)-q\left(\theta \right)\\ &=&-\left(s-1\right){\theta }^{2}-\left(2ns-5{s}^{2}-2n-6s+11\right)\theta +n{s}^{2}-2{s}^{3}-sn+2{s}^{2}+6s-6\\ &=&(s - 1)(-\theta^2 + (-2n + 5s + 11)\theta + sn - 2s^2 + 6). \end{eqnarray*} Moreover, $\theta={\lambda }_{1}\left(D\left(G^*\right)\right)\ge \frac{2W(G^*)}{n}=\frac{n^2+3n-10}{n}\ge n+1$ and $s\ge 2$. To show ${\lambda }_{1}(D(\widetilde{G}))>{\lambda }_{1}\left(D\left(G^*\right)\right)$, we only need to prove $ h_1(\theta) =-\theta^2 + (-2n + 5s + 11)\theta + sn - 2s^2 + 6< 0$ when $\theta\ge n+1$; then $h(\theta)\le h_1(\theta)<0$. Note that $$\frac{-2n+5s+11}{2}=-n+\frac{5}{2}s+\frac{11}{2}< n+1.$$ So when $\theta\ge n+1$, $ h_1(\theta)$ monotonically decreases as $\theta$ increases and $ h_1(\theta)\le h_1(n+1) $. Let $ g(n)=h_1(n+1)=-3n^2 + (6s + 7)n - 2s^2 + 5s + 16 $ where $ n\ge 2s+4 $. Then $$\frac{6{s}+7}{6}=s+\frac{7}{6}<2s+4.$$ So when $n\ge 2s+4$, $ g(n)$ is monotonically decreasing and $ h(\theta)\le h_1(\theta)\le g(n)\le g(2s+4)=-2s^2 - 5s - 4<0$. So Claim 3 holds. If $ n=2s+2 $, observe that $\widetilde{G}\cong S_{n,{\frac{n}{2}-1}}$ and the quotient matrix of the partition $\{V(K_{\frac{n}{2}-1}),V(({\frac{n}{2}+1})K_1)\}$ of $S_{n,{\frac{n}{2}-1}}$ is $$\left(\begin{array}{cc} {\frac{n}{2}-2} & {\frac{n}{2}+1}\\ \ &\ \\ {\frac{n}{2}-1} & n \end{array}\right).$$ By a simple calculation, $$\lambda_{1}(D( S_{n,{\frac{n}{2}-1}}))=\frac{3n-4+\sqrt{n^2+24n-16}}{4}.$$ \begin{claim} If $ n=2s+2 $ and $n\ge 12$, then ${\lambda }_{1}(D(S_{n,{\frac{n}{2}-1}}))> {\lambda }_{1}\left(D\left(G^*\right)\right)$.
\end{claim} Let $n=2s+2$; then $g(n)<0$ when $ n\ge 12 $. So Claim 4 holds. \begin{claim} If $ n=2s+2 $ and $n\le 10$, then ${\lambda }_{1}(D(S_{n,{\frac{n}{2}-1}}))\le {\lambda }_{1}\left(D\left(G^*\right)\right)$ with equality if and only if $G^*\cong S_{n,{\frac{n}{2}-1}}$. \end{claim} If $ s=1$ and $n=4 $, then $S_{n,{\frac{n}{2}-1}}\cong G^*$. If $ s=2 $ and $ n=6 $, then $\lambda_{1}(D( S_{6,2}))=\frac{7+\sqrt{41}}{2}$ and $ q(\frac{7+\sqrt{41}}{2})<0 $, so ${\lambda }_{1}(D(S_{6,2}))< \theta(6)$. If $ s=3 $ and $ n=8 $, then $\lambda_{1}(D( S_{8,3}))=5+\sqrt{15}$ and $ q(5+\sqrt{15})<0 $, so ${\lambda }_{1}(D(S_{8,3}))< \theta(8)$. If $ s=4$ and $ n=10 $, then $\lambda_{1}(D( S_{10,4}))=11$ and $ q(11)<0 $, so ${\lambda }_{1}(D(S_{10,4}))< \theta(10)$. So Claim 5 holds. In conclusion, $G\cong G^*$ if $n\ge 12$ and $G\cong S_{n,{\frac{n}{2}-1}}$ if $4\le n\le 10 $; otherwise $G$ does not attain the minimum distance spectral radius, a contradiction. \qed In some specific applications, one needs to find a matching in a bipartite graph which covers one part. Necessary and sufficient conditions for the existence of such a matching were first given by Hall \cite{hallthm}. Let $ S $ be a set of vertices in a graph $ G $. The set of all neighbours of the vertices in $ S $ is denoted by $ N(S) $. \begin{lemma}[Hall's theorem \cite{hallthm}] \label{hall} A bipartite graph $ G := G[X,Y ] $ has a matching which covers every vertex in $ X $ if and only if\par \centering $ |N(S)| \ge |S| $ for each $ S \subseteq X. $ \end{lemma} \begin{figure}[t] \setlength{\unitlength}{1pt} \begin{center} \begin{picture}(190.0,140.7) \put(43.5,110.2){\oval(87.7,29.7)}\put(146.5,109.5){\oval(86.3,29.0)}\put(43.5,48.6){\oval(87.7,31.2)}\put(146.5,49.3){\oval(87.0,29.7)}\put(125.4,47.9){\circle*{4}} \put(125.4,109.5){\circle*{4}} \put(13.1,108.8){\circle*{4}} \put(43.5,108.8){\circle*{4}} \put(28.3,108.8){\circle*{4}} \put(166.8,48.6){\circle*{4}} \put(166.8,109.5){\circle*{4}} \put(18.9,47.9){\circle*{4}} \put(43.5,47.9){\circle*{4}} \put(76.1,108.8){\circle*{4}} \put(76.1,47.9){\circle*{4}} \qbezier(13.1,108.8)(16.0,78.3)(18.9,47.9) \qbezier(28.3,108.8)(23.6,78.3)(18.9,47.9) \qbezier(43.5,108.8)(31.2,78.3)(18.9,47.9) \qbezier(18.9,47.9)(47.5,78.3)(76.1,108.8) \qbezier(13.1,108.8)(28.3,78.3)(43.5,47.9) \qbezier(28.3,108.8)(35.9,78.3)(43.5,47.9) \qbezier(43.5,108.8)(43.5,78.3)(43.5,47.9) \qbezier(76.1,108.8)(59.8,78.3)(43.5,47.9) \qbezier(13.1,108.8)(44.6,78.3)(76.1,47.9) \qbezier(28.3,108.8)(52.2,78.3)(76.1,47.9) \qbezier(43.5,108.8)(59.8,78.3)(76.1,47.9) \qbezier(76.1,108.8)(76.1,78.3)(76.1,47.9) \qbezier(18.9,47.9)(72.1,78.7)(125.4,109.5) \qbezier(18.9,47.9)(92.8,78.7)(166.8,109.5) \qbezier(43.5,47.9)(84.5,78.7)(125.4,109.5) \qbezier(43.5,47.9)(105.1,78.7)(166.8,109.5) \qbezier(125.4,109.5)(100.8,78.7)(76.1,47.9) \qbezier(76.1,47.9)(121.4,78.7)(166.8,109.5) \qbezier(125.4,109.5)(125.4,78.7)(125.4,47.9) \qbezier(166.8,109.5)(146.1,78.7)(125.4,47.9) \qbezier(125.4,109.5)(146.1,79.0)(166.8,48.6) \qbezier(166.8,109.5)(166.8,79.0)(166.8,48.6) \put(136.3,109.5){\circle*{2}} \put(155.2,109.5){\circle*{2}} \put(145.7,109.5){\circle*{2}} \put(52.2,47.9){\circle*{2}} \put(68.2,47.9){\circle*{2}} \put(60.2,47.9){\circle*{2}} \put(137.0,47.9){\circle*{2}} \put(155.2,47.9){\circle*{2}} \put(145.7,47.9){\circle*{2}} \put(52.9,108.8){\circle*{2}} \put(67.4,108.8){\circle*{2}} \put(60.2,108.8){\circle*{2}} \put(39.2,140.7){\makebox(0,0)[tl]{$S$}} \put(34.1,30.5){\makebox(0,0)[tl]{$N(S)$}}
\put(129.8,139.9){\makebox(0,0)[tl]{$X-S$}} \put(120.4,31.9){\makebox(0,0)[tl]{$Y-N(S)$}} \put(74.7,0.0){\makebox(0,0)[tl]{$G[X,Y]$}} \end{picture} \end{center} \caption{The graph $B_{|S|,|N(S)|}$ in Theorem \ref{bppm}.} \label{hallgraph} \end{figure} For two vertex sets $X$ and $Y$, let $e(X,Y)$ be the set of all edges between $X$ and $Y$. Now we are in a position to prove Theorem \ref{bppm}. {\flushleft \textbf{Proof of Theorem \ref{bppm}.}} Assume to the contrary that $G$ attains the minimum distance spectral radius but has no perfect matching. Let $ G:=G[X,Y] $ be a connected balanced bipartite graph, where $ |X|=|Y|=n $. By Lemma \ref{hall}, since $G$ has no perfect matching, there exists $ S\subset X $ with $ |N(S)|<|S| $. Notice that there exist no edges between $ S $ and $ Y-N(S) $; otherwise, some vertex $v\in S$ would have a neighbor in $Y-N(S)$, contradicting the definition of $N(S)$. Let $ B_{s,p} $ be the connected balanced bipartite graph obtained from $ G $ by joining $ S $ and $ N(S) $, $ X-S $ and $ Y-N(S) $, and by adding all possible edges between $ X-S $ and $ N(S) $, where $|S|=s$, $|N(S)|=p $ and $ 1\le p<s\le n-1$, so that $ B_{s,p}\cong K_{s,p} \cup K_{n-s,n-p} +e(N(S),X-S)$, i.e., $ B_{s,p}\cong K_{n,n} -e(S,Y-N(S))$ (see Fig.~\ref{hallgraph}). Note that $G\cong B_{s,p} $; otherwise, we have ${\lambda }_{1}\left(D\left(B_{s,p}\right)\right)<{\lambda }_{1}\left(D\left(G\right)\right)$ by the Perron--Frobenius theorem, a contradiction. \setcounter{claim}{0} \begin{claim} $\lambda_{1}(D(B_{s,p}))\ge \lambda_{1}(D(B_{s,s-1}))$ with equality if and only if $B_{s,p} \cong B_{s,s-1}$. \end{claim} If $p=s-1$, then $B_{s,p} \cong B_{s,s-1}$. Now we consider $1\le p\le s-2$. Denote the vertex set of $B_{s,s-1}$ by $V(B_{s,s-1})=S\cup (X-S) \cup N(S) \cup (Y-N(S))$ where $|S|=s$ and $|N(S)|=s-1$. By the Perron--Frobenius theorem, we may suppose that the Perron vector $Z$ of $D(B_{s,s-1})$ is positive, and let $Z(v)$ denote the entry of $Z$ corresponding to the vertex $v\in V(B_{s,s-1})$. By symmetry, it is easy to see that all vertices of $S$ (resp. $X-S$, $N(S)$ and $Y-N(S)$) have the same entries in $Z$. Thus we can suppose $Z(u)=z_1$ for any $u\in S$, $Z(v)=z_2$ for any $v\in X-S$, $Z(w)=z_3$ for any $w\in N(S)$ and $Z(z)=z_4$ for any $z\in Y-N(S)$. Therefore, we have \begin{eqnarray*} &&{\lambda }_{1}\left(D\left(B_{s,p}\right)\right)- {\lambda }_{1}(D(B_{s,s-1}))~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\&\ge& {Z}^{t}\left(D\left({B_{s,p}}\right)-D(B_{s,s-1})\right)Z\\ &=&4s(s-1-p)z_1z_3\\&>&0. \end{eqnarray*} So Claim 1 holds. Therefore $G\cong B_{s,s-1}$, for otherwise we reach a contradiction. \begin{claim} $\lambda_{1}(D(B_{s,s-1}))\ge \lambda_{1}(D(B_{n-1,n-2}))$ with equality if and only if $B_{s,s-1} \cong B_{n-1,n-2}$. \end{claim} If $s=n-1$, then $B_{s,s-1} \cong B_{n-1,n-2}$. Now consider $2\le s\le n-2$.
The quotient matrix of the partition $\{S, X-S, N(S), Y-N(S)\}$ of $B_{s,s-1}$, where $|S|=s\le n-2$ and $|N(S)|=s-1$, is $$\left(\begin{array}{cccc} 2s-2 \ \ & s-1 \ \ & 2n-2s \ \ & 3n-3s+3 \\[3mm] s \ \ & 2s-4 \ \ & n-s \ \ & 2n-2s+2 \\[3mm] 2s \ \ & s-1 \ \ & 2n-2s-2 \ \ & n-s+1 \\[3mm] 3s \ \ & 2s-2 \ \ & n-s \ \ & 2n-2s \\[3mm] \end{array}\right),$$ and the characteristic polynomial of this matrix is \begin{eqnarray*} f(\lambda)=\lambda^4 &+& (-4n + 8)\lambda^3 + (3n^2 - 8ns + 8s^2 - 24n - 8s + 24)\lambda^2 \\&+& (8n^2s - 8ns^2 + 12n^2 - 32ns + 40s^2 - 48n - 40s + 32)\lambda\\ &-& 12n^2s^2 + 24s^3n - 12s^4 + 28n^2s - 52ns^2 + 24s^3 + 12n^2 \\&-& 20sn + 36s^2 - 32n - 48s + 16. \end{eqnarray*} We know that $ {\lambda }_{1}(D(B_{s,s-1}))$ is the largest root of $ f(\lambda )=0 $. Since $ {\lambda }_{1}\left(D\left(B_{n-1,n-2}\right)\right)$, written as $\rho $, is the largest root of the equation $$q(\lambda)=\lambda^4 + (-4n + 8)\lambda^3 + (3n^2 - 40n + 40)\lambda^2 + (28n^2 - 144n + 112)\lambda + 20n^2 - 88n + 64=0,$$ we have \begin{eqnarray*} h(\rho)&=&f\left(\rho \right)-q\left(\rho \right)\\ &=&( 8s^2-(8n+8)s + 16n - 16)\rho^2 \\&+& (8sn^2 - 8ns^2 - 16n^2 - 32ns + 40s^2 + 96n - 40s - 80)\rho \\&-& 12n^2s^2 + 24s^3n - 12s^4 + 28sn^2 - 52ns^2 + 24s^3 - 8n^2 - 20sn + 36s^2 + 56n - 48s - 48. \end{eqnarray*} Now we show that $h(\rho)<0$. Firstly, we give a lower bound on $\rho$: $$\frac{2W(B_{n-1,n-2})}{2n}=2n+5-\frac{6}{n}<\rho.$$ By a simple computation, we obtain $$ 8s^2-(8n+8)s + 16n - 16\le 0 \ \mbox{for}\ 2\le s\le n-2,$$ and \begin{eqnarray*} \frac{8sn^2 - 8ns^2 - 16n^2 - 32ns + 40s^2 + 96n - 40s - 80}{-2( 8s^2-(8n+8)s + 16n - 16)}=\frac{n-5}{2}<2n<\rho. \end{eqnarray*} Thus, $h(\rho)$ is monotonically decreasing for $\rho> 2n$ and $h(\rho)<h(2n)$. Write $g(s)$ for $h(2n)$; we only need to prove that $ g(s)<0 $ for $2\le s\le n-2$. Using Matlab, we compute \begin{eqnarray*} g(s)=-4(s - 2)(-s + n - 1)[- 3s^2 + (3s+3)n+19n +4n^2+ 6]. \end{eqnarray*} It is easy to see $s-2\ge 0$ and $-s + n - 1>0$. Define $ r(s)=- 3s^2 + (3s+3)n+19n +4n^2+ 6 $; for $2\le s\le n-2$ and $n\ge 3$, we easily get \begin{eqnarray*} &&r(s)=- 3s^2 + (3s+3)n+19n +4n^2+ 6\\&>&\min \{r(2),\ r(n-2)\}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ &=&\min \{4n^2+25n,\ 4n^2+28n-12\}\\ &>&0. \end{eqnarray*} Therefore, $h(\rho)<h(2n)=g(s)<0$. So Claim 2 holds. In conclusion, $G\cong B_{n-1,n-2}$, which has the minimum distance spectral radius among all $2n$-vertex balanced bipartite graphs without a perfect matching.\qed \vspace{3mm}
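For small $n$, the extremal value $\lambda_{1}(D(B_{n-1,n-2}))$ can be checked numerically; the following sketch (our own illustration, assuming the Python libraries networkx and numpy) builds $B_{n-1,n-2}$ and compares its distance spectral radius with the lower bound $\frac{2W(B_{n-1,n-2})}{2n}=2n+5-\frac{6}{n}$ used in the proof:

\begin{verbatim}
import networkx as nx
import numpy as np

def B(n):
    # B_{n-1,n-2}: K_{n,n-2} with parts {0,...,n-1}, {n,...,2n-3},
    # plus two pendent vertices 2n-2 and 2n-1 attached to vertex 0.
    G = nx.complete_bipartite_graph(n, n - 2)
    G.add_edges_from([(0, 2 * n - 2), (0, 2 * n - 1)])
    return G

for n in (3, 4, 5):
    D = np.asarray(nx.floyd_warshall_numpy(B(n)))  # distance matrix
    lam1 = np.linalg.eigvalsh(D)[-1]               # spectral radius
    print(n, round(float(lam1), 4), 2 * n + 5 - 6 / n)
\end{verbatim}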
\section{Introduction and Main Result} The general problem discussed here is the persistence of quasi-periodic solutions of linear or integrable equations after Hamiltonian perturbation. There have been many remarkable results in KAM (Kolmogorov--Arnold--Moser) theory of Hamiltonian PDEs, achieved either by methods from the finite dimensional KAM theory \cite{Ba3, CY, E, EK, GXY, GY2, GY3, GY4, GY5,GY6, KaP, K1, K, KP, P1, P2, P3, PP, PP2, W, XQY1, XQY2, Yuan2}, or by a Newtonian scheme developed by Craig, Wayne and Bourgain \cite{B1, B2, B3, B4, B5, B6, BW, CW, Wa}. The advantage of the method from the finite dimensional KAM theory is the construction of a local normal form in a neighborhood of the obtained solutions, in addition to the existence of quasi-periodic solutions. The normal form is helpful for understanding the dynamics of the corresponding equations. For example, one obtains linear stability and zero Lyapunov exponents. The scheme of Craig--Wayne--Bourgain avoids the cumbersome second Melnikov conditions by solving angle-dependent homological equations. The method is less Hamiltonian and more flexible than the KAM scheme in dealing with resonant cases. All those methods are well developed for one dimensional Hamiltonian PDEs. However, they meet difficulties in higher dimensional Hamiltonian PDEs. Bourgain \cite{B1} made the first breakthrough by proving that the two dimensional nonlinear Schr\"odinger equations admit small--amplitude quasi--periodic solutions. Later, in \cite{B4}, he improved his method and proved that the higher dimensional nonlinear Schr\"odinger and wave equations admit small--amplitude quasi--periodic solutions. Recently, W.--M. Wang \cite{Wa} proved that the energy supercritical nonlinear Schr\"odinger equations admit small--amplitude quasi--periodic solutions. Constructing quasi-periodic solutions of higher dimensional Hamiltonian PDEs by methods developed from the finite dimensional KAM theory appeared later. Geng--You \cite{GY3, GY4} proved that the higher dimensional nonlinear beam equations and nonlocal Schr\"odinger equations admit small--amplitude linearly--stable quasi--periodic solutions. The breakthrough of constructing quasi-periodic solutions for the more interesting higher dimensional Schr\"odinger equation by a modified KAM method was made recently by Eliasson--Kuksin \cite{EK}. They proved that the higher dimensional nonlinear Schr\"odinger equations admit small--amplitude linearly--stable quasi--periodic solutions. Quasi--periodic solutions of the two dimensional cubic Schr\"odinger equation $$ {\rm i}u_t-\tr u+|u|^2 u=0,\qquad x\in \T^2,\ t\in \R, $$ \noindent with periodic boundary conditions were obtained by Geng--Xu--You \cite{GXY}. By carefully choosing tangential sites $\{i_1, \cdots, i_b\}\in\Z^2$, the authors proved that the above nonlinear Schr\"odinger equation admits a family of small-amplitude quasi-periodic solutions (see also \cite{PP2}). Very recently, Eliasson--Grebert--Kuksin \cite{EGK} proved that the higher dimensional nonlinear beam equations admit small--amplitude quasi--periodic solutions. \sss \sss In this paper, our aim is to pursue further investigations of the 2D nonlinear Schr\"odinger equation by developing the methods of \cite{GXY}. In \cite{GXY}, the authors require that the nonlinearity be independent of the space variable, so that a great deal of technical complexity concerning the unbounded multiple eigenvalues is avoided.
More precisely, when considering the nonlinear Schr\"odinger equation, especially in space dimension larger than one, a significant problem appears due to the presence of clusters of normal frequencies. Here the normal frequencies may have unbounded multiplicity because the equation \beq m^2_1+m^2_2=R^2,\quad m_1,m_2\in\Z,\label{mmR}\eeq (lattice points on a circle) may have a large number of solutions for a given $R$. It is important for our analysis that the integer solutions of (\ref{mmR}) appear in well-separated small clusters (of cardinality $\leq2$) and that the total number of integer solutions is at most $e^{\frac{\log R}{\log\log R}}\ll R^\varepsilon$. The idea of the measure estimate comes from Geng--You \cite{GY6}. We use an elementary repeated limit in place of the Lipschitz domains of Eliasson--Kuksin \cite{EK}; thus our measure estimates are easier and the whole proof is more KAM--like. More concretely, we consider the $2$-dimensional nonlinear Schr\"{o}dinger equation \beq\label{nonlinearschro1} {\rm i}u_t-\triangle u +|u|^2u+\frac{\partial{f(x,u,\bar u)}}{\partial{\bar u}}=0, \quad t\in\Bbb R,\ x\in\Bbb T^2 \eeq with periodic boundary conditions $$ u(t,x_1+2\pi,x_2)=u(t,x_1,x_2+2\pi)=u(t,x_1,x_2), $$ where $\displaystyle f(x,u,\bar u)=\sum_{j,l,j+l\geq6}a_{jl}(x)u^j\bar u^l$, $a_{jl}=a_{lj}$, is a real analytic function in a neighborhood of the origin. The operator $A=-\triangle$ with periodic boundary conditions has eigenvalues $\{\lambda_n\}$ satisfying $$\lambda_n=|n|^2=|n_1|^2+|n_2|^2, \quad n=(n_1,n_2)\in \Z^2,$$ and the corresponding eigenfunctions $\phi_n(x)=\frac{1}{2\pi}e^{{\rm i}\la n,x\ra}$ form a basis in the domain of the operator. A finite set $S=\{i_1,\cdots,i_b\}\subset\Z^2$ is called \emph{admissible} if\\ \noindent 1. No three of them are vertices of a rectangle.\\ 2. For any $n\in \Z^2\setminus S$, there exists at most one triplet $\{i,j,m\}$ with $i,j\in S$, $m\in \Z^2\setminus S$ such that $n-m+i-j=0$ and $|n|^2-|m|^2+|i|^2-|j|^2=0$. If such a triplet exists, we say that $n,m$ are resonant of the first type. By definition, $n$ and $m$ are mutually uniquely determined, and we say that $(n,m)$ is a resonant pair of the first type. Geometrically, $(n,m,i,j)$ forms a rectangle with $n,m$ being two adjacent vertices.\\ 3. For any $n\in \Z^2\setminus S$, there exists at most one triplet $\{i,j,m\}$ with $i,j\in S$, $m\in \Z^2\setminus S$ such that $n+m-i-j=0$ and $|n|^2+|m|^2-|i|^2-|j|^2=0$. If such a triplet exists, we say that $n,m$ are resonant of the second type. By definition, $n$ and $m$ are mutually uniquely determined, and we say that $(n,m)$ is a resonant pair of the second type. Geometrically, $(n,m,i,j)$ forms a rectangle with $n,m$ being two diagonal vertices.\\ 4. No $n\in \Z^2\setminus S$ is resonant of both the first type and the second type, i.e., there exist no $i,j,f,g\in S$ and $m,m'\in \Z^2\setminus S$ such that $$ \left\{ \begin{array}{l} n-m+i-j=0\\ |n|^2-|m|^2+|i|^2-|j|^2=0\\ n+m'-f-g=0\\ |n|^2+|m'|^2-|f|^2-|g|^2=0 \end{array} \right. $$ Geometrically, any two of the above defined rectangles cannot share a vertex in $\Z^2\setminus S$. In Appendix A of \cite{GXY}, a concrete way of constructing the admissible set is given. It is plausible that any randomly chosen set $S$ is almost surely admissible. Now we state the main theorem as follows. \begin{Theorem}\label{main} Let $S=\{i_1,\cdots,i_b\}\subset\Z^2$ be an admissible set.
There exists a Cantor set $\Cal C$ of positive measure such that for any $\xi=(\xi_1,\cdots,\xi_b)\in\Cal C$, when $\xi_i^2+\xi_j^2<14\xi_i\xi_j$, the nonlinear Schr\"odinger equation (\ref{nonlinearschro1}) admits a small--amplitude, quasi--periodic solution of the form \[ u(t, x)=\sum_{j=1}^b \sqrt{\xi_j} e^{{\rm i}\omega_j t}\phi_{i_j}+O(|\xi|^{\frac32}),\quad \omega_j=|i_j|^2+O(|\xi|).\] \end{Theorem} \sss \noindent {\bf Remark } We require $\xi_i^2+\xi_j^2<14\xi_i\xi_j$ so that the obtained tori are partially hyperbolic. When $\xi_i^2+\xi_j^2\geq 14\xi_i\xi_j$, one can prove the existence of elliptic tori; however, the proof is more complicated and will be considered in a forthcoming paper. This paper is organized as follows: In Section 2 we give an infinite dimensional KAM theorem; in Section 3, we give its application to two-dimensional Schr\"odinger equations. The proof of the KAM theorem is given in Sections 4, 5 and 6. Some technical lemmas are given in the Appendix. \section {An Infinite Dimensional KAM Theorem for Hamiltonian Partial Differential Equations} \sss In this section, we will formulate an infinite dimensional KAM theorem that can be applied to two-dimensional Schr\"odinger equations under periodic boundary conditions. We start by introducing some notation. For given $b$ vectors in $\Z^2$, say $\{i_1, \cdots, i_b\}$, we denote $\Z^2_1=\Z^2\setminus \{ i_1, \cdots, i_b\}$. Let $w=(\cdots, w_n,\cdots)_{n\in \Z^2_1}$, and let $\bar w=(\cdots,\bar w_n,\cdots)_{n\in \Z^2_1}$ be its complex conjugate. We introduce the weighted norm $$\|w\|_{\rho} =\sum_{{n\in\Z^2_1}}|w_n|e^{|n|\rho},$$ where $|n|=\sqrt{n_1^2+n_2^2}$, $n=(n_1,n_2)\in \Z^2$ and $\rho > 0$. Denote a neighborhood of $\T^b\times\{I=0\}\times\{w=0\}\times\{\bar w=0\}$ by $$D_\rho(r,s)=\{(\theta,I,w,\bar w):|{\rm Im} \theta|<r,|I|<s^2,{\|w\|}_{\rho}<s, {\|\bar w\|}_{\rho}<s\},$$ where $|\cdot|$ denotes the sup-norm of complex vectors. Moreover, we denote by $\Cal O$ a positive--measure parameter set in $\R^b$. Let $\alpha\equiv (\cdots,\alpha_n,\cdots)_{n\in\Z_1^2}$ and $\beta \equiv (\cdots, \beta _n, \cdots)_{n\in\Z_1^2}$, where $\alpha_n,\beta_n\in \N$ and only finitely many components are non-zero. The product $w^{\alpha} \bar w^{\beta }$ denotes $\prod_n w_n^{\alpha_n}\bar w_n^{\beta_n}$. For any given function \beq\label{2.2} F(\theta, I, w, \bar w)=\sum_{\alpha,\beta } F_{\alpha\beta}(\theta, I)w^{\alpha} \bar w^{\beta },\eeq where $\displaystyle F_{\alpha\beta}=\sum_{k\in\Z^b,l\in \N^b}F_{kl\alpha\beta }(\xi)I^le^{{\rm i} \la k,\theta\ra}$ is a $C_W^4$ function of the parameter $\xi$ in the sense of Whitney, we denote \beq\label{2.4}\|F\|_{\mathcal O}= \sum_{\alpha,\beta,k,l } |F_{kl\alpha \beta }|_{\mathcal O}\ |I^{l}|e^{|k||{\rm Im} \theta|}\,|w^{\alpha}||\bar w^{\beta }|\eeq where $|F_{kl\alpha \beta }|_{\mathcal O}$ is short for $$|F_{kl\alpha \beta }|_{\cal O}\equiv \sup_{\xi\in \Cal O}\sum_{0\leq d\leq 4}|{\partial_\xi^d F_{kl\alpha \beta }}|$$ \noindent (the derivatives with respect to $\xi$ are in the sense of Whitney). We define the weighted norm of $F$ by \begin{equation}\label{2.3} \|F\|_{ D_\rho(r,s) ,\mathcal O}\equiv \sup_{D_\rho(r,s)}\|F\|_{\mathcal O}. \end{equation} To a function $F$, we associate a Hamiltonian vector field defined by $$ X_F=(F_I, -F_\theta, \{{\rm i}F_{w_n}\}_{n\in \Z_1^2}, \{-{\rm i}F_{\bar w_n}\}_{n\in\Z_1^2}).$$ Its weighted norm is defined by \footnote{The norm $\|\cdot\|_{D_\rho( r,s), \cal O}$ for scalar functions is defined in (\ref{2.3}).
The vector function $G: D_\rho( r,s)\times {\cal O}\to \C^m$ ($m<\infty$) is similarly defined as $\|G\|_{D_\rho( r,s), \cal O}=\sum_{i=1}^m\|G_i\|_{D_\rho( r,s), \cal O}$.} \begin{eqnarray} \|X_F\|_{\!{}_{ D_\rho(r,s) , \cal O}}&\equiv& \|F_I\|_{ D_\rho(r,s) , \cal O}+ \frac 1{s^2}\|F_\theta\|_{ D_\rho(r,s) , \cal O}\nonumber\\ &+&\sup_{D_\rho(r,s)}[ \frac 1s\sum_{n\in\Z_1^2} \|F_{w_n}\|_{\cal O}e^{|n|\rho}+ \frac 1s\sum_{n\in\Z_1^2} \|F_{\bar w_n}\|_{\cal O} e^{|n|\rho} ]\label{2.6} \end{eqnarray} Suppose that $S$ is an admissible set. Let ${\cal L}_2$ be the subset of $\Z^2_1$ with the following property: for each $n\in {\cal L}_2$, there exists a unique triplet $(i,j,m)$ with $m\in \Z^2_1$, $i,j\in S$ such that $$-i-j+n+m=0,\quad -|i|^2-|j|^2+|n|^2+|m|^2=0.$$ We now describe a family of Hamiltonians studied in this paper. Let $$H_0=N+{\cal B}+\bar{\cal B},$$ \begin{eqnarray*} N &=&\la\omega(\xi),I\ra+ \sum_{n\in \Z_1^2\backslash{{\cal L}_2}}\Omega_n(\xi)w_n \bar w_n+\sum_{n'\in {{\cal L}_2}}(\Omega_{n'}(\xi)-\omega_{i'}(\xi))w_{n'} \bar w_{n'} \end{eqnarray*} Recall that $(i',j')$ is uniquely determined by the corresponding resonant pair $(n',m')$ in ${\cal L}_2$. $${\cal B}=\sum_{n'\in {\cal L}_2}a_{n'}(\xi)w_{n'} w_{m'}$$ $$\bar{\cal B}=\sum_{n'\in {\cal L}_2}\bar a_{n'}(\xi)\bar w_{n'} \bar w_{m'}$$ where $\xi\in \Cal O$ is a parameter and the phase space is endowed with the symplectic structure $\displaystyle dI\wedge d\theta + {\rm i} \sum_{n\in \Z_1^2} dw_n \wedge d \bar w_n$. \sss For each $\xi\in \Cal O$, the Hamiltonian equation for $H_0$ admits the special solutions $(\theta, 0, 0, 0)\to (\theta+\omega t, 0,0,0)$, corresponding to an invariant torus in the phase space. \sss Consider now the perturbed Hamiltonian \begin{equation}\label{hamH} H=H_0+P=N+{\cal B}+\bar{\cal B}+P(\theta,I,w,\bar w, \xi). \end{equation} Our goal is to prove that, for most values of the parameter $\xi \in \Cal O$ (in the sense of Lebesgue measure), the Hamiltonians $H=N+{\cal B}+\bar{\cal B}+P$ still admit invariant tori provided that $\|X_P\|_{\!{}_{ D_\rho(r,s) , \cal O}}$ is sufficiently small. \sss Decomposition of $\Z_1^2\backslash{{\cal L}_2}$. For a nonnegative integer $\Delta$ we define an equivalence relation on $\Z_1^2\backslash{{\cal L}_2}$, generated by the pre-equivalence relation $$a\sim b \Longleftrightarrow \{ |a|^2=|b|^2 ,\ |a-b|\leq\Delta\}. $$ Let $[a]_\Delta$ denote the equivalence class (block) of $a$ and let $(\Z_1^2\backslash{{\cal L}_2})_\Delta$ be the set of equivalence classes. It is trivial that each block $[a]_\Delta$ is finite (we will write $[\cdot]$ for $[\cdot]_\Delta$). Case 1: if $|a|\leq \Delta$, we know $\sharp\{b:|a|=|b|,b\in \Z^2\}\leq e^{\frac{\log\Delta}{\log\log\Delta}}\ll \Delta^\varepsilon$; Case 2: if $|a|>\Delta$, we have $\sharp\{b:|a|=|b|,|a-b|\leq\Delta^{\frac{1}{3}},b\in \Z^2\}\leq2$. In order to have a compact formulation when solving the homological equations, we rewrite $H$ in matrix form. Let $z_{[n]}=(w_i)_{i\in [n]}$, $\bar z_{[n]}=(\bar{w_i})_{i\in [n]}$, and set $z_n=w_n$, $\bar{z}_n=\bar{w}_n$ otherwise.
\begin{eqnarray*} H&=&\la\omega(\xi),I\ra+ \sum_{n\in \Z_1^2\backslash{{\cal L}_2}}\Omega_n(\xi)w_n \bar w_n+\sum_{n'\in {{\cal L}_2}}(\Omega_{n'}(\xi)-\omega_{i'}(\xi))w_{n'} \bar w_{n'}+{\cal B}+\bar{\cal B}+P\\ &=&\la\omega(\xi),I\ra +\sum_{[n]} \la A_{[n]} z_{[n]}, \bar z_{[n]}\ra+\sum_{n'\in {{\cal L}_2}}(\Omega_{n'}(\xi)-\omega_{i'}(\xi))z_{n'} \bar z_{n'} +{\cal B}+\bar{\cal B}+P \end{eqnarray*} where $A_{[n]}$ is a $\sharp [n]\times\sharp [n]$ matrix.\\ We consider Hamiltonians $H$ satisfying the following hypotheses: \bs \noindent $(A1)$ {\it Nondegeneracy:} The map $\xi\to \omega(\xi)$ is a $C^4_W(\Cal O)$ diffeomorphism between $\Cal O$ and its image. \bs \noindent $(A2)$ {\it Asymptotics of normal frequencies:} \begin{equation}\label{asymp1} \Omega_n=\varepsilon^{-a}|n|^2+\tilde{\Omega}_n,\quad a\geq0,\ n\in\Z_1^2\backslash{{\cal L}_2} \end{equation} where the $\tilde{\Omega}_n$'s are $C^4_W(\Cal O)$ functions of $\xi$ with $C^4_W(\Cal O)$-norm bounded by some positive constant $L$. \bs \noindent $(A3)$ {\it Melnikov's non--resonance conditions:} For $n\in\Z_1^2\backslash{{\cal L}_2}$, let $$A_{[n]}=\Omega_{[n]}+(P_{i j }^{011})_{i\in [n],j\in [n]}=(\Omega_{i j }+P_{i j }^{011})_{i\in [n],j\in [n]}$$ where $\Omega_{i j }=0$ if $i\neq j$, $\Omega_{i j }=\Omega_i$ if $i = j$, and $P_{i j }^{011}=0$ when $|i-j|> K$. The $A_{[n]}$'s are $C^4_W$ functions of $\xi$ with $C^4_W$-norm bounded by some positive constant $L$, that is to say, $$\sup_{\xi\in \Cal O}\max_{0< d\leq4}\|{\partial^{d}_\xi A_{[n]}}\|\leq L.$$ We assume that $\omega(\xi)$, $A_{[n]}(\xi)\in C^4_W(\Cal O)$ and that there exist $\gamma,\tau>0$ such that, for $|k|\leq K$, \[ |\langle k,\omega\rangle|\ge \frac{\gamma}{K^\tau},\quad k\neq 0,\] \[|\langle k,\omega\rangle \pm\widetilde{\lambda}_j|\ge \frac{\gamma}{K^\tau},\quad j\in{[n]}, \] \[|\langle k,\omega\rangle \pm \widetilde{\lambda}_i\pm \widetilde{\lambda}_j|\ge \frac{\gamma}{K^\tau},\quad i\in{[m]},\ j\in{[n]}, \] where $\widetilde{\lambda}_i,\widetilde{\lambda}_j$ are the eigenvalues of $A_{[m]}$ and $A_{[n]}$, respectively. Let $${\cal A}_n=A_{[n]},\quad n\in\Z_1^2\backslash{{\cal L}_2}$$ $$ {\cal A}_n=\left( \begin{array}{cccc} \Omega_{n}-\omega_{i} & -\frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}\\ \frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}& -(\Omega_{m}-\omega_{j}) \end{array} \right),\quad n\in {\cal L}_2$$ where $(n,m)$ are resonant pairs and $(i,j)$ is uniquely determined by $(n,m)$ in ${\cal L}_2$. \\ We assume that $\omega(\xi)$, ${\cal A}_n(\xi)\in C^4_W(\Cal O)$ and that there exist $\gamma,\tau>0$ such that\footnote{The tensor product (or direct product) of an $m\times n$ matrix $A=(a_{ij})$ and a $k\times l$ matrix $B$ is the $(mk)\times(nl)$ matrix defined by $$ A\otimes B=(a_{ij}B)=\left( \begin{array}{cccc} a_{11}B &\cdots& a_{1n}B\\ \cdots&\cdots&\cdots\\ a_{m1}B&\cdots & a_{mn}B \end{array} \right).$$ $\|\cdot\|$ for a matrix denotes the operator norm, i.e., $\|M\|=\sup_{|y|=1}|My|$.
Recall that $\omega$ and ${\cal A}_n,{\cal A}_{n'}$ depend on $\xi$.} (here $I_2$ is the $2\times2$ identity matrix) $$|det(\la k,\omega\ra I\pm{\cal A}_n\otimes I_2 \pm I_2\otimes {\cal A}_{n'})|\geq \frac{\gamma}{K^\tau},\quad k\neq0,\ n,n'\in{\cal L}_2.$$ {\bf We assume that the eigenvalues of ${\cal A}_n$ ($n\in {\cal L}_2$) have non--zero imaginary parts, so that the obtained tori are partially hyperbolic.} \bs \noindent $(A4)$ {\it Regularity of ${\cal B}+\bar{\cal B}+P$:} ${\cal B}+\bar{\cal B}+P$ is real analytic in $I,\theta,w,\bar w$ and Whitney smooth in $\xi$; in addition $$\|X_{\cal B}\|_{D_\rho(r,s), \cal O}<1,\quad \|X_P\|_{D_\rho(r,s), \cal O}<\varepsilon.$$ \bs \noindent $(A5)$ {\it T\"{o}plitz-Lipschitz property:} For any fixed $n, m\in {\Z}^2$, $c\in\Z^2\setminus\{0\}$, the limits $$\lim_{t\to \infty}\frac{\partial^2 ({\cal B}+P)}{\partial w_{n+tc}\partial w_{m-tc}}, \quad \lim_{t\to \infty}\frac{\partial^2 (\sum_{n\in {\Z}_1^2}\tilde{\Omega}_nw_n\bar w_n+P)}{\partial w_{n+tc}\partial \bar w_{m+tc}},\quad \lim_{t\to \infty}\frac{\partial^2 (\bar{\cal B}+P)}{\partial \bar w_{n+tc}\partial \bar w_{m-tc}}$$ exist. Moreover, there exists $K>0$ such that, when $|t|>K$, $N+{\cal B}+\bar{\cal B}+P$ satisfies \[ \|\frac{\partial^2 ({\cal B}+P)}{\partial w_{n+tc}\partial w_{m-tc}} -\lim_{t\to \infty}\frac{\partial^2 ({\cal B}+P)}{\partial w_{n+tc}\partial w_{m-tc}} \|_{\!{}_{D_\rho(r,s), \Cal O}}\le \frac{\varepsilon}{|t|}e^{-|n+m|\rho}, \] \[ \|\frac{\partial^2 (\sum_{n\in {\Z}_1^2}\tilde{\Omega}_nw_n\bar w_n+P)}{\partial w_{n+tc}\partial \bar w_{m+tc}} -\lim_{t\to \infty} \frac{\partial^2 (\sum_{n\in {\Z}_1^2}\tilde{\Omega}_nw_n\bar w_n+P)} {\partial w_{n+tc}\partial \bar w_{m+tc}}\|_{\!{}_{D_\rho(r,s), \Cal O}}\le \frac{\varepsilon}{|t|}e^{-|n-m|\rho}, \] \[ \|\frac{\partial^2 (\bar{\cal B}+P)}{\partial \bar w_{n+tc}\partial \bar w_{m-tc}}- \lim_{t\to \infty}\frac{\partial^2 (\bar{\cal B}+P)}{\partial \bar w_{n+tc}\partial \bar w_{m-tc}} \|_{\!{}_{D_\rho(r,s), \Cal O}}\le \frac{\varepsilon}{|t|}e^{-|n+m|\rho}. \] \bs \noindent Now we are ready to state an infinite dimensional KAM theorem. \begin{Theorem}\label{KAM} Assume that the Hamiltonian $H_0+P$ in (\ref{hamH}) satisfies $(A1)$--$(A5)$, and let $\gamma>0$ be small enough. There exists a positive constant $\varepsilon=\varepsilon(b, K, \tau,\gamma,r,s,\rho)$ such that if $\|X_P\|_{\!{}_{D_\rho(r,s), \cal O}}<\varepsilon$, then the following holds true: there exist a Cantor set $\Cal O_\gamma\subset\Cal O$ with ${\rm meas}(\Cal O\setminus \Cal O_\gamma)=O(\gamma^{\frac14})$ and two maps (analytic in $\theta$ and $C_W^4$ in $\xi$) $$\Psi: \T^b\times \Cal O_\gamma\to D_\rho(r,s),\ \ \ \ \tilde\omega:\Cal O_\gamma\to \R^b,$$ where $\Psi$ is $\frac{\varepsilon}{\gamma^4}$-close to the trivial embedding $\Psi_0:\T^b\times \Cal O\to \T^b\times\{0,0,0\}$ and $\tilde \omega$ is $\varepsilon$-close to the unperturbed frequency $\omega$, such that for any $\xi\in \Cal O_\gamma$ and $\theta\in \T^b$, the curve $t\to \Psi(\theta+\tilde\omega(\xi) t,\xi)$ is a quasi-periodic solution of the Hamiltonian equations governed by $H=H_0+P$. The obtained tori are partially hyperbolic.
\end{Theorem} \section{Application to the Two-dimensional Schr\"odinger Equations} \sss \noindent We consider the two--dimensional nonlinear Schr\"odinger equation \beq\label{nonlinearschro} {\rm i}u_t-\Delta u+|u|^2u+\frac{\partial{f(x,u,\bar u)}}{\partial{\bar u}}=0, \qquad x\in \T^2,\ t\in \R \eeq with periodic boundary conditions $$ u(t,x_1+2\pi,x_2)=u(t,x_1,x_2+2\pi)=u(t,x_1,x_2), $$ where $\displaystyle f(x,u,\bar u)=\sum_{j,l,j+l\geq6}a_{jl}(x)u^j\bar u^l$, $a_{jl}=a_{lj}$, is a real analytic function in a neighborhood of the origin. The operator $A=-\triangle$ with periodic boundary conditions has eigenvalues $\{\lambda_n\}$ satisfying $$\lambda_n=|n|^2=|n_1|^2+|n_2|^2,\quad n=(n_1,n_2)\in \Z^2,$$ and the corresponding eigenfunctions $\phi_n(x)=\frac{1}{2\pi}e^{{\rm i}\la n,x\ra}$ form a basis in the domain of the operator. Equation (\ref{nonlinearschro}) can be rewritten as a Hamiltonian equation \beq\label{3.7+7} u_t={\rm i}\frac{\partial H}{\partial \bar u} \eeq and the corresponding Hamiltonian is \beq\label{hamiltoniann} H=\la Au,u\ra +\frac{1}{2}\int_{\T^2} |u|^4\ dx+\int_{\T^2} f(x,u,\bar u)\ dx,\eeq where $\la \cdot,\cdot\ra$ denotes the inner product in $L^2$. Let \[u(x)=\sum_{n\in \Z^2}{q_n}\phi_n(x). \] System (\ref{3.7+7}) is then equivalent to the lattice Hamiltonian equations \begin{equation}\label{3.8+8} \dot q_n={\rm i}(\lambda_n q_n+ \frac{\partial G}{\partial \bar q_n}), \quad G\equiv \frac{1}{8\pi^2}\sum_{i-j+n-m=0}q_i\bar q_jq_n\bar q_m+\int_{\T^2} f(x,u,\bar u)\ dx ,\end{equation} with corresponding Hamiltonian function \begin{eqnarray} H&=& \sum_{n\in \Z^2}\lambda_nq_n\bar q_n+\frac{1}{8\pi^2}\sum_{i-j+n-m=0}q_i\bar q_jq_n\bar q_m+\int_{\T^2} f(x,\sum_{n\in \Z^2}{q_n}\phi_n(x),\sum_{n\in \Z^2}{\bar{q}_n}\bar{\phi}_n(x))\ dx\nonumber\\ &= &\sum_{n\in \Z^2}\lambda_n|q_n|^2+G\label{PH}\\ G&=& \frac{1}{8\pi^2}\sum_{i-j+n-m=0}q_i\bar q_jq_n\bar q_m+\int_{\T^2} f(x,\sum_{n\in \Z^2}{q_n}\phi_n(x),\sum_{n\in \Z^2}{\bar{q}_n}\bar{\phi}_n(x))\ dx\nonumber \end{eqnarray} As in \cite {KP,P1,GY2}, the perturbation $G$ in (\ref{3.8+8}) has the following regularity property. \begin{Lemma}\label{regularityGG} For any fixed $\rho>0$, the gradient $G_{\bar q}$ is real analytic as a map in a neighborhood of the origin with \beq \|G_{\bar q}\|_{\rho}\le c\|q\|_{\rho}^3. \label{3.16+6} \eeq \end{Lemma} \proof \begin{eqnarray*} \|G_{\bar q}\|_{\rho}&=&\sum_{n\in\Z^2}|G_{\bar q_n}|e^{|n|\rho}\\ &\leq&c\sum_{{n,\alpha,\beta-e_n,|\alpha|+|\beta-e_n|= 3}}|q^\alpha\bar q^{\beta-e_n}|e^{|n|\rho}\\ &\leq&c\sum_{\alpha,\beta-e_n,|\alpha|+|\beta-e_n|= 3}|q^\alpha\bar q^{\beta-e_n}|e^{|\alpha|\rho}e^{|\beta-e_n|\rho}\\ &\leq&c\|q\|_{\rho}^3. \end{eqnarray*}\qed For an admissible set of tangential sites $S=\{i_1,\cdots,i_b\}\subset\Z^2$, we have a nice normal form for $H$.\\ \begin{Proposition}\label{P1} Let $S$ be admissible. For the Hamiltonian function (\ref{PH}), there is a symplectic transformation $\Psi$ such that \begin{equation}\label{P11}H\circ \Psi=\la \omega,I\ra+\la \Omega w,w\ra+{\cal A}+{\cal B}+\bar{\cal B}+P \end{equation} with $$ \left\{ \begin{array}{lcl} \omega_i(\xi)=\displaystyle\varepsilon^{-3}|i|^2-\frac{1}{4\pi^2}\xi_i+\sum_{j\in S}\frac{1}{2\pi^2}\xi_j\\ \Omega_n=\displaystyle\varepsilon^{-3}|n|^2+\sum_{j\in S}\frac{1}{2\pi^2}\xi_j \end{array} \right.
$$ $${\cal A}=\frac{1}{2\pi^2}\sum_{n\in {\cal L}_1}\sqrt{\xi_i\xi_j}w_n\bar w_me^{i\theta_i-i\theta_j}$$ $${\cal B}=\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}w_{n'} w_{m'}e^{-i\theta_{i'}-i\theta_{j'}}$$ $$\bar{\cal B}=\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2} \sqrt{\xi_{i'}\xi_{j'}}\bar w_{n'} \bar w_{m'}e^{i\theta_{i'}+i\theta_{j'}}$$ \begin{eqnarray} |P|=&&O(\varepsilon^2|I|^2+\varepsilon^2|I|\|w\|^2_\rho+\varepsilon\xi^{\frac{1}{2}}\|w\|^3_\rho+\varepsilon^2\|w\|^4_\rho+\varepsilon\xi^3\nonumber\\ &&+\varepsilon^2\xi^{\frac{5}{2}}\|w\|_\rho+\varepsilon^3\xi^2\|w\|^2_\rho+\varepsilon^4\xi^{\frac{3}{2}}\|w\|^3_\rho). \end{eqnarray} \end{Proposition} \proof The proof consists of several symplectic changes of variables. Firstly, let \begin{equation}\label{P12} F=\sum_{{{i-j+n-m=0}\atop{|i|^2-|j|^2+|n|^2-|m|^2\neq0}}\atop\sharp S\cap\{i,j,n,m\}\geq2}\frac{i}{8\pi^2(\lambda_i-\lambda_j+\lambda_n-\lambda_m)}q_i\bar q_jq_n\bar q_m, \end{equation} and let ${X}^1_F$ be the time-one map of the flow of the associated Hamiltonian system. The change of variables ${X}^1_F$ sends $H$ to \begin{eqnarray} H\circ{X}^1_F&=&H+ \{H,F\}+\int_0^1 (1-t)\{\{H,F\},F\}\circ \phi_F^{t}dt\nonumber\\ &=&\sum_{i\in S}\lambda_i|q_i|^2+\sum_{i\in \Z^2_1}\lambda_i|w_i|^2+\sum_{i\in S}\frac{1}{8\pi^2}|q_i|^4\\ &+&\sum_{i,j\in S,i\neq j}\frac{1}{2\pi^2}|q_i|^2|q_j|^2+\sum_{i\in S,j\in \Z^2_1}\frac{1}{2\pi^2}|q_i|^2|w_j|^2\\ &+&\sum_{n\in {\cal L}_1}\frac{1}{2\pi^2}q_i\bar q_jw_n\bar w_m+\sum_{{n'}\in {\cal L}_2}\frac{1}{2\pi^2}(q_{i'}q_{j'}\bar w_{n'}\bar w_{m'}+\bar q_{i'}\bar q_{j'} w_{n'} w_{m'})\label{P13}\\ &+&O(|q|\|w\|^3_\rho+\|w\|^4_\rho+|q|^6+|q|^5\|w\|_\rho+|q|^4\|w\|_\rho^2+|q|^3\|w\|_\rho^3).\nonumber \end{eqnarray} We recall that $(n,m)$ are resonant pairs and $(i,j)$ is uniquely determined by $(n,m)$; $(n',m')$ are resonant pairs and $(i',j')$ is uniquely determined by $(n',m')$ in (\ref{P13}).
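\sss \noindent {\bf Remark } Note that no small--divisor problem occurs at this first step: the divisors $\lambda_i-\lambda_j+\lambda_n-\lambda_m$ in (\ref{P12}) are integers, hence at least $1$ in absolute value whenever they are non-zero. As a minimal illustration (the vectors below are chosen purely for this example), take $i=(2,0)$, $j=(1,0)$, $n=(0,0)$, $m=(1,0)$; then $$i-j+n-m=0,\qquad \lambda_i-\lambda_j+\lambda_n-\lambda_m=4-1+0-1=2,$$ so the corresponding monomial is removed by ${X}^1_F$ with a divisor bounded away from zero. By contrast, for $i=(1,0)$, $j=(0,1)$, $n=(0,1)$, $m=(1,0)$ both $i-j+n-m$ and $\lambda_i-\lambda_j+\lambda_n-\lambda_m$ vanish, and such resonant monomials are kept in the normal form.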
Next we introduce standard action-angle variables in the tangential space $$q_j=\sqrt{I_j+\xi_j}e^{{\rm i}\theta_j},\bar q_j=\sqrt{I_j+\xi_j}e^{-{\rm i}\theta_j},j\in S,$$ and $$q_n=w_n, \bar q_n=\bar w_n, n\in \Z^2_1,$$ we have \begin{eqnarray*} H\circ{X}^1_F=&&\sum_{i\in S}\lambda_i(I_i+\xi_i)+\sum_{i\in \Z^2_1}\lambda_i|w_i|^2+\sum_{i\in S}\frac{1}{8\pi^2}(I_i+\xi_i)^2\\ &+&\frac{1}{2\pi^2}\sum_{i,j\in S,i\neq j}(I_i+\xi_i)(I_j+\xi_j)+\frac{1}{2\pi^2}\sum_{i\in S,j\in \Z^2_1}(I_i+\xi_i)|w_j|^2\\ &+&\frac{1}{2\pi^2}\sum_{n\in {\cal L}_1}\sqrt{(I_i+\xi_i)(I_j+\xi_j)}w_n\bar w_me^{i\theta_i-i\theta_j}\\ &+&\frac{1}{2\pi^2}\sum_{{n'}\in {\cal L}_2}\sqrt{(I_{i'}+\xi_{i'})(I_{j'}+\xi_{j'})}w_{n'} w_{m'}e^{-i\theta_{i'}-i\theta_{j'}}\\ &+&\frac{1}{2\pi^2}\sum_{{n'}\in {\cal L}_2}\sqrt{(I_{i'}+\xi_{i'})(I_{j'}+\xi_{j'})}\bar w_{n'}\bar w_{m'}e^{i\theta_{i'}+i\theta_{j'}}\\ &+&O(\xi^{\frac12}\|w\|^3_\rho+\|w\|^4_\rho+\xi^3+\xi^{\frac52}\|w\|_\rho+\xi^2\|w\|_\rho^2 +\xi^{\frac32}\|w\|_\rho^3).\\ =&&\sum_{i\in S}\lambda_iI_i+\sum_{i\in \Z^2_1}\lambda_i|w_i|^2+\sum_{i\in S}\frac{1}{4\pi^2}\xi_i I_i +\sum_{i,j\in S,i\neq j}\frac{1}{2\pi^2}\xi_iI_j+\sum_{i\in S,j\in \Z^2_1}\frac{1}{2\pi^2}\xi_i|w_j|^2\\ &+&\frac{1}{2\pi^2}\sum_{n\in {\cal L}_1}\sqrt{\xi_i\xi_j}w_n\bar w_me^{i\theta_i-i\theta_j}\\ &+&\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}w_{n'} w_{m'}e^{-i\theta_{i'}-i\theta_{j'}}\\ &+&\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2} \sqrt{\xi_{i'}\xi_{j'}}\bar w_{n'} \bar w_{m'}e^{i\theta_{i'}+i\theta_{j'}}\\ &+&O(|I|^2+|I|\|w\|^2_\rho+\xi^{\frac{1}{2}}\|w\|^3_\rho+\|w\|^4_\rho+\xi^3+\xi^{\frac{5}{2}}\|w\|_\rho+\xi^2\|w\|^2_\rho+\xi^{\frac{3}{2}}\|w\|^3_\rho)\\ =&&N+{\cal A}+{\cal B}+\bar{\cal B}+P \end{eqnarray*} where $$N=\sum_{i\in S}\lambda_iI_i+\sum_{j\in \Z^2_1}\lambda_j|w_j|^2-\sum_{i\in S}\frac{1}{4\pi^2}\xi_i I_i +\sum_{i,j\in S}\frac{1}{2\pi^2}\xi_iI_j+\sum_{i\in S,j\in \Z^2_1}\frac{1}{2\pi^2}\xi_i|w_j|^2$$ $${\cal A}=\frac{1}{2\pi^2}\sum_{n\in {\cal L}_1}\sqrt{\xi_i\xi_j}w_n\bar w_me^{i\theta_i-i\theta_j}$$ $${\cal B}=\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}w_{n'} w_{m'}e^{-i\theta_{i'}-i\theta_{j'}}$$ $$\bar{\cal B}=\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2} \sqrt{\xi_{i'}\xi_{j'}}\bar w_{n'} \bar w_{m'}e^{i\theta_{i'}+i\theta_{j'}}$$ By the scaling in time $$\xi\rightarrow \varepsilon^3\xi,I\rightarrow \varepsilon^5I,\theta\rightarrow \theta,w\rightarrow\varepsilon^{\frac52}w,\bar w\rightarrow\varepsilon^{\frac52}\bar w$$ we finally arrive at the rescaled Hamiltonian $$H=\varepsilon^{-8}H(\varepsilon^3\xi,\varepsilon^5I,\theta,\varepsilon^{\frac52}w,\varepsilon^{\frac52}\bar w)=\la \omega,I\ra+\la \Omega w,w\ra+{\cal A}+{\cal B}+\bar{\cal B}+P$$ where $$ \left\{ \begin{array}{lcl} \omega_i(\xi)=\displaystyle\varepsilon^{-3}|i|^2-\frac{1}{4\pi^2}\xi_i+\sum_{j\in S}\frac{1}{2\pi^2}\xi_j\\ \Omega_n=\displaystyle\varepsilon^{-3}|n|^2+\sum_{j\in S}\frac{1}{2\pi^2}\xi_j \end{array} \right. 
$$ $${\cal A}=\frac{1}{2\pi^2}\sum_{n\in {\cal L}_1}\sqrt{\xi_i\xi_j}w_n\bar w_me^{i\theta_i-i\theta_j}$$ $${\cal B}=\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}w_{n'} w_{m'}e^{-i\theta_{i'}-i\theta_{j'}}$$ $$\bar{\cal B}=\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2} \sqrt{\xi_{i'}\xi_{j'}}\bar w_{n'} \bar w_{m'}e^{i\theta_{i'}+i\theta_{j'}}$$ \begin{eqnarray} |P|=&&O(\varepsilon^2|I|^2+\varepsilon^2|I|\|w\|^2_\rho+\varepsilon\xi^{\frac{1}{2}}\|w\|^3_\rho+\varepsilon^2\|w\|^4_\rho+\varepsilon\xi^3\nonumber\\ &&+\varepsilon^2\xi^{\frac{5}{2}}\|w\|_\rho+\varepsilon^3\xi^2\|w\|^2_\rho+\varepsilon^4\xi^{\frac{3}{2}}\|w\|^3_\rho). \end{eqnarray}\qed We will show that, by a nonlinear symplectic coordinate transformation, the normal form in Proposition \ref{P1} can be transformed into a more elegant form. For this purpose, we need the following lemma from \cite{XY}. \begin{Lemma}\label{symchange} For any $k_1,k_2,\cdots,k_m\in\Z^b$ and any non-singular $m\times m$ matrix $S$ with $S^T\bar S=I$, the map $\Phi_0:(\theta,I,w,\bar w)\rightarrow(\theta_+,I_+,z,\bar z)$ defined by $$ \left\{ \begin{array}{lcl} \theta_+=\theta\\ I_+=I-\sum_{j=1}^{m}w_j\bar w_jk_j\\ z=SEw\\ \bar z=\bar S\bar E\bar w \end{array} \right. $$ is symplectic, with the diagonal matrix $$E=E(k_1,k_2,\cdots,k_m)=diag(e^{i\la k_1,\theta\ra},e^{i\la k_2,\theta\ra},\cdots,e^{i\la k_m,\theta\ra}).$$ \end{Lemma} For the proof of the above lemma we refer to \cite{XY}.\\ We now apply a nonlinear symplectic coordinate transformation $\Phi$ of this type (for a suitable matrix $S$): $$ \left\{ \begin{array}{lcl} \theta_+=\theta\\ I_+=I-\displaystyle\sum_{n\in {\cal L}_1}(w_n\bar w_ne_i+w_m\bar w_me_j)+\sum_{n'\in {\cal L}_2}(w_{n'}\bar w_{n'}e_{i'}+w_{m'}\bar w_{m'}e_{j'})\\ {\left( \begin{array}{cccc} z_n\\ z_m \end{array} \right)}=S{\left( \begin{array}{cccc} e^{i\la k_i,\theta\ra} & 0 \\ 0& e^{i\la k_j,\theta\ra} \end{array} \right)}{\left( \begin{array}{cccc} w_n\\ w_m \end{array} \right)}, {\left( \begin{array}{cccc} \bar z_n\\ \bar z_m \end{array} \right)}=\bar S{\left( \begin{array}{cccc} e^{-i\la k_i,\theta\ra} & 0 \\ 0& e^{-i\la k_j,\theta\ra} \end{array} \right)}{\left( \begin{array}{cccc} \bar w_n\\ \bar w_m \end{array} \right)},n\in {\cal L}_1\\ z_{n'}=w_{n'}e^{-i\theta_{i'}},\bar z_{n'}=\bar w_{n'}e^{i\theta_{i'}};z_{m'}=w_{m'}e^{-i\theta_{j'}},\bar z_{m'}=\bar w_{m'}e^{i\theta_{j'}},n'\in {\cal L}_2\\ z_n=w_n,\bar z_n=\bar w_n,n\in \Z^2_1\setminus({\cal L}_1\cup{\cal L}_2) \end{array} \right.
$$ We obtain a Hamiltonian system with the Hamiltonian \begin{eqnarray}\label{Hamiltoniann} H\circ\Psi\circ\Phi&=&\la\omega(\xi), I_+\ra+\sum_{{n\in \Z^2_1\setminus({\cal L}_1\cup{\cal L}_2)}} \Omega_n(\xi)z_n\bar z_n\nonumber\\ &+&\sum_{{n\in {\cal L}_1}}[ (\varepsilon^{-3}(|n|^2+|i|^2)+\sum_{j\in S}\frac{1}{\pi^2}\xi_j-\frac{1}{8\pi^2}(\xi_i+\xi_j)+\frac{1}{8\pi^2}\sqrt{\xi^2_i+14\xi_i\xi_j+\xi_j^2})z_n\bar z_n\nonumber\\ &+&(\varepsilon^{-3}(|m|^2+|j|^2)+\sum_{j\in S}\frac{1}{\pi^2}\xi_j-\frac{1}{8\pi^2}(\xi_i+\xi_j)-\frac{1}{8\pi^2}\sqrt{\xi^2_i+14\xi_i\xi_j+\xi_j^2})z_m\bar z_m]\nonumber\\ &+&\sum_{{n'\in {\cal L}_2}}[ (\Omega_{n'}-\omega_{i'})z_{n'}\bar z_{n'}+(\Omega_{m'}-\omega_{j'})z_{m'}\bar z_{m'}]\nonumber\\ &+&\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}z_{n'} z_{m'}+\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}\bar z_{n'} \bar z_{m'}\nonumber\\ &+&P(\theta_+, I_+, z, \bar z, \xi)\nonumber\\ =&&N+{\cal B}+\bar{\cal B}+P \end{eqnarray} where \begin{eqnarray*}N&=&\la\omega(\xi), I_+\ra+\sum_{{n\in \Z^2_1\setminus{\cal L}_2}} \Omega_n(\xi)z_n\bar z_n\\ &+&\sum_{{n'\in {\cal L}_2}}[ (\Omega_{n'}-\omega_{i'})z_{n'}\bar z_{n'}+(\Omega_{m'}-\omega_{j'})z_{m'}\bar z_{m'}] \end{eqnarray*} $$ \left\{ \begin{array}{lcl} \omega_i(\xi)=\displaystyle\varepsilon^{-3}|i|^2-\frac{1}{4\pi^2}\xi_i+\sum_{j\in S}\frac{1}{2\pi^2}\xi_j\\ \Omega_n=\displaystyle\varepsilon^{-3}|n|^2+\sum_{j\in S}\frac{1}{2\pi^2}\xi_j,{n\in \Z^2_1\setminus{\cal L}_1}\\ \Omega_n=\displaystyle\varepsilon^{-3}(|n|^2+|i|^2)+\sum_{j\in S}\frac{1}{\pi^2}\xi_j-\frac{1}{8\pi^2}(\xi_i+\xi_j)+\frac{1}{8\pi^2}\sqrt{\xi^2_i+14\xi_i\xi_j+\xi_j^2},n\in{\cal L}_1\\ \Omega_m=\varepsilon^{-3}(|m|^2+|j|^2)+\sum_{j\in S}\frac{1}{\pi^2}\xi_j-\frac{1}{8\pi^2}(\xi_i+\xi_j)-\frac{1}{8\pi^2}\sqrt{\xi^2_i+14\xi_i\xi_j+\xi_j^2},n\in{\cal L}_1 \end{array} \right. $$ $${\cal B}=\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}z_{n'} z_{m'}$$ $$\bar{\cal B}=\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2} \sqrt{\xi_{i'}\xi_{j'}}\bar z_{n'} \bar z_{m'}$$ For notational simplicity, $I,\theta,H$ refer to $I_+,\theta_+,H\circ\Psi\circ\Phi$ in what follows. Here $P$ is just $G$ with the $(q_{i_1}, \cdots, q_{i_b}, \bar q_{i_1}, \cdots, \bar q_{i_b}, q_n, \bar q_n )$-variables expressed in terms of the $(\theta,I, z_n,\bar z_n)$ variables. \begin{eqnarray} H=&&\la \omega,I\ra+\sum_{[n]} \la A_{[n]} z_{[n]}, \bar z_{[n]}\ra\label{Am1}\\ &+&\sum_{{n'\in {\cal L}_2}}[ (\Omega_{n'}-\omega_{i'})z_{n'}\bar z_{n'}+(\Omega_{m'}-\omega_{j'})z_{m'}\bar z_{m'}]\nonumber\\ &+&\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}z_{n'} z_{m'}+\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}\bar z_{n'} \bar z_{m'}\nonumber\\ &+&P(\theta, I, z, \bar z, \xi)\nonumber\\ =&&N+{\cal B}+\bar{\cal B}+P\nonumber \end{eqnarray} \noindent where $A_{[n]}$ is the $\sharp [n]\times\sharp [n]$ matrix in (\ref{Am1}), $$A_{[n]}=\Omega_{[n]}+(P_{i j }^{011})_{i\in [n],j\in [n]}=(\Omega_{i j }+P_{i j }^{011})_{i\in [n],j\in [n]}$$ where $\Omega_{i j }=0$ if $i\neq j$; $\Omega_{i j }=\Omega_i$ if $i = j$; and $P_{i j }^{011}=0$ when $|i-j|> K$. Next let us verify that $H=N+{\cal B}+\bar{\cal B}+P$ satisfies the assumptions $(A1)$--$(A5)$.
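\sss \noindent {\bf Example } Before doing so, we illustrate the block notation in (\ref{Am1}) with a minimal example (the block below is chosen purely for illustration). Suppose a block consists of two sites, $[n]=\{n_1,n_2\}$ with $|n_1|=|n_2|$ and $|n_1-n_2|\leq\Delta$. Then $z_{[n]}=(z_{n_1},z_{n_2})^T$ and $$\la A_{[n]} z_{[n]},\bar z_{[n]}\ra=\sum_{i,j\in\{n_1,n_2\}}(\Omega_{ij}+P^{011}_{ij})z_j\bar z_i =\Omega_{n_1}z_{n_1}\bar z_{n_1}+\Omega_{n_2}z_{n_2}\bar z_{n_2}+P^{011}_{n_1n_2}z_{n_2}\bar z_{n_1}+P^{011}_{n_2n_1}z_{n_1}\bar z_{n_2},$$ so the off-diagonal entries of $A_{[n]}$ couple modes within a single cluster only, which is what makes the block-wise solution of the homological equations below possible.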
\noindent {\it Verification of $(A1)$}: $$ \frac{\partial \omega}{\partial \xi}=\frac{1}{4\pi^2}{\left( \begin{array}{cccc} 1 & 2 & \cdots & 2\\ 2& 1 & \cdots & 2\\ \vdots & \vdots & \ddots &\vdots\\ 2 & 2 & \cdots & 1 \end{array} \right)}_{b\times b}=A. $$ It is easy to check that $\det A\neq 0$: indeed $A$ has the simple eigenvalue $\frac{2b-1}{4\pi^2}$ and the eigenvalue $-\frac{1}{4\pi^2}$ of multiplicity $b-1$, so $\det A=(\frac{1}{4\pi^2})^b(2b-1)(-1)^{b-1}$. Thus $(A1)$ is verified. \noindent {\it Verification of $(A2)$}: Take $a=3$; the claim is then obvious.\\ \noindent {\it Verification of $(A3)$}: This part is the same as in \cite{GXY}; for the sake of completeness, we rewrite it as follows. In the following, we only give the proof for the most complicated case. Let $${\cal A}_n=A_{[n]},\quad n\in\Z_1^2\backslash{{\cal L}_2}$$ $$ {\cal A}_n=\left( \begin{array}{cccc} \Omega_{n}-\omega_{i} & -\frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}\\ \frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}& -(\Omega_{m}-\omega_{j}) \end{array} \right),\quad n\in {\cal L}_2$$ where $(m,i,j)$ is uniquely determined by $n$. We only verify $(A3)$ for $det(\la k,\omega\ra I\pm{\cal A}_n\otimes I_2 \pm I_2\otimes {\cal A}_{n'})$, which is the most complicated case. Let $A,B$ be $2\times2$ matrices; we know that $\lambda I+A\otimes I-I\otimes B=(\lambda I+A)\otimes I-I\otimes B$. Moreover, we have \begin{Lemma}\label{matrices} $$|A\otimes I\pm I\otimes B|={(|A|-|B|)}^2+|A|{(tr(B))}^2+|B|{(tr(A))}^2\pm(|A|+|B|)tr(A)tr(B)$$ where $|\cdot|$ denotes the determinant of the corresponding matrices. \end{Lemma} Case $1$. $n,n'\in {\cal L}_1$. We consider $$\la k,\omega\ra\pm\Omega_n\pm\Omega_{n'}.$$ Set $\alpha=\varepsilon^{-3}({|i_1|^2},{|i_2|^2},\cdots,{|i_b|^2})$, $\xi=(\xi_{i_1},\xi_{i_2},\cdots,\xi_{i_b})$, $\beta=\frac{1}{4\pi^2}(2,2,\cdots,2)$, and notice that $|n|^2+|i|^2=|m|^2+|j|^2$, $|n'|^2+|i'|^2=|m'|^2+|j'|^2$. The eigenvalues are \begin{eqnarray*} &&\la k,\alpha\ra\pm\varepsilon^{-3}(|n|^2+|i|^2)\pm\varepsilon^{-3}(|n'|^2+|i'|^2)+\la Ak\pm2\beta\pm2\beta,\xi\ra\\ &\pm&\frac{1}{8\pi^2}[(-\xi_i-\xi_j\pm\sqrt{{\xi_i}^2+14\xi_i\xi_j+{\xi_j}^2})\pm(-\xi_{i'}-\xi_{j'}\pm\sqrt{{\xi_{i'}}^2+14\xi_{i'}\xi_{j'}+{\xi_{j'}}^2})]. \end{eqnarray*} If $i\neq i'$, none of the eigenvalues vanishes identically, due to the presence of the square root terms.\\ If $i=i'$, and consequently $j=j'$, then if the eigenvalue is \begin{eqnarray*} &&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2+|i|^2)-\varepsilon^{-3}(|n'|^2+|i|^2)+\la Ak+2\beta-2\beta,\xi\ra\\ &+&\frac{1}{8\pi^2}[(-\xi_i-\xi_j+\sqrt{{\xi_i}^2+14\xi_i\xi_j+{\xi_j}^2})-(-\xi_{i}-\xi_{j}+\sqrt{{\xi_{i}}^2+14\xi_{i}\xi_{j}+{\xi_{j}}^2})]\\ &=&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2-|n'|^2)+\la Ak,\xi\ra \end{eqnarray*} then $Ak\neq 0$ for $k\neq0$; if the eigenvalue is \begin{eqnarray*} &&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2+|i|^2)+\varepsilon^{-3}(|n'|^2+|i|^2)+\la Ak+2\beta+2\beta,\xi\ra\\ &+&\frac{1}{8\pi^2}[(-\xi_i-\xi_j+\sqrt{{\xi_i}^2+14\xi_i\xi_j+{\xi_j}^2})+(-\xi_{i}-\xi_{j}-\sqrt{{\xi_{i}}^2+14\xi_{i}\xi_{j}+{\xi_{j}}^2})]\\ &=&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2+|i|^2)+\varepsilon^{-3}(|n'|^2+|i|^2)+\la Ak+2\beta+2\beta,\xi\ra+\frac{1}{4\pi^2}(-\xi_i-\xi_j)\\ &=&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2+|i|^2)+\varepsilon^{-3}(|n'|^2+|i|^2)+\la Ak+2\beta+2\beta+\frac{1}{4\pi^2}(-e_i-e_j),\xi\ra \end{eqnarray*} then, when $Ak+2\beta+2\beta+\frac{1}{4\pi^2}(-e_i-e_j)=0$, all components of $k+e_i+e_j$ are equal and $(2b-1){(k+e_i+e_j)}_1+8=0$ ($b\geq2$); this equation has no integer solutions. Thus none of the eigenvalues vanishes identically. Case $2$. $n\in {\cal L}_1,\ n'\in {\cal L}_2$.
In this case, the eigenvalues of $(\la k,\omega\ra\pm\Omega_n) I\pm{\cal A}_{n'}$ are \begin{eqnarray*} &&\la k,\alpha\ra\pm\varepsilon^{-3}(|n|^2+|i|^2)\pm\varepsilon^{-3}(|n'|^2-|i'|^2)+\la Ak\pm2\beta,\xi\ra\\ &\pm&\frac{1}{8\pi^2}[(-\xi_i-\xi_j\pm\sqrt{{\xi_i}^2+14\xi_i\xi_j+{\xi_j}^2})\pm(\xi_{i'}-\xi_{j'}\pm\sqrt{{\xi_{i'}}^2-14\xi_{i'}\xi_{j'}+{\xi_{j'}}^2})]. \end{eqnarray*} Because $\sqrt{{\xi_{i'}}^2-14\xi_{i'}\xi_{j'}+{\xi_{j'}}^2}$ has a non--zero imaginary part, there is no small divisor in this case. Case $3$. $n,n'\in {\cal L}_2$. In this case, the eigenvalues of $\la k,\omega\ra I\pm{\cal A}_n\otimes I_2 \pm I_2\otimes {\cal A}_{n'}$ are \begin{eqnarray*} &&\la k,\alpha\ra\pm\varepsilon^{-3}(|n|^2-|i|^2)\pm\varepsilon^{-3}(|n'|^2-|i'|^2)+\la Ak,\xi\ra\\ &\pm&\frac{1}{8\pi^2}[(\xi_i-\xi_j\pm\sqrt{{\xi_i}^2-14\xi_i\xi_j+{\xi_j}^2})\pm(\xi_{i'}-\xi_{j'}\pm\sqrt{{\xi_{i'}}^2-14\xi_{i'}\xi_{j'}+{\xi_{j'}}^2})]. \end{eqnarray*} If $i\neq i'$, none of the eigenvalues vanishes identically, due to the presence of the square root terms.\\ If $i=i'$, and consequently $j=j'$, then if the eigenvalue is \begin{eqnarray*} &&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2-|i|^2)-\varepsilon^{-3}(|n'|^2-|i|^2)+\la Ak,\xi\ra\\ &+&\frac{1}{8\pi^2}[(\xi_i-\xi_j+\sqrt{{\xi_i}^2-14\xi_i\xi_j+{\xi_j}^2})-(\xi_{i}-\xi_{j}+\sqrt{{\xi_{i}}^2-14\xi_{i}\xi_{j}+{\xi_{j}}^2})]\\ &=&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2-|n'|^2)+\la Ak,\xi\ra \end{eqnarray*} then $Ak\neq 0$ for $k\neq0$; if the eigenvalue is \begin{eqnarray*} &&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2-|i|^2)+\varepsilon^{-3}(|n'|^2-|i|^2)+\la Ak,\xi\ra\\ &+&\frac{1}{8\pi^2}[(\xi_i-\xi_j+\sqrt{{\xi_i}^2-14\xi_i\xi_j+{\xi_j}^2})+(\xi_{i}-\xi_{j}-\sqrt{{\xi_{i}}^2-14\xi_{i}\xi_{j}+{\xi_{j}}^2})]\\ &=&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2-|i|^2)+\varepsilon^{-3}(|n'|^2-|i|^2)+\la Ak,\xi\ra+\frac{1}{4\pi^2}(\xi_i-\xi_j)\\ &=&\la k,\alpha\ra+\varepsilon^{-3}(|n|^2-|i|^2)+\varepsilon^{-3}(|n'|^2-|i|^2)+\la Ak+\frac{1}{4\pi^2}(e_i-e_j),\xi\ra \end{eqnarray*} then, when $Ak+\frac{1}{4\pi^2}(e_i-e_j)=0$, all components of $k-e_i+e_j$ are equal and $(2b-1){(k-e_i+e_j)}_1=0$ ($b\geq2$); the only integer solution of this equation is $k=e_i-e_j$. In that case, when $|n|\neq|m'|$, \begin{eqnarray*} &&\la e_i-e_j,\alpha\ra+\varepsilon^{-3}(|n|^2-|i|^2)+\varepsilon^{-3}(|n'|^2-|i|^2)\\ &=&\varepsilon^{-3}(|i|^2-|j|^2+|n|^2-|i|^2+(-|m'|^2+|j|^2))\\ &=&\varepsilon^{-3}(|n|^2-|m'|^2)\neq0 \end{eqnarray*} Thus none of the eigenvalues vanishes identically. In the other cases, the proof is similar, so we omit it. Due to Lemma \ref{matrices}, $det(\la k,\omega\ra I\pm{\cal A}_n\otimes I_2 \pm I_2\otimes {\cal A}_{n'})$ is a polynomial function in $\xi$ of degree at most four.
Thus $$|\partial^4_\xi(det(\la k,\omega\ra I\pm{\cal A}_n\otimes I_2 \pm I_2\otimes {\cal A}_{n'}))|\geq \frac12|k|\neq0.$$ By excluding a parameter set of measure $O(\gamma^{\frac14})$, we have $$|det(\la k,\omega\ra I\pm{\cal A}_n\otimes I_2 \pm I_2\otimes {\cal A}_{n'})|\geq \frac{\gamma}{K^\tau},\quad k\neq 0,\ n,n'\in{\cal L}_2.$$ Thus $(A3)$ is verified.\\ \noindent {\it Verification of $(A4)$}: For given $0<r<1$ and $s=\varepsilon^{\frac 12}$, according to Lemma \ref{regularityGG}, $\|G_{\bar q}\|_\rho\leq c\|q\|_\rho^3$, and then $$\sum_{n\in \Z_1^2}\|P_{w_n}\|_{\Cal O}e^{|n|\rho}+\sum_{n\in \Z_1^2}\|P_{\bar w_n}\|_{\Cal O}e^{|n|\rho}=\|P_w\|_\rho+\|P_{\bar w}\|_\rho\leq c\|q\|_\rho^3\leq c(|I|^{\frac 32}+\|w\|_\rho^3).$$ In addition, $$\sup_{\|q\|_\rho<2s}\|G\|_{\Cal O}\leq c\sup_{\|q\|_\rho<2s}\|q\|_\rho^4\le cs^4,$$ thus $$\|P\|_{D_\rho(2r,2s),\Cal O}=\sup_{D_\rho(2r,2s)}\|P\|_{\Cal O}\leq cs^4.$$ According to Cauchy estimates, $$\|P_I\|_{D_\rho(r,s),\Cal O}\leq cs^2, \quad \|P_\theta\|_{D_\rho(r,s),\Cal O}\leq cs^4.$$ Hence \begin{eqnarray*} \|X_P\|_{D_\rho(r,s),\Cal O}&=& \|P_I\|_{ D_\rho(r,s) , \Cal O}+ \frac 1{s^2}\|P_\theta\|_{ D_\rho(r,s) , \Cal O}\\ &+&\sup_{D_\rho(r,s)}[ \frac 1s\sum_{n\in\Z_1^2} \|P_{w_n}\|_{\cal O}e^{|n|\rho}+ \frac 1s\sum_{n\in\Z_1^2} \|P_{\bar w_n}\|_{\cal O} e^{|n|\rho} ]\\ &\leq&cs^2+\frac{cs^4}{s^2}+c\sup_{D_\rho(r,s)}\frac 1s(|I|^{\frac 32}+\|z\|_\rho^3)\\ &\leq&cs^2\leq c\varepsilon. \end{eqnarray*} Thus $(A4)$ is verified.\\ \noindent {\it Verification of $(A5)$}: We only need to check that $P$ satisfies $(A5)$. Recall $(\ref{P12})$. $F$ is given as $$F=\sum_{{{i-j+n-m=0}\atop{|i|^2-|j|^2+|n|^2-|m|^2\neq0}}\atop\sharp S\cap\{i,j,n,m\}\geq2}\frac{i}{8\pi^2(\lambda_i-\lambda_j+\lambda_n-\lambda_m)}q_i\bar q_jw_n\bar w_m.$$ Then for $t$ large enough and any $c\in \Z^2\setminus\{0\}$, we have \begin{eqnarray*} &&\sum_{i,j,n,m,t}\frac{i}{8\pi^2(\lambda_i-\lambda_j+\lambda_{n+tc}-\lambda_{m+tc})}q_i\bar q_jw_{n+tc}\bar w_{m+tc}\\ &=&\sum_{i,j,n,m,t}\frac{i}{8\pi^2(|i|^2-|j|^2+|n|^2-|m|^2+2t\la n-m,c\ra)}q_i\bar q_jw_{n+tc}\bar w_{m+tc}. \end{eqnarray*} Hence, when $\la n-m,c\ra=0$, $$\frac{\partial^2 F}{\partial w_{n+tc}\partial \bar w_{m+tc}}=\frac{\partial^2 F}{\partial w_{n}\partial \bar w_{m}};$$ when $\la n-m,c\ra\neq 0$, $$\|\frac{\partial^2 F}{\partial w_{n+tc}\partial \bar w_{m+tc}}-0\|\leq\frac{\varepsilon}{|t|}e^{-|n-m|\rho}.$$ Similarly, $$\|\frac{\partial^2 F}{\partial w_{n+tc}\partial w_{m-tc}}-\lim_{t\to \infty}\frac{\partial^2 F}{\partial w_{n+tc}\partial w_{m-tc}}\|,\|\frac{\partial^2 F}{\partial \bar w_{n+tc}\partial \bar w_{m-tc}}-\lim_{t\to \infty}\frac{\partial^2 F}{\partial \bar w_{n+tc}\partial \bar w_{m-tc}}\|\leq\frac{\varepsilon}{|t|}e^{-|n+m|\rho}.$$ That is to say, $F$ satisfies the T\"{o}plitz-Lipschitz property. Recalling the construction of the Hamiltonian (\ref{PH}), we only need to check that $\{G,F\}$ also satisfies the T\"{o}plitz-Lipschitz property. Lemma \ref{toplitz} in the next section shows that the Poisson bracket preserves the T\"{o}plitz-Lipschitz property, so $N+{\cal B}+\bar{\cal B}+P$ satisfies $(A5)$. Thus $(A5)$ is verified.\\ So we have verified all the assumptions of Theorem \ref{KAM} for (\ref{Hamiltoniann}). By applying Theorem \ref{KAM}, we obtain Theorem \ref{main}. \section{ KAM Step} Theorem \ref{KAM} will be proved by a KAM iteration which involves an infinite sequence of changes of variables.
Each step of the KAM iteration makes the perturbation smaller than that of the previous step, at the cost of excluding a small set of parameters and of contracting the weight. We have to prove the convergence of the iteration and estimate the measure of the excluded set after infinitely many KAM steps. At the $\nu$--th step of the KAM iteration, we consider the Hamiltonian function $$ H_\nu=N_\nu+{\cal B}_\nu+\bar{\cal B}_\nu+ P_\nu,$$ where $N_\nu$ is an ``integrable normal form'' and ${\cal B}_\nu+\bar{\cal B}_\nu+ P_\nu$ is defined in $D_{\rho_\nu}(r_\nu, s_\nu)\times \Cal O_{\nu}$ and satisfies $(A1)$--$(A5)$.\\ Our goal is to construct a map $$\Phi_\nu: D_{\rho_\nu}(r_{\nu+1}, s_{\nu+1})\times\Cal O_{\nu} \to D_{\rho_\nu}(r_{\nu}, s_{\nu})\times\Cal O_{\nu}$$ such that \begin{equation}\label{4.P2} H_{\nu+1}=H_\nu\circ\Phi_\nu=N_{\nu+1}+{\cal B}_{\nu+1}+\bar{\cal B}_{\nu+1}+ P_{\nu+1} \end{equation} satisfies all the above iterative assumptions $(A1)$--$(A5)$ on $D_{\rho_{\nu+1}}(r_{\nu+1}, s_{\nu+1})\times\Cal O_{\nu}$. Moreover, $$ \|X_{P_{\nu+1}}\|_{D_{\rho_{\nu+1}}(r_{\nu+1}, s_{\nu+1}), \Cal O_{\nu}}=\|X_{H_\nu\circ\Phi_\nu}-X_{N_{\nu+1}+{\cal B}_{\nu+1}+\bar{\cal B}_{\nu+1}}\|_{D_{\rho_{\nu+1}}(r_{\nu+1}, s_{\nu+1}), \Cal O_{\nu}}\leq\varepsilon_{\nu+1}. $$ \sss To simplify notations, in what follows, the quantities without subscripts and superscripts refer to quantities at the $\nu^{\rm th}$ step, while the quantities with subscript $+$ or superscript $+$ denote the corresponding quantities at the $(\nu+1)^{\rm th}$ step. Let us then consider the Hamiltonian \begin{eqnarray} H=&&N+{\cal B}+\bar{\cal B}+P\nonumber\\ =&&\la \omega,I\ra+\sum_{[n]} \la A_{[n]} z_{[n]}, \bar z_{[n]}\ra\label{Am}\\ &+&\sum_{{n'\in {\cal L}_2}}[ (\Omega_{n'}-\omega_{i'})z_{n'}\bar z_{n'}+(\Omega_{m'}-\omega_{j'})z_{m'}\bar z_{m'}]\nonumber\\ &+&\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}z_{n'} z_{m'}+\frac{1}{2\pi^2}\sum_{n'\in {\cal L}_2}\sqrt{\xi_{i'}\xi_{j'}}\bar z_{n'} \bar z_{m'}\nonumber\\ &+&P(\theta, I, z, \bar z, \xi)\nonumber \end{eqnarray}\noindent defined in $D_\rho(r, s)\times\Cal O$. We assume that, for $|k|\le K$, \[ |\langle k,\omega\rangle|\ge \frac{\gamma}{K^\tau},\quad k\neq 0,\] \[|\langle k,\omega\rangle \pm\widetilde{\lambda}_j|\ge \frac{\gamma}{K^\tau},\quad j\in{[n]}, \] \[|\langle k,\omega\rangle \pm \widetilde{\lambda}_i\pm \widetilde{\lambda}_j|\ge \frac{\gamma}{K^\tau},\quad i\in{[m]},\ j\in{[n]},\] where $\widetilde{\lambda}_i,\widetilde{\lambda}_j$ are the eigenvalues of $A_{[m]}$ and $A_{[n]}$, respectively. \noindent $$|det(\la k,\omega\ra I\pm{\cal A}_n\otimes I_2 \pm I_2\otimes {\cal A}_{n'})|\geq \frac{\gamma}{K^\tau},\quad k\neq 0,\ n,n'\in{\cal L}_2,$$ where $$ {\cal A}_n=\left( \begin{array}{cccc} \Omega_{n}-\omega_{i} & -\frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}\\ \frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}& -(\Omega_{m}-\omega_{j}) \end{array} \right),\quad n\in {\cal L}_2$$ where $(n,m)$ are resonant pairs and $(i,j)$ is uniquely determined by $(n,m)$ in ${\cal L}_2$.\\ Moreover, $N+{\cal B}+\bar{\cal B}+P$ satisfies $(A4)$ and $(A5)$. \sss \noindent {\bf Remark } The assumption $(A5)$ makes the measure estimate available at each KAM step. We now let $0<r_+<r$ and define \begin{equation}\label{4.51} s_+=\frac 14s\varepsilon^{\frac 13}, \quad \varepsilon_+=c\gamma^{-5}K^{5(\tau+1)}(r-r_+)^{-c} \varepsilon^{\frac {4}{3}}. \end{equation} Here and later, the letter $c$ denotes suitable (possibly different) constants that do not depend on the iteration steps.
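\sss \noindent {\bf Remark } The choice (\ref{4.51}) produces the usual super--exponential convergence of the KAM scheme. As a heuristic illustration only (we freeze the prefactor, which in fact varies with $\nu$, into a single constant $C$), the recursion $\varepsilon_{\nu+1}=C\varepsilon_\nu^{\frac43}$ gives, with $\delta_\nu=C^{3}\varepsilon_\nu$, $$\delta_{\nu+1}=C^{4}\varepsilon_\nu^{\frac43}=(C^{3}\varepsilon_\nu)^{\frac43}=\delta_\nu^{\frac43}, \qquad\hbox{hence}\qquad \varepsilon_\nu=C^{-3}\left(C^{3}\varepsilon_0\right)^{(\frac43)^{\nu}},$$ so $\varepsilon_\nu\to0$ super--exponentially as soon as $C^{3}\varepsilon_0<1$; this is, in essence, the smallness condition on $\|X_P\|$ in Theorem \ref{KAM}.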
We now describe how to construct a set $\Cal O_+\subset \Cal O$ and a change of variables $\Phi: D_+\times\Cal O_+=D_\rho(r_+, s_+)\times \Cal O_+\to D_\rho(r,s) \times \Cal O$ such that the transformed Hamiltonian $H_+=N_++{\cal B}_++\bar{\cal B}_++P_+\equiv H\circ \Phi$ satisfies all the above iterative assumptions with new parameters $s_+, \varepsilon_+, r_+$ and with $\xi\in \Cal O_+$. \subsection{Solving the linearized equations}\label{4.1} Expand $P$ into the Fourier-Taylor series $$P=\sum_{k,l,\alpha,\beta} P_{kl\alpha\beta}\kth I^lw^\alpha\bar w^\beta$$ where $k\in \Z^b, l\in \N^b$ and the multi--indices $\alpha$ and $\beta $ run over the set of all infinite dimensional vectors $\alpha\equiv (\cdots,\alpha_n,\cdots)_{n\in\Z_1^2}$, $\beta \equiv(\cdots, \beta _n, \cdots)_{n\in\Z_1^2}$ with finitely many nonzero components of positive integers. Let $R$ be the truncation of $P$ given by $$R(\theta,I,z,\bar z)=R_0+R_1+R_2$$ where $$R_0=\sum_{|k|\le K,|l|\le 1} P_{kl00}\kth I^l$$ \begin{eqnarray} R_1&=&\sum_{{|k|\le K,n'\in {\cal L}_2}} ({P}_{n'}^{k10}z_{n'}+{P}_{m'}^{k10} z_{m'}+{P}^{k01}_{n'}\bar z_{n'}+{P}^{k01}_{m'}\bar z_{m'}) \kth \nonumber\\ &+&\sum_{{|k|\le K,[n]}} (\la R_{[n]}^{k10}, z_{[n]}\ra+\la R^{k01}_{[n]},\bar z_{[n]} \ra) \kth \nonumber \end{eqnarray} \begin{eqnarray} R_2&=&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2}({P}^{k11}_{nn'}z_n \bar{z}_{n'}+ {P}^{k11}_{mn'}z_m \bar{z}_{n'}+{P}^{k11}_{nm'}z_n \bar{z}_{m'}+{P}^{k11}_{mm'}z_m\bar{z}_{m'}\nonumber\\ && +{P}^{k11}_{n' n}z_{n'} \bar{z}_n +{P}^{k11}_{m' n}z_{m'} \bar{z}_n+{P}^{k11}_{n' m}z_{n'} \bar{z}_m+{P}^{k11}_{m' m}z_{m'} \bar{z}_m)\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2} ({P}^{k20}_{nn'}z_{n} z_{n'}+{P}^{k20}_{mn'}z_{m} z_{n'}+{P}^{k20}_{nm'}z_{n} z_{m'}+{P}^{k20}_{mm'}z_{m} z_{m'})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2} ({P}^{k02}_{nn'}\bar{z}_{n} \bar{z}_{n'}+{P}^{k02}_{mn'}\bar{z}_{m} \bar{z}_{n'}+{P}^{k02}_{nm'}\bar{z}_{n} \bar{z}_{m'}+{P}^{k02}_{mm'}\bar{z}_{m} \bar{z}_{m'})\kth\nonumber\\ &+&\sum_{|k|\le K,[n],[m]}(\la R^{k20}_{[m][n]}z_{[n]}, z_{[m]}\ra+\la R^{k02}_{[m][n]}\bar z_{[n]},\bar z_{[m]}\ra)\kth\nonumber\\ &+&\sum_{|k|\le K,[n],[m]}\la R^{k11}_{[m][n]}z_{[n]},\bar z_{[m]}\ra\kth\nonumber\\ &+&\sum_{|k|\le K,n\in \Z^2_1\setminus{\cal L}_2,n'\in {\cal L}_2}({P}^{k20}_{nn'}z_nz_{n'}+{P}^{k20}_{nm'}z_nz_{m'})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in \Z^2_1\setminus{\cal L}_2,n'\in {\cal L}_2}({P}^{k02}_{nn'}\bar{z}_n\bar{z}_{n'}+{P}^{k02}_{nm'}\bar{z}_n\bar{z}_{m'})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in \Z^2_1\setminus{\cal L}_2,n'\in {\cal L}_2}({P}^{k11}_{nn'}z_n\bar{z}_{n'}+{P}^{k11}_{nm'}z_n\bar{z}_{m'}+{P}^{k11}_{n'n}z_{n'}\bar{z}_n+{P}^{k11}_{m'n}z_{m'}\bar{z}_n)\kth\nonumber \end{eqnarray} where $P_{n}^{k10}=P_{kl\alpha\beta}$ with $\alpha=e_n, \beta=0$, here $e_n$ denotes the vector with the $n^{\rm th}$ component being $1$ and the other components being zero; $P_{n}^{k01}=P_{kl\alpha\beta}$ with $\alpha=0, \beta=e_n$; $P^{k20}_{nm}=P_{kl\alpha\beta}$ with $\alpha=e_n+e_m, \beta=0$; $P^{k11}_{nm}=P_{kl\alpha\beta}$ with $\alpha=e_n, \beta=e_m$; $P^{k02}_{nm}=P_{kl\alpha\beta}$ with $\alpha=0, \beta=e_n+e_m.$ Here $R_{[n]}^{k10}$, $R_{[n]}^{k01}$, $R_{[m][n]}^{k20}$, $R_{[m][n]}^{k02}$ and $R_{[m][n]}^{k11}$ are, respectively, $\sharp[n]\times 1$, $\sharp[n]\times 1$, $\sharp[m]\times\sharp[n]$, $\sharp[m]\times\sharp[n]$, $\sharp[m]\times\sharp[n]$ matrices: $$R_{[n]}^{k10}=(P_{i}^{k10})_{i\in[n]},\quad R_{[n]}^{k01}=(P_{i}^{k01})_{i\in[n]},\quad |[n]|\leq K,$$
$$R_{[m][n]}^{k20}=(R_{ij}^{k20})_{i\in[m],j\in[n]}$$ where $R_{ij}^{k20}=P_{ij}^{k20}$ if $|i+j|\leq K$ and $R_{ij}^{k20}=0$ if $|i+j|> K$, $$R_{[m][n]}^{k02}=(R_{ij}^{k02})_{i\in[m],j\in[n]}$$ where $R_{ij}^{k02}=P_{ij}^{k02}$ if $|i+j|\leq K$ and $R_{ij}^{k02}=0$ if $|i+j|> K$, $$R_{[m][n]}^{k11}=(R_{ij}^{k11})_{i\in[m],j\in[n]}$$ where $R_{ij}^{k11}=P_{ij}^{k11}$ if $|i-j|\leq K$ and $R_{ij}^{k11}=0$ if $|i-j|> K$. Rewrite $H$ as $ H=N+{\cal B}+\bar{\cal B}+R+(P-R)$. By the choice of $s_+$ in (\ref{4.51}) and the definition of the norms, it follows immediately that \begin{equation} \label{4.9} \|X_R\|_{ D_\rho(r,s) ,\Cal O}\le \| X_P\|_{ D_\rho(r,s) ,\Cal O}\le \varepsilon \end{equation} for any $\frac{r_0}{2}<\rho\leq r$. Next, we prove that for $\frac{r_0}{2}<\rho\leq r_+$, $$\|X_{(P-R)}\|_{ D_\rho(r_+,s) ,\Cal O}<c\varepsilon_+.$$ In fact, $P-R=P^*+h.o.t.$, where \begin{eqnarray*}P^*&=&\sum_{|n|>K}[P^{k10}_n(\theta)w_n+P^{k01}_n(\theta)\bar w_n]\\ &+&\sum_{|n+m|>K}[P^{k20}_{nm}(\theta)w_nw_m+P^{k02}_{nm}(\theta)\bar w_n\bar w_m]+\sum_{|n-m|>K}P^{k11}_{nm}(\theta)w_n\bar w_m \end{eqnarray*} consists of the linear and quadratic terms of the perturbation that are not contained in $R$. By virtue of (\ref{4.51}), the decay property of $P$, $\|X_P\|_{ D_\rho(r,s) ,\Cal O}\leq\varepsilon$, and Cauchy estimates, one has that for $\rho\leq r_+$ \begin{eqnarray*} &&\|X_{P^*}\|_{ D_\rho(r_+,s) ,\Cal O}\\ &\leq &(r-r_+)^{-1}(\sum_{|n|>K}\varepsilon e^{-|n|r}e^{|n|\rho}+\sum_{|n+m|>K}\varepsilon e^{-|n+m|r}|\bar w_m|e^{|n+m|\rho}+\sum_{|n-m|>K}\varepsilon e^{-|n-m|r}|w_m|e^{|n-m|\rho})\\ &\leq &(r-r_+)^{-1}(\sum_{|n|>K}\varepsilon e^{-|n|r}e^{|n|\rho}+\sum_{|n|>K,m}\varepsilon e^{-|n|r}|w_m|e^{|n|\rho}e^{|m|\rho})\\ &\leq &(r-r_+)^{-1}\sum_{|n|>K}\varepsilon e^{-|n|(r-\rho)}\\ &\leq &(r-r_+)^{-1}\varepsilon e^{-K(r-\rho)}\\ &\leq &\varepsilon_+. \end{eqnarray*} Moreover, we take $s_+\ll s$ such that in the domain $D_\rho(r, s_+)$, \begin{equation} \label{4.10} \| X_{(P-R)}\|_ {D_\rho(r, s_+)} \lep \varepsilon_+. \end{equation} \sss In the following, we will look for an $F$, defined in a domain $D_+=D_\rho(r_+, s_+)$, such that the time-one map $\phi^1_F$ of the Hamiltonian vector field $X_F$ defines a map from $D_+\to D$ and transforms $H$ into $H_+$.
More precisely, by the second order Taylor formula, we have \begin{eqnarray} H\circ \phi^1_F &=&(N+{\cal B}+\bar{\cal B}+ R)\circ \phi_F^1+(P-R)\circ \phi^1_F\nonumber\\ &=& N+{\cal B}+\bar{\cal B}+ \{N+{\cal B}+\bar{\cal B},F\}+R\nonumber\\ &+&\int_0^1 (1-t)\{\{N+{\cal B}+\bar{\cal B},F\},F\}\circ \phi_F^{t}dt\nonumber\\ &+&\int_0^1 \{R,F\}\circ \phi_F^{t}dt +(P-R)\circ \phi^1_F \label{4.11}\\ &=& N_++{\cal B}_++\bar{\cal B}_++P_+ +\{N+{\cal B}+\bar{\cal B},F\}+R\nonumber\\ & -&P_{0000}-\la\hat{\omega}, I\ra-\sum_{[n]}\la P_{[n][n]}^{011}z_{[n]},\bar z_{[n]}\ra-\sum_{n'\in {\cal L}_2}({P}_{n'n'}^{011}z_{n'}\bar z_{n'}+{P}_{m'm'}^{011}z_{m'}\bar z_{m'})-\hat{{\cal B}}-\hat{\bar{\cal B}},\nonumber \end{eqnarray} where \[ \hat{\omega}= \int\frac{\partial P}{\partial I}d\theta|_{ z=\bar z= 0, I=0}, \] $$\hat{{\cal B}}=\sum_{n'\in {\cal L}_2}{P}^{020}_{n'm'}z_{n'} z_{m'}$$ $$\hat{\bar{\cal B}}=\sum_{n'\in {\cal L}_2} {P}^{002}_{n'm'}\bar z_{n'} \bar z_{m'}$$ \beq\label{N_+} N_+= N+P_{0000}+\la\hat{\omega}, I\ra+ \sum_{[n]}\la P_{[n][n]}^{011}z_{[n]},\bar z_{[n]}\ra+\sum_{n'\in {\cal L}_2}({P}_{n'n'}^{011}z_{n'}\bar z_{n'}+{P}_{m'm'}^{011}z_{m'}\bar z_{m'}), \eeq \beq\label{B+} {\cal B}_+= {\cal B}+\hat{{\cal B}}, \eeq \beq\label{B1+} \bar{{\cal B}}_+= \bar{{\cal B}}+\hat{\bar{{\cal B}}}= \bar{{\cal B}}+\bar{\hat{{\cal B}}}, \eeq \beq \label{P_+} P_+=\int_0^1 (1-t)\{\{N+{\cal B}+\bar{\cal B},F\},F\}\circ \phi_F^{t}dt+\int_0^1 \{R,F\}\circ \phi_F^{t}dt +(P-R)\circ \phi^1_F. \eeq We shall find a function $F$ of the form $$F(\theta, I, z,\bar z)=F_0+F_1+F_2$$ where $$F_0=\sum_{0<|k|\le K,|l|\le 1} F_{kl00}\kth I^l$$ \begin{eqnarray} F_1&=&\sum_{{|k|\le K,n'\in {\cal L}_2}} ({F}_{n'}^{k10}z_{n'}+{F}_{m'}^{k10} z_{m'}+{F}^{k01}_{n'}\bar z_{n'}+{F}^{k01}_{m'}\bar z_{m'}) \kth \nonumber\\ &+&\sum_{{|k|\le K,[n]}} (\la F_{[n]}^{k10}, z_{[n]}\ra+\la F^{k01}_{[n]},\bar z_{[n]} \ra) \kth \nonumber \end{eqnarray} \begin{eqnarray} F_2&=&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|n-n'|\neq0} ({F}^{k11}_{n'n}z_{n'} \bar{z}_{n} +{F}^{k11}_{nn'}z_{n}\bar{z}_{n'})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|n-m'|\neq0} ({F}^{k11}_{m'n}z_{m'}\bar{z}_{n}+{F}^{k11}_{nm'}z_{n}\bar{z}_{m'} )\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|m-n'|\neq0} ({F}^{k11}_{n'm}z_{n'}\bar{z}_{m}+{F}^{k11}_{mn'}z_{m}\bar{z}_{n'} )\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|m-m'|\neq0} ({F}^{k11}_{m'm}z_{m'}\bar{z}_{m} +{F}^{k11}_{mm'}z_{m}\bar{z}_{m'} )\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|n-m'|\neq0\ {\rm or}\ |k|+|n'-m|\neq0} ({F}^{k20}_{n'n}z_{n'} z_{n}+{F}^{k02}_{n'n}\bar{z}_{n'}\bar{z}_{n})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|m-m'|\neq0\ {\rm or}\ |k|+|n'-n|\neq0} ({F}^{k20}_{m'n}z_{m'}z_{n}+ {F}^{k02}_{m'n}\bar{z}_{m'}\bar{z}_{n})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|m-m'|\neq0\ {\rm or}\ |k|+|n'-n|\neq0} ({F}^{k20}_{n'm}z_{n'}z_{m}+{F}^{k02}_{n'm}\bar{z}_{n'}\bar{z}_{m})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|n-m'|\neq0\ {\rm or}\ |k|+|n'-m|\neq0} ({F}^{k20}_{m'm}z_{m'}z_{m}+{F}^{k02}_{m'm}\bar{z}_{m'}\bar{z}_{m})\kth\nonumber\\ &+&\sum_{|k|\le K,[n],[m]}(\la F^{k20}_{[m][n]}z_{[n]}, z_{[m]}\ra+\la F^{k02}_{[m][n]}\bar z_{[n]},\bar z_{[m]}\ra)\kth\nonumber\\ &+&\sum_{|k|\le K,[n],[m],|k|+ ||n|-|m|| \neq 0} \la F^{k11}_{[m][n]}z_{[n]},\bar z_{[m]}\ra\kth\nonumber\\ &+&\sum_{|k|\le K,n\in \Z^2_1\setminus{\cal L}_2,n'\in {\cal L}_2}({F}^{k20}_{nn'}z_nz_{n'}+{F}^{k20}_{nm'}z_nz_{m'})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in \Z^2_1\setminus{\cal L}_2,n'\in {\cal L}_2}({F}^{k02}_{nn'}\bar{z}_n\bar{z}_{n'}+{F}^{k02}_{nm'}\bar{z}_n\bar{z}_{m'})\kth\nonumber\\ &+&\sum_{|k|\le K,n\in \Z^2_1\setminus{\cal L}_2,n'\in {\cal L}_2}({F}^{k11}_{nn'}z_n\bar{z}_{n'}+{F}^{k11}_{nm'}z_n\bar{z}_{m'}+{F}^{k11}_{n'n}z_{n'}\bar{z}_n+{F}^{k11}_{m'n}z_{m'}\bar{z}_n)\kth\nonumber \end{eqnarray} \noindent where $F_{[n]}^{k10}$, $F_{[n]}^{k01}$, $F_{[m][n]}^{k20}$, $F_{[m][n]}^{k02}$ and $F_{[m][n]}^{k11}$ are, respectively, $\sharp[n]\times 1, \sharp[n]\times 1, \sharp[m]\times\sharp[n], \sharp[m]\times\sharp[n], \sharp[m]\times\sharp[n]$ matrices $$F_{[n]}^{k10}=(F_{i}^{k10})_{i\in[n]},\quad F_{[n]}^{k01}=(F_{i}^{k01})_{i\in[n]},\quad |[n]|\leq K,$$ $$F_{[m][n]}^{k20}=(f_{ij}^{k20})_{i\in[m],j\in[n]}$$ where $f_{ij}^{k20}=F_{ij}^{k20}$ if $|i+j|\leq K$ and $f_{ij}^{k20}=0$ if $|i+j|> K$, $$F_{[m][n]}^{k02}=(f_{ij}^{k02})_{i\in[m],j\in[n]}$$ where $f_{ij}^{k02}=F_{ij}^{k02}$ if $|i+j|\leq K$ and $f_{ij}^{k02}=0$ if $|i+j|> K$, $$F_{[m][n]}^{k11}=(f_{ij}^{k11})_{i\in[m],j\in[n]}$$ where $f_{ij}^{k11}=F_{ij}^{k11}$ if $|i-j|\leq K$ and $f_{ij}^{k11}=0$ if $|i-j|> K$. The function $F$ is required to satisfy the equation \begin{equation}\label{4.13}\begin{array}{rlcl} &\{N+{\cal B}+\bar{\cal B},F\}+R-P_{0000}-\la\hat{\omega}, I\ra-\displaystyle\sum_{[n]}\la P_{[n][n]}^{011}z_{[n]},\bar z_{[n]}\ra-\hat{{\cal B}}-\hat{\bar{\cal B}}\\&-\displaystyle\sum_{n'\in {\cal L}_2}({P}_{n'n'}^{011}z_{n'}\bar z_{n'}+{P}_{m'm'}^{011}z_{m'}\bar z_{m'})=0. \end{array} \end{equation} \begin{Lemma}\label{Lem4.01} $F$ satisfies (\ref{4.13}) if the Fourier coefficients of $F_0,F_1$ are defined by the following equations \beq\label{4.14}\begin{array}{rlcl} &(\la k,\omega\ra )F_{kl00}&=& {\rm i} P_{kl00}, \quad |l|\le 1,0<|k|\le K, \\ &(\la k,\omega\ra I - A_{[n]})F^{k10}_{[n]}&=&{\rm i} P^{k10}_{[n]},\quad |k|\le K,n\in \Z^2_1\setminus{\cal L}_2,\\ &(\la k,\omega\ra I + A_{[n]})F^{k01}_{[n]}&=&{\rm i} R^{k01}_{[n]},\quad |k|\le K,n\in \Z^2_1\setminus{\cal L}_2,\\ &(\la k,\omega\ra I-{\cal A}_{n'})({F}_{n'}^{k10},{F}_{m'}^{k01})^T&=&i({P}_{n'}^{k10},{P}_{m'}^{k01})^T,|k|\le K,n'\in {\cal L}_2,\\ &(\la k,\omega\ra I+{\cal A}_{n'})({F}_{n'}^{k01},{F}_{m'}^{k10})^T&=&i({P}_{n'}^{k01},{P}_{m'}^{k10})^T,|k|\le K,n'\in {\cal L}_2.
\end{array}\eeq \end{Lemma} The Fourier coefficients of $F_2$ are defined by the following lemmas.\\ {\bf Case 1: $n,m\in \Z^2_1\setminus{\cal L}_2$} \begin{Lemma}\label{Lem4.02} $F$ satisfies (\ref{4.13}) if the Fourier coefficients of $F_2$ are defined by the following equations \beq\begin{array}{rlcl} &(\la k,\omega\ra I - A_{[m]})F^{k20}_{[m][n]}-F^{k20}_{[m][n]}A_{[n]}&=&{\rm i} R^{k20}_{[m][n]},\\ &(\la k,\omega\ra I - A_{[m]})F^{k11}_{[m][n]}+F^{k11}_{[m][n]}A_{[n]}&=&{\rm i} R^{k11}_{[m][n]},\quad |k|+ ||n|-|m|| \neq 0,\\ &(\la k,\omega\ra I + A_{[m]})F^{k02}_{[m][n]}+F^{k02}_{[m][n]}A_{[n]}&=&{\rm i} R^{k02}_{[m][n]}.\\ \end{array}\eeq \end{Lemma} {\bf Case 2: $n\in \Z^2_1\setminus{\cal L}_2,n'\in {\cal L}_2$} \begin{Lemma}\label{Lem4.03} $F$ satisfies (\ref{4.13}) if the Fourier coefficients of $F_2$ are defined by the following equations \beq\begin{array}{rlcl} &[(\la k,\omega\ra-\Omega_n) I-{\cal A}_{n'}]({F}_{nn'}^{k20},{F}_{nm'}^{k11})^T&=&i({P}_{nn'}^{k20},{P}_{nm'}^{k11})^T,\\ &[(\la k,\omega\ra+\Omega_n) I+{\cal A}_{n'}]({F}_{nn'}^{k02},{F}_{m'n}^{k11})^T&=&i({P}_{nn'}^{k02},{P}_{m'n}^{k11})^T,\\ &[(\la k,\omega\ra-\Omega_n) I+{\cal A}_{n'}]({F}_{nn'}^{k11},{F}_{nm'}^{k20})^T&=&i({P}_{nn'}^{k11},{P}_{nm'}^{k20})^T,\\ &[(\la k,\omega\ra+\Omega_n) I-{\cal A}_{n'}]({F}_{n'n}^{k11},{F}_{m'n}^{k02})^T&=&i({P}_{n'n}^{k11},{P}_{m'n}^{k02})^T. \end{array}\eeq \end{Lemma} {\bf Case 3: $n,n'\in {\cal L}_2$} \begin{Lemma}\label{Lem4.04} $F$ satisfies (\ref{4.13}) if the Fourier coefficients of $F_2$ are defined by the following equations \[ \begin{array}{rlcl} &(\la k,\omega\ra I-{\cal A}_{n}\otimes I+I\otimes{\cal A}_{n'})({F}_{n n'}^{k11},{F}_{n m'}^{k20},{F}_{m n '}^{k02},{F}_{m'm }^{k11})^T&=&i({P}_{n n'}^{k11},{P}_{n m'}^{k20},{P}_{m n '}^{k02},{P}_{m 'm}^{k11})^T,\\ &(\la k,\omega\ra I+{\cal A}_{n}\otimes I-I\otimes{\cal A}_{n'})({F}_{n'n}^{k11},{F}_{m'n}^{k02},{F}_{n 'm}^{k20},{F}_{m m' }^{k11})^T&=&i({P}_{n'n}^{k11},{P}_{m'n}^{k02},{P}_{n 'm}^{k20},{P}_{mm'}^{k11})^T,\\ &(\la k,\omega\ra I-{\cal A}_{n}\otimes I-I\otimes{\cal A}_{n'})({F}_{n n'}^{k20},{F}_{n m'}^{k11},{F}_{n 'm}^{k11},{F}_{m m '}^{k02})^T&=&i({P}_{n n'}^{k20},{P}_{n m'}^{k11},{P}_{n 'm}^{k11},{P}_{m m '}^{k02})^T,\\ &(\la k,\omega\ra I+{\cal A}_{n}\otimes I+I\otimes{\cal A}_{n'})({F}_{n n'}^{k02},{F}_{m'n}^{k11},{F}_{m n '}^{k11},{F}_{m' m }^{k20})^T&=&i({P}_{n n'}^{k02},{P}_{m'n}^{k11},{P}_{m n '}^{k11},{P}_{m ' m}^{k20})^T. \end{array} \] \end{Lemma} In the following, we only give the proof for the most complicated case.
\proof Insert $F$ into (\ref{4.13}) and compare the Fourier coefficients. More precisely, if $({n'},{m'})$ is a resonant pair in ${\cal L}_2$, we have $$\sum_{\quad |k|\le K,n'\in {\cal L}_2}[\la k,\omega\ra-(\Omega_{n'}-\omega_{i'})]{F}^{k10}_{n'}z_{n'}\kth-\frac{1}{2\pi^2}\sqrt{\xi_{i'}\xi_{j'}}{F}^{k01}_{m'}z_{n'}\kth=i\sum_{\quad |k|\le K,{n'}\in {\cal L}_2} {P}^{k10}_{n'}z_{n'}\kth$$ $$\sum_{\quad |k|\le K,{n'}\in {\cal L}_2}[\la k,\omega\ra+(\Omega_{m'}-\omega_{j'})]{F}^{k01}_{m'}z_{m'}\kth-\frac{1}{2\pi^2}\sqrt{\xi_{i'}\xi_{j'}}F^{k10}_{n'}z_{m'}\kth=i\sum_{\quad |k|\le K,{n'}\in {\cal L}_2} P^{k01}_{m'}z_{m'}\kth$$ which we rewrite in matrix form as $$(\la k,\omega\ra I-{\cal A}_{n'})({F}_{n'}^{k10},{F}_{m'}^{k01})^T=i({P}_{n'}^{k10},{P}_{m'}^{k01})^T,\quad|k|\le K,\ n'\in {\cal L}_2,$$ and similarly we obtain $$(\la k,\omega\ra I+{\cal A}_{n'})({F}_{n'}^{k01},{F}_{m'}^{k10})^T=i({P}_{n'}^{k01},{P}_{m'}^{k10})^T,\quad|k|\le K,\ n'\in {\cal L}_2.$$ If $({n},{m})$ and $({n'},{m'})$ are resonant pairs in ${\cal L}_2$, then comparing the Fourier coefficients we find that $({F}_{n n'}^{k11},{F}_{n m'}^{k20},{F}_{m n '}^{k02},{F}_{m'm }^{k11})^T$ satisfies \begin{eqnarray*} &&[\la k,\omega\ra-(\Omega_{n}-\omega_{i})+(\Omega_{n'}-\omega_{i'})]{F}^{k11}_{nn'}\kth-\frac{1}{2\pi^2}\sqrt{\xi_{i'}\xi_{j'}}{F}^{k20}_{nm'} \kth+\frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}{F}^{k02}_{mn'}\kth\\&&=i {P}^{k11}_{nn'}\kth \end{eqnarray*} and similarly \begin{eqnarray*} &&[\la k,\omega\ra-(\Omega_{n}-\omega_{i})-(\Omega_{m'}-\omega_{j'})]{F}^{k20}_{nm'}\kth+\frac{1}{2\pi^2}\sqrt{\xi_{i'}\xi_{j'}}{F}^{k11}_{nn'} \kth+\frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}{F}^{k11}_{m'm}\kth\\&&=i {P}^{k20}_{nm'}\kth \end{eqnarray*} \begin{eqnarray*} &&[\la k,\omega\ra+(\Omega_{m}-\omega_{j})+(\Omega_{n'}-\omega_{i'})]{F}^{k02}_{m n'}\kth-\frac{1}{2\pi^2}\sqrt{\xi_{i'}\xi_{j'}}{F}^{k11}_{m'm} \kth-\frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}{F}^{k11}_{nn'}\kth\\&&=i {P}^{k02}_{m n'}\kth \end{eqnarray*} \begin{eqnarray*} &&[\la k,\omega\ra+(\Omega_{m}-\omega_{j})-(\Omega_{m'}-\omega_{j'})]{F}^{k11}_{m' m}\kth+\frac{1}{2\pi^2}\sqrt{\xi_{i'}\xi_{j'}}{F}^{k02}_{m n'} \kth+\frac{1}{2\pi^2}\sqrt{\xi_{i}\xi_{j}}{F}^{k20}_{nm'}\kth\\&&=i {P}^{k11}_{m' m}\kth \end{eqnarray*} We rewrite them in matrix form \begin{eqnarray*} (\la k,\omega\ra I-{\cal A}_{n}\otimes I+I\otimes{\cal A}_{n'})({F}_{n n'}^{k11},{F}_{n m'}^{k20},{F}_{m n '}^{k02},{F}_{m'm }^{k11})^T=i({P}_{n n'}^{k11},{P}_{n m'}^{k20},{P}_{m n '}^{k02},{P}_{m 'm}^{k11})^T, |k|\le K,n ,n'\in {\cal L}_2\\ \end{eqnarray*} and similarly we obtain \begin{eqnarray*} (\la k,\omega\ra I+{\cal A}_{n}\otimes I-I\otimes{\cal A}_{n'})({F}_{n'n}^{k11},{F}_{m'n}^{k02},{F}_{n 'm}^{k20},{F}_{m m' }^{k11})^T=i({P}_{n'n}^{k11},{P}_{m'n}^{k02},{P}_{n 'm}^{k20},{P}_{mm'}^{k11})^T, |k|\le K,n ,n'\in {\cal L}_2\\ (\la k,\omega\ra I-{\cal A}_{n}\otimes I-I\otimes{\cal A}_{n'})({F}_{n n'}^{k20},{F}_{n m'}^{k11},{F}_{n 'm}^{k11},{F}_{m m '}^{k02})^T=i({P}_{n n'}^{k20},{P}_{n m'}^{k11},{P}_{n 'm}^{k11},{P}_{m m '}^{k02})^T, |k|\le K,n ,n'\in {\cal L}_2\\ (\la k,\omega\ra I+{\cal A}_{n}\otimes I+I\otimes{\cal A}_{n'})({F}_{n n'}^{k02},{F}_{m'n}^{k11},{F}_{m n '}^{k11},{F}_{m' m }^{k20})^T=i({P}_{n n'}^{k02},{P}_{m'n}^{k11},{P}_{m n '}^{k11},{P}_{m ' m}^{k20})^T, |k|\le K,n ,n'\in {\cal L}_2\\ \end{eqnarray*} In the other cases, the proof is similar, so we omit it. Thus these lemmas are obtained.
\qed \noindent{\bf Remark.} In the case that $({n},{m})$ and $({n'},{m'})$ are resonant pairs in ${\cal L}_2$, the indices $k,({n},{m}),({n'},{m'})$ are subject to the restrictions displayed in the definition of $F_2$, namely \begin{eqnarray*} &&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|n-n'|\neq0} ({F}^{k11}_{n'n}z_{n'} \bar{z}_{n}+{F}^{k11}_{nn'}z_{n}\bar{z}_{n'} )\kth\nonumber\\ &+&\sum_{|k|\le K,n\in {\cal L}_2,n'\in {\cal L}_2,|k|+|n-m'|\neq0\ {\rm or}\ |k|+|n'-m|\neq0} ({F}^{k20}_{n'n}z_{n'} z_{n}+{F}^{k02}_{n'm'}\bar{z}_{n'} \bar{z}_{n})\kth\nonumber\\ &+&\cdots\cdots\nonumber \end{eqnarray*} \\ Consider the equation $$Q^T_{[n]}(\la k,\omega\ra I-A_{[n]})F^{k10}_{[n]}=iQ^T_{[n]}R^{k10}_{[n]},\quad|k|\leq K,$$ where $Q_{[n]}$ is an orthogonal matrix diagonalizing $A_{[n]}$, that is, $\Lambda_{[n]}=Q^T_{[n]}A_{[n]}Q_{[n]}$ is diagonal. It can be rewritten as $$(\la k,\omega\ra I-Q^T_{[n]}A_{[n]}Q_{[n]})Q^T_{[n]}F^{k10}_{[n]}=iQ^T_{[n]}R^{k10}_{[n]},\quad|k|\leq K,$$ that is, $$(\la k,\omega\ra I-\Lambda_{[n]})\hat{F}^{k10}_{[n]}=i\hat{R}^{k10}_{[n]},\quad|k|\leq K.$$ Similarly, we solve \begin{eqnarray*} (\la k,\omega\ra I+\Lambda_{[n]})\hat{F}^{k01}_{[n]}&=&i\hat{R}^{k01}_{[n]},|k|\leq K,\\ (\la k,\omega\ra I-\Lambda_{[m]})\hat{F}^{k20}_{[m][n]}-\hat{F}^{k20}_{[m][n]}\Lambda_{[n]}&=&i\hat{R}^{k20}_{[m][n]},|k|\leq K,\\ (\la k,\omega\ra I-\Lambda_{[m]})\hat{F}^{k11}_{[m][n]}+\hat{F}^{k11}_{[m][n]}\Lambda_{[n]}&=&i\hat{R}^{k11}_{[m][n]},|k|\leq K,|k|+||n|-|m||\neq0,\\ (\la k,\omega\ra I+\Lambda_{[m]})\hat{F}^{k02}_{[m][n]}+\hat{F}^{k02}_{[m][n]}\Lambda_{[n]}&=&i\hat{R}^{k02}_{[m][n]},|k|\leq K, \end{eqnarray*} instead, where \begin{eqnarray*} \hat{R}^{kx}_{[n]}&=&Q^T_{[n]}R^{kx}_{[n]},x=10,01\\ \hat{R}^{kx}_{[m][n]}&=&Q^T_{[m]}R^{kx}_{[m][n]}Q_{[n]},x=20,11,02. \end{eqnarray*} \begin{eqnarray*} \hat{F}^{kx}_{[n]}&=&Q^T_{[n]}F^{kx}_{[n]},x=10,01\\ \hat{F}^{kx}_{[m][n]}&=&Q^T_{[m]}F^{kx}_{[m][n]}Q_{[n]},x=20,11,02. \end{eqnarray*} Now we focus on the following equations \begin{eqnarray*} (\la k,\omega\ra-\widetilde{\lambda}_j)\hat{F}^{k10}_{[n],j}&=&i\hat{R}^{k10}_{[n],j},|k|\leq K,j\in [n],\\ (\la k,\omega\ra+\widetilde{\lambda}_j)\hat{F}^{k01}_{[n],j}&=&i\hat{R}^{k01}_{[n],j},|k|\leq K,j\in [n],\\ (\la k,\omega\ra-\widetilde{\lambda}_i-\widetilde{\lambda}_j)\hat{F}^{k20}_{[m][n],ij}&=&i\hat{R}^{k20}_{[m][n],ij},|k|\leq K,i\in [m],j\in [n],\\ (\la k,\omega\ra-\widetilde{\lambda}_i+\widetilde{\lambda}_j)\hat{F}^{k11}_{[m][n],ij}&=&i\hat{R}^{k11}_{[m][n],ij},|k|\leq K,|k|+||n|-|m||\neq0,i\in [m],j\in [n],\\ (\la k,\omega\ra+\widetilde{\lambda}_i+\widetilde{\lambda}_j)\hat{F}^{k02}_{[m][n],ij}&=&i\hat{R}^{k02}_{[m][n],ij},|k|\leq K,i\in [m],j\in [n]. \end{eqnarray*} In the other cases, the proof is similar, so we omit it. In order to solve the last three equations, we need the following elementary algebraic result from matrix theory. \begin{Lemma}\label{Lem4.2} Let $A,B,C$ be, respectively, $n\times n$, $m\times m$, $n\times m$ matrices, and let $X$ be an $n\times m$ unknown matrix. The matrix equation $$AX-XB=C$$ is uniquely solvable if and only if $I_m\otimes A-B\otimes I_n$ is nonsingular. \end{Lemma} For a detailed proof, we refer the reader to the Appendix in \cite{YJ}.\\ \noindent{\bf Remark.} Taking the transpose of the third equation in Lemma \ref{Lem4.02}, one sees that $(F^{k02}_{[m][n]})^T$ satisfies the same equation as $F^{k02}_{[n][m]}$.
Then (by the uniqueness of the solution) it follows that $(F^{k02}_{[n][m]})=(F^{k02}_{[m][n]})^T$ and $(F^{-k11}_{[n][m]})=\overline{(F^{k11}_{[m][n]})^T}$. \subsection{Estimation for the coefficients of $F$}\label{4.2} Let us consider $F^{k20}_{[m][n]}$ for instance; the other terms can be treated in an analogous way. By the construction above, one sees that $$F^{k20}_{[m][n],ij}=i\sum_{m_1,n_1}\frac{Q_{[m],im_1}\hat{R}^{k20}_{[m][n],m_1,n_1}Q^T_{[n],n_1j}}{\la k,\omega\ra-\widetilde{\lambda}_i-\widetilde{\lambda}_j}.$$ Then $$|F^{k20}_{[m][n],ij}|\leq c\varepsilon\frac{K^\tau}{\gamma}e^{K^{1+3\varepsilon}\rho}e^{-\rho|i+j|}e^{-|k|r},$$ where we used the factor $e^{K^{1+3\varepsilon}\rho}$ to recover the exponential decay under the assumption $$K^{1+3\varepsilon}\rho=1.$$ Moreover, $$\|F^{k20}_{[m][n]}\|\leq cK_{\nu}^{3\varepsilon}\varepsilon_{\nu+1}\gamma^{-5}K_{\nu}^{5(\tau+1)}K_{\nu}^{3\varepsilon}\leq \varepsilon_{\nu+1}^{\frac13}$$ under the assumption $$\varepsilon_{\nu+1}=c\gamma^{-5}(r_{\nu}-r_{\nu+1})^{-c} K_{\nu}^{5(\tau+1)}\varepsilon_{\nu}^{\frac43}.$$ \subsection{Estimation on the coordinate transformation}\label{4.3} \sss We proceed to estimate $X_F$ and $\phi_F^1$. We start with the following lemma. \begin{Lemma}\label{Lem4.3} Let $D_i=D( r_++\frac{i}4 (r-r_+), \frac i4s)$, $0 <i \le 4$. Then \begin{equation}\label{4.20} \|X_F\|_{D_3, \Cal O}\le c\gamma^{-5}K^{5(\tau+1)}(r-r_+)^{-c}\varepsilon. \end{equation} \end{Lemma} In the next lemma, we give some estimates for $\phi_F^t$. The formula (\ref{4.26}) will be used to prove that our coordinate transformation is well defined, and inequality (\ref{4.27}) will be used to check the convergence of the iteration. \begin{Lemma}\label{Lem4.4} Let $\eta=\varepsilon^{\frac 13}, D_{i\eta}= D(r_++\frac {i}4(r-r_+),\frac i4 \eta s), 0 <i \le 4$. If $\varepsilon\ll \frac 12\gamma^{\frac{15}{2}}K^{-{\frac{15}{2}}(\tau+1)}(r-r_+)^c$, we then have \begin{equation} \phi_F^t: D_{2\eta} \to D_{3\eta} ,\ \ \ -1 \le t\le 1. \label{4.26} \end{equation} Moreover, \begin{equation} \|D\phi_F^t-Id\|_{D_{1\eta}}< c \gamma^{-5}K^{5(\tau+1)}(r-r_+)^{-c}\varepsilon. \label{4.27} \end{equation} \end{Lemma} \proof Let $$\|D^mF\|_{D,\Cal O} =\max \{ \|\frac{\partial^{|i|+|l|+|\alpha|+|\beta|}}{\partial \theta^{i}\partial I^{l} \partial z^\alpha\partial{\bar z^\beta}} F\|_{D, \Cal O}, |i|+|l|+|\alpha|+|\beta|=m\ge 2\}.$$ Notice that $F$ is a polynomial of degree 1 in $I$ and degree 2 in $z$, $\bar z$. From (\ref{2.6}), (\ref{4.20}) and the Cauchy inequality, it follows that \begin{equation}\|D^mF\|_{D_2, \Cal O }< c \gamma^{-5}K^{5(\tau+1)}(r-r_+)^{-c}\varepsilon,\label{4.28} \end{equation} for any $m\ge 2$. To get the estimates for $\phi_F^t$, we start from the integral equation $$\phi_F^t=id+\int_0^tX_F\circ \phi_F^s\,ds,$$ so that $\phi_F^t: D_{2\eta} \to D_{3\eta}$, $-1\le t\le 1$, which follows directly from (\ref{4.28}). Since $$D\phi_F^t=Id+\int_0^t(DX_F) D\phi_F^s\,ds= Id+\int_0^t J(D^2F) D\phi_F^s\,ds,$$ \noindent where $J$ denotes the standard symplectic matrix $\left(\begin{array}{cc} 0&-I \\ I&0 \end{array}\right)$, it follows that \begin{equation}\|D\phi_F^t-Id\|\le 2\|D^2F\|< c \gamma^{-5}K^{5(\tau+1)}(r-r_+)^{-c}\varepsilon. \label{4.29} \end{equation} Consequently, Lemma \ref{Lem4.4} follows.
\qed \subsection{Estimation for the new normal form}\label{no4.4} The map $\phi_F^1$ defined above transforms $H$ into $H_+=N_++{\cal B}_++\bar{\cal B}_++P_+$ (see (\ref{4.11}) and (\ref{4.13})) with the normal form $N_+$: \begin{eqnarray*} N_+&=& N+P_{0000}+\la\hat{\omega}, I\ra+\sum_{[n]}\la P_{[n][n]}^{011}z_{[n]},\bar z_{[n]}\ra+\sum_{n'\in {\cal L}_2}({P}_{n'n'}^{011}z_{n'}\bar z_{n'}+{P}_{m'm'}^{011}z_{m'}\bar z_{m'}) \nonumber\\ &=&\la\omega_+, I\ra + \sum_{[n]}\la A_{[n]}^+z_{[n]} ,\bar z_{[n]}\ra+\sum_{n'\in {\cal L}_2}[(\Omega_{n'}^+-\omega_{i'})z_{n'}\bar z_{n'}+(\Omega_{m'}^+-\omega_{j'})z_{m'}\bar z_{m'}] \end{eqnarray*} where \beq\label{frequenciesomega} \omega_+=\omega+P_{0l00} (|l|=1), \eeq $$A_{[n]}^+=A_{[n]}+R_{[n][n]}^{011}=A_{[n]}+(R_{ij}^{011})_{i\in [n],j\in [n]},\quad R_{ij}^{011}=0\ {\rm for}\ |i-j|> K,\quad R_{ij}^{011}=P_{ij}^{011}\ {\rm for}\ |i-j|\leq K,$$ $$\Omega_{n'}^+=\Omega_{n'}+{P}_{n'n'}^{011},\quad \Omega_{m'}^+=\Omega_{m'}+{P}_{m'm'}^{011},\quad n'\in {\cal L}_2.$$ Now we prove that $N_+$ shares the same properties as $N$. By the regularity of $X_P$ and by Cauchy estimates, we have \begin{equation}\label{4.32} |\omega_{+}-\omega|<\varepsilon, \quad |P_{ij+}^{011}-P_{ij}^{011}|<\varepsilon e^{-|i-j|\rho}. \end{equation} It follows that for $|k|\le K$, $$|\la k,\omega+P_{0l00}\ra| \ge|\la k,\omega\ra|-\varepsilon K\ge \frac{\gamma}{K^\tau}-\varepsilon K\ge \frac{\gamma}{K_+^\tau}, $$ $$|\la k,\omega+P_{0l00}\ra +\widetilde{\lambda}^+_j| \geq |\la k,\omega\ra +\widetilde{\lambda}_j|-\varepsilon K\geq \frac{\gamma}{K^\tau}-\varepsilon K\geq\frac{\gamma}{K_+^\tau}. $$ Similarly, we have $$|\la k,\omega+P_{0l00}\ra +\widetilde{\lambda}^+_i\pm\widetilde{\lambda}^+_j| \geq\frac{\gamma}{K_+^\tau}.$$ In the other cases the proof is similar, so we omit it. This means that in the next KAM step, the small denominator conditions are automatically satisfied for $|k|\le K$. The following bounds will be used for the measure estimates: $$\sup_{\xi\in \Cal O}\max_{d\leq 4}\|\partial_{\xi}^d(A_{[n]}^+-A_{[n]})\|\leq c\varepsilon,$$ $$\sup_{\xi\in \Cal O}\max_{d\leq 4}|\partial_{\xi}^d(\Omega_{n'}^+-\Omega_{n'})|\leq \varepsilon,$$ $$\sup_{\xi\in \Cal O}\max_{d\leq4}|\partial_\xi^d(\omega_+-\omega)|\leq \varepsilon,$$ and $$|P^{011}_{ij+}-P_{ij}^{011}|_{\Cal O}\leq\varepsilon e^{-|i-j|\rho}.$$ \subsection{Estimation for the new perturbation}\label{4.5} Since \begin{eqnarray*} P_+&=&\int_0^1 (1-t)\{\{N+{\cal B}+\bar{\cal B},F\},F\}\circ \phi_F^{t}dt+\int_0^1 \{R,F\}\circ \phi_F^{t}dt +(P-R)\circ \phi^1_F\\ &=&\int_0^1 \{R(t),F\}\circ \phi_F^{t}dt +(P-R)\circ \phi^1_F, \end{eqnarray*} where $R(t)=(1-t)(N_++{\cal B}_++\bar{\cal B}_+-N-{\cal B}-\bar{\cal B})+tR$, we have $$ X_{P_+}=\int_0^1 (\phi_F^{t})^*X_{\{R(t),F\}} dt +(\phi^1_F)^*X_{(P-R)}. $$ According to Lemma \ref{Lem4.4}, $$\|D\phi_F^t-Id\|_{D_{1\eta}}< c \gamma^{-5}K^{5(\tau+1)}(r-r_+)^{-c}\varepsilon, \quad -1\le t\le 1, $$ thus $$\|D\phi_F^t\|_{D_{1\eta}}\le 1+\|D\phi_F^t-Id\|_{D_{1\eta}}\le 2, \quad -1\le t\le 1.$$ Due to Lemma \ref{Lem7.3}, $$ \|X_{\{R(t),F\}}\|_{D_{2\eta}}\le c \gamma^{-5} K^{5(\tau+1)}(r-r_+)^{-c} \eta^{-2} \varepsilon^2$$ and $$ \|X_{(P-R)}\|_{D_{2\eta}}\le c \eta \varepsilon, $$ so we have $$ \|X_{P_+}\|_{D_\rho(r_+,s_+)}\le c\eta \varepsilon + c \gamma^{-5} K^{5(\tau+1)}(r-r_+)^{-c}\eta^{-2} \varepsilon^2\le c\varepsilon_+. 
$$ \subsection{Verification of $(A5)$ after one step of KAM iteration}\label{4.6} Since \begin{eqnarray*} P_+&=&P-R+\{P,F\}+\frac{1}{2!}\{\{N+{\cal B}+\bar{\cal B},F\},F\}+\frac{1}{2!}\{\{P,F\},F\}\\ &&+\cdots+ \frac{1}{n!}\{\cdots\{N+{\cal B}+\bar{\cal B},\underbrace{F\}\cdots ,F}_n\}+\frac{1}{n!}\{\cdots\{P,\underbrace{F\}\cdots ,F}_n\}+\cdots, \end{eqnarray*} for a fixed $c\in\Z^2\setminus \{0\}$ and $|n-m|>K$ with $K\geq \frac{1}{\rho-\rho_+}\ln(\frac{\varepsilon}{\varepsilon_+})$, we have $$\|\frac{\partial^2(P-R)}{\partial z_{n+tc}\partial \bar z_{m+tc}}-\lim_{t\to\infty}\frac{\partial^2(P-R)} {\partial z_{n+tc}\partial \bar z_{m+tc}}\|\leq \frac{\varepsilon}{|t|}e^{-|n-m|\rho} \leq \frac{\varepsilon_+}{|t|}e^{-|n-m|\rho_+}.$$ That is to say, $P-R$ satisfies $(A5)$ with $K_+,\varepsilon_+,\rho_+$ in place of $K,\varepsilon,\rho$. The proof that the remaining terms satisfy $(A5)$ consists of the following two lemmas. \begin{Lemma}\label{Ftoplitz} $F$ satisfies $(A5)$ with $\varepsilon^{\frac 23}$ in place of $\varepsilon$. \end{Lemma} For the proof see \cite{GXY}. \begin{Lemma}\label{toplitz} Assume that $P$ satisfies $(A5)$, $F$ satisfies $(A5)$ with $\varepsilon^{\frac 23}$ in place of $\varepsilon$, and $$\frac{\partial^2F}{\partial z_n\partial z_m}=0\ (|n+m|>K), \quad \frac{\partial^2F}{\partial z_n\partial \bar z_m}=0\ (|n-m|>K),\quad \frac{\partial^2F}{\partial \bar z_n\partial {\bar z}_m}=0\ (|n+m|>K);$$ then $\{P,F\}$ satisfies $(A5)$ with $\varepsilon_+$ in place of $\varepsilon$. \end{Lemma} For the proof see \cite{GXY}.\\ A KAM-step cycle is now completed. \section{Iteration Lemma and Convergence} \noindent For any given $s,\varepsilon,r, \gamma$ and for all $\nu\ge 1$, we define the following sequences: \[ r_{\nu+1}=r(1-\sum_{i=2}^{\nu+2}2^{-i}),\] \beq\varepsilon_{\nu+1}=c\gamma^{-5}(r_{\nu}-r_{\nu+1})^{-c} K_{\nu}^{5(\tau+1)}\varepsilon_{\nu}^{\frac43},\eeq \[\eta_{\nu+1}=\varepsilon_{\nu+1}^{\frac13}, \quad L_{\nu+1}=L_{\nu}+\varepsilon_{\nu},\] \[s_{\nu+1}=2^{-2}\eta_{\nu}s_{\nu}=2^{-2{(\nu+1)}}(\prod_{i=0}^{\nu}\varepsilon_i)^{\frac13}s_0, \] $$K_{\nu+1}^{1+3\varepsilon}\rho_{\nu+1}=1,$$ $$K_{\nu+1}=3K_\nu=3^{\nu+1}K_0,$$ \[\Delta_{\nu+1}=K^3_\nu, \] where $c$ is a constant, $\gamma=\varepsilon_0^{\frac{1}{50}}\gg \varepsilon_0$, the parameters $r_0,\varepsilon_0,s_0$ are defined to be $r,\varepsilon,s$, respectively, and $K_0$ is defined by $K_0^2 e^{-K_0(r_0-r_1)}=\varepsilon_0^{\frac 13}$. \subsection{Iteration lemma} The preceding analysis can be summarized as follows. \begin{Lemma}\label{Lem5.1} Let $\varepsilon$ be small enough and $\nu\ge 0$. Suppose that: \noindent (1). $N_\nu+{\cal B}_\nu+\bar{\cal B}_\nu$ is a normal form with parameters $\xi$ satisfying \[ |\langle k,\omega_\nu\rangle|\ge \frac{\gamma}{K_\nu^\tau}, k\neq 0,\] \[|\langle k,\omega_\nu\rangle \pm\widetilde{\lambda}_j^\nu|\ge \frac{\gamma}{K_\nu^\tau},j\in{[n]}, \] \[|\langle k,\omega_\nu\rangle \pm \widetilde{\lambda}_i^\nu\pm \widetilde{\lambda}_j^\nu|\ge \frac{\gamma}{K_\nu^\tau},i\in{[m]},j\in{[n]},\] $$|\det(\la k,\omega_\nu\ra I\pm{\cal A}_n^\nu\otimes I_2 \pm I_2\otimes {\cal A}_{n'}^\nu)|\geq \frac{\gamma}{K_\nu^\tau},k\neq 0,n,n'\in{\cal L}_2$$ \beq\label{nonresonanceconditions} \eeq \indent on a closed set $\Cal O_{\nu}$ of $\R^b$ for all $0<|k|\leq K_\nu$. 
Moreover, suppose that $\omega_\nu(\xi)$, $P^{011}_{ij\nu}(\xi)$, $A_{[n]}^{\nu}(\xi)$ are $C_W^4$ smooth and satisfy $$\sup_{\xi\in \Cal O_\nu}\max_{d\leq 4}\|\partial_{\xi}^d(A_{[n]}^\nu-A_{[n]}^{\nu-1})\|\leq c\varepsilon_{\nu-1},$$ $$\sup_{\xi\in \Cal O_\nu}\max_{d\leq 4}|\partial_{\xi}^d(\Omega_{n'}^\nu-\Omega_{n'}^{\nu-1})|\leq \varepsilon_{\nu-1},$$ $$\sup_{\xi\in \Cal O_\nu}\max_{d\leq4}|\partial_\xi^d(\omega_\nu-\omega_{\nu-1})|\leq \varepsilon_{\nu-1},$$ and $$|P^{011}_{ij\nu}-P_{ij(\nu-1)}^{011}|_{\Cal O_\nu}\leq\varepsilon_{\nu-1} e^{-|i-j|\rho}$$ in the sense of Whitney. \noindent (2). $N_\nu+{\cal B}_\nu+\bar{\cal B}_\nu+P_\nu$ satisfies $(A5)$ with $K_\nu,\varepsilon_\nu,\rho_\nu$ and \[\|X_{P_\nu}\|_{D(r_\nu, s_\nu),\Cal O_{\nu}}\le \varepsilon_\nu.\] Then there is a subset $\Cal O_{\nu+1}\subset\Cal O_{\nu}$, \[\Cal O_{\nu+1}=\Cal O_\nu\setminus\Cal R^{\nu+1}, \] $$\Cal R^{\nu+1}=\bigcup_{K_{\nu}<|k|\le K_{\nu+1},[n],[m],n,n'}(\Cal R_k^{\nu+1}\bigcup \Cal R_{k[n]}^{\nu+1}\bigcup\Cal R_{k[n][m]}^{\nu+1}\bigcup{\Cal C^{\nu+1}_{knn'}(\gamma)}), $$ where $$ \Cal R_k^{\nu+1}=\{\xi\in \Cal O_{\nu}:|\langle k,\omega_{\nu+1}\rangle|< \frac{\gamma}{K_{\nu+1}^\tau}, k\neq 0\},$$ $$\Cal R_{k[n]}^{\nu+1}=\{\xi\in \Cal O_{\nu}:|\langle k,\omega_{\nu+1}\rangle \pm\widetilde{\lambda}_j^{\nu+1}|< \frac{\gamma}{K_{\nu+1}^\tau}\},$$ $$\Cal R_{k[n][m]}^{\nu+1}=\{\xi\in \Cal O_{\nu}:\ |\langle k,\omega_{\nu+1}\rangle \pm \widetilde{\lambda}_i^{\nu+1}\pm \widetilde{\lambda}_j^{\nu+1}|< \frac{\gamma}{K_{\nu+1}^\tau},i\in [m],j\in[n] \},$$ $$\Cal C^{\nu+1}_{knn'}=\{\xi\in \Cal O_{\nu}:\ |\det(\la k,\omega_{\nu+1}\ra I\pm{\cal A}_n^{\nu+1}\otimes I_2 \pm I_2\otimes {\cal A}_{n'}^{\nu+1})|< \frac{\gamma}{K_{\nu+1}^\tau},k\neq 0,n,n'\in{\cal L}_2 \},$$ with $\omega_{\nu+1}=\omega_\nu+P_{0l00}^\nu$, and a symplectic transformation of variables \beq \Phi_\nu:D_{\rho_\nu}(r_{\nu+1},s_{\nu +1}) \times\Cal O_{\nu}\to D_{\rho_\nu}(r_{\nu},s_{\nu}),\eeq such that on $D_{\rho_{\nu+1}}(r_{\nu+1},s_{\nu +1})\times\Cal O_{\nu+1}$, $H_{\nu+1}=H_\nu\circ\Phi_\nu$ has the form \begin{eqnarray*} H_{\nu+1}&=&e_{\nu+1}+\la\omega_{\nu+1},I\ra+\sum_{[n]} \la A_{[n]}^{\nu+1}(\xi)z_{[n]},\bar z_{[n]}\ra\nonumber\\ &&+\sum_{n'\in {\cal L}_2}[(\Omega_{n'}^{\nu+1}-\omega_{i'})z_{n'}\bar z_{n'}+(\Omega_{m'}^{\nu+1}-\omega_{j'})z_{m'}\bar z_{m'}]+{\cal B}_{\nu+1}+\bar{\cal B}_{\nu+1}+P_{\nu+1}, \end{eqnarray*} with $$\sup_{\xi\in \Cal O_\nu}\max_{d\leq 4}\|\partial_{\xi}^d(A_{[n]}^{\nu+1}-A_{[n]}^{\nu})\|\leq c\varepsilon_{\nu},$$ $$\sup_{\xi\in \Cal O_\nu}\max_{d\leq 4}|\partial_{\xi}^d(\Omega_{n'}^{\nu+1}-\Omega_{n'}^{\nu})|\leq \varepsilon_{\nu},$$ $$\sup_{\xi\in \Cal O_\nu}\max_{d\leq4}|\partial_\xi^d(\omega_{\nu+1}-\omega_{\nu})|\leq \varepsilon_{\nu},$$ and $$|P^{011}_{ij(\nu+1)}-P_{ij\nu}^{011}|_{\Cal O_\nu}\leq\varepsilon_{\nu} e^{-|i-j|\rho}$$ in the sense of Whitney. 
And $$\|X_{P_{\nu+1}}\|_{D(r_{\nu+1}, s_{\nu+1}),\Cal O_{{\nu+1}}}\le \varepsilon_{\nu+1}.$$ \end{Lemma} \subsection{Convergence} To apply the iteration lemma with $\nu=0$, suppose that the assumptions of Theorem \ref{KAM} are satisfied, and recall that $$\varepsilon_0=\varepsilon,r_0=r, s_0=s,L_0=L, N_0=N,{\cal B}_0={\cal B}, P_0=P,\gamma=\varepsilon^{\frac{1}{50}}, K_0^2 e^{-K_0(r_0-r_1)}=\varepsilon_0^{\frac 13},\quad $$ $$ \Cal O_0= \left\{\xi\in \Cal O: \begin{array}{rcl} &&|\langle k,\omega\rangle|\ge \frac{\gamma}{K_0^\tau}, k\neq 0\\ &&|\langle k,\omega\rangle \pm\widetilde{\lambda}_j|\ge \frac{\gamma}{K_0^\tau},j\in{[n]}\\ &&|\langle k,\omega\rangle \pm \widetilde{\lambda}_i\pm \widetilde{\lambda}_j|\ge \frac{\gamma}{K_0^\tau},i\in{[m]},j\in{[n]}\\ &&|\det(\la k,\omega\ra I\pm{\cal A}_n\otimes I_2 \pm I_2\otimes {\cal A}_{n'})|\geq \frac{\gamma}{K_0^\tau},n,n'\in{\cal L}_2 \end{array} \right\}.$$ Then the assumptions of the iteration lemma are satisfied when $\nu=0$ if $\varepsilon_0$ and $\gamma$ are sufficiently small. Inductively, we obtain the following sequences: \[ \Cal O_{\nu+1}\subset\Cal O_\nu,\] \[\Psi^\nu=\Phi_0\circ\Phi_1\circ\cdots\circ\Phi_\nu:D_{\rho_\nu}(r_{\nu+1},s_{\nu+1})\times\Cal O_\nu\to D_{\rho_0}(r_0,s_0),\nu\ge 0, \] \[H\circ\Psi^\nu=H_{\nu+1}=N_{\nu+1}+{\cal B}_{\nu+1}+\bar{\cal B}_{\nu+1}+P_{\nu+1}.\] \indent Let $\tilde{\Cal O}=\cap_{\nu=0}^\infty \Cal O_\nu$. As in \cite{P1,P2}, thanks to Lemma {\ref{Lem4.4}}, one concludes that $N_\nu,\Psi^\nu,D\Psi^\nu,\omega_{\nu}$ converge uniformly on $D_{\frac 12r}(\frac 12r,0)\times\tilde{\Cal O}$ with \begin{eqnarray*} N_\infty+{\cal B}_\infty+\bar{\cal B}_\infty&=&e_\infty+\la\omega_\infty,I\ra+\sum_{[n]} \la A_{[n]}^\infty(\xi)z_{[n]},\bar z_{[n]}\ra\nonumber\\ &&+\sum_{n'\in {\cal L}_2}[(\Omega_{n'}^{\infty}-\omega_{i'})z_{n'}\bar z_{n'}+(\Omega_{m'}^{\infty}-\omega_{j'})z_{m'}\bar z_{m'}]+{\cal B}_\infty+\bar{\cal B}_\infty. \end{eqnarray*} \noindent Since $$ \varepsilon_{\nu+1}=c\gamma^{-5}K_{\nu}^{5(\tau+1)}(r_\nu-r_{\nu+1})^{-c}\varepsilon_\nu^{\frac43} , $$ it follows that $\varepsilon_{\nu+1}\to 0$ provided that $\varepsilon$ is sufficiently small, and we also have $\sum_{\nu=0}^{\infty}\varepsilon_{\nu}\leq2\varepsilon$. Let $\phi_H^t$ be the flow of $X_H$. Since $H\circ\Psi^\nu=H_{\nu+1}$, we have \beq\label{5.7} \phi_H^t\circ\Psi^\nu=\Psi^\nu\circ\phi_{H_{\nu+1}}^t. \eeq The uniform convergence of $\Psi^\nu,D\Psi^\nu,\omega_{\nu}$ and $X_{H_{\nu}}$ implies that the limits can be taken on both sides of (\ref{5.7}). Hence, on $D_{\frac 12r}(\frac 12r,0)\times\tilde{\Cal O}$ we get \beq\label{5.8} \phi_H^t\circ\Psi^\infty=\Psi^\infty\circ\phi_{H_{\infty}}^t\eeq and $$ \Psi^\infty:D_{\frac 12r}(\frac 12r,0)\times\tilde{\Cal O}\to D_\rho(r,s) \times \Cal O. $$ It follows from (\ref{5.8}) that $$ \phi_H^t(\Psi^\infty(\T^b\times \{\xi\}))=\Psi^\infty\phi_{N_\infty}^t(\T^b\times\{\xi\})=\Psi^\infty(\T^b\times\{\xi\}) $$ \noindent for $\xi\in\tilde{\Cal O}$. This means that $\Psi^\infty(\T^b\times\{\xi\})$ is an embedded torus which is invariant for the original perturbed Hamiltonian system at $\xi\in \tilde{\Cal O}$. We remark here that the frequencies $\omega_\infty(\xi)$ associated to $\Psi^\infty(\T^b\times\{\xi\})$ are slightly different from $\omega(\xi)$. The normal behavior of the invariant torus is governed by the normal frequencies $A_{[n]}^\infty,\Omega_{n'}^\infty$.\qed \section{Measure Estimates} This section is the essential part of this paper. For notational convenience, let $\Cal O_{-1}=\Cal O$, $K_{-1}=0$. 
Then at the $\nu^{\rm th}$ step of the KAM iteration, we have to exclude the following resonant set: $$\Cal R^{\nu+1}=\bigcup_{K_{\nu}<|k|\le K_{\nu+1},[n],[m],n,n'}(\Cal R_k^{\nu+1}\bigcup \Cal R_{k[n]}^{\nu+1}\bigcup\Cal R_{k[n][m]}^{\nu+1}\bigcup{\Cal C^{\nu+1}_{knn'}(\gamma)}), $$ where $$ \Cal R_k^{\nu+1}=\{\xi\in \Cal O_{\nu}:|\langle k,\omega_{\nu+1}\rangle|< \frac{\gamma}{K_{\nu+1}^\tau}, k\neq 0\},$$ $$\Cal R_{k[n]}^{\nu+1}=\{\xi\in \Cal O_{\nu}:|\langle k,\omega_{\nu+1}\rangle \pm\widetilde{\lambda}_j^{\nu+1}|< \frac{\gamma}{K_{\nu+1}^\tau}\},$$ $$\Cal R_{k[n][m]}^{\nu+1}=\{\xi\in \Cal O_{\nu}:\ |\langle k,\omega_{\nu+1}\rangle \pm \widetilde{\lambda}_i^{\nu+1}\pm \widetilde{\lambda}_j^{\nu+1}|< \frac{\gamma}{K_{\nu+1}^\tau},i\in [m],j\in[n] \},$$ $$\Cal C^{\nu+1}_{knn'}=\{\xi\in \Cal O_{\nu}:\ |\det(\la k,\omega_{\nu+1}\ra I\pm{\cal A}_n^{\nu+1}\otimes I_2 \pm I_2\otimes {\cal A}_{n'}^{\nu+1})|< \frac{\gamma}{K_{\nu+1}^\tau},k\neq 0,n,n'\in{\cal L}_2 \}.$$ Recall that $ \omega_{\nu+1}(\xi)=\omega(\xi)+\sum_{j=0}^\nu P_{0l00}^{j}(\xi)$ with $ |\sum_{j=0}^\nu P_{0l00}^{j}(\xi)|_{\Cal O_\nu}<\varepsilon $, and that $$\|A_{[n]}^{\nu +1}(\xi)-A_{[n]}(\xi)\|_{\Cal O_{\nu}}\leq \sum_{j=0}^\nu\|R_{[n][n]}^{011,j}\|\leq\varepsilon,$$ $$|\Omega_{n'}^{\nu +1}(\xi)-\Omega_{n'}(\xi)|_{\Cal O_{\nu}}\leq \sum_{j=0}^\nu |R_{n'n'}^{011,j}|\leq\varepsilon.$$ \noindent{\bf Remark.} From Section \ref{no4.4}, one has that at the $(\nu+1)^{\rm th}$ step, the small divisor conditions are automatically satisfied for $|k|\le K_{\nu}$. Hence, we only need to excise the above resonant set $\Cal R^{\nu+1}$. In the following, we only give the proof for the most complicated cases, $\{\xi\in \Cal O_{\nu}:|\langle k,\omega_{\nu+1}\rangle + \widetilde{\lambda}_n^{\nu+1}- \widetilde{\lambda}_{n'}^{\nu+1}|< \frac{\gamma}{K_{\nu+1}^\tau},n,n'\in{\cal L}_1\}$ and $\{\xi\in \Cal O_{\nu}:|\det(\la k,\omega_{\nu+1}\ra I+{\cal A}_n^{\nu+1}\otimes I_2 - I_2\otimes {\cal A}_{n'}^{\nu+1})|< \frac{\gamma}{K_{\nu+1}^\tau},n,n'\in{\cal L}_2\}$. When $n\in {\cal L}_1,n'\in {\cal L}_2$, there will be no small divisors. In the other cases, the proof is similar, so we omit it. For simplicity, set $M^{\nu+1}=|\langle k,\omega_{\nu+1}\rangle + \widetilde{\lambda}_n^{\nu+1}- \widetilde{\lambda}_{n'}^{\nu+1}|$, $Y^{\nu+1}=\la k,\omega_{\nu+1}\ra I+{\cal A}_n^{\nu+1}\otimes I_2 - I_2\otimes {\cal A}_{n'}^{\nu+1}$ and $Y^{\nu}=\la k,\omega_{\nu}\ra I+{\cal A}_n^{\nu}\otimes I_2 - I_2\otimes {\cal A}_{n'}^{\nu}$; then for $|k|\leq K_\nu$, \begin{eqnarray*} \|(Y^{\nu+1})^{-1}\|&=&\|(Y^\nu+(Y^{\nu+1}-Y^{\nu}))^{-1}\|\\ &=&\|(I+(Y^{\nu})^{-1}(Y^{\nu+1}-Y^{\nu}))^{-1}(Y^{\nu})^{-1}\|\\ &\leq&2\|(Y^{\nu})^{-1}\|\leq2\frac{K^\tau_{\nu}}{\gamma}\leq\frac{K^\tau_{\nu+1}}{\gamma}. \end{eqnarray*} \begin{Lemma} For any given $n, n'\in \Bbb Z_1^2$ with $|n-n'|\leq K_{\nu+1}$, either $|\langle k,\omega_{\nu+1}\rangle + \widetilde{\lambda}_n^{\nu+1}- \widetilde{\lambda}_{n'}^{\nu+1}|>1$ or there are $n_0, n'_0, c\in\Z^2$ with $|n_0|, |n'_0|, |c|\le 3K_{\nu+1}^2$ and $t\in \Z$, such that $n=n_0+tc$, $n'=n'_0+tc$. \end{Lemma} \proof Since $|n-n'|\leq K_{\nu+1}$, an elementary calculation gives \begin{eqnarray*}|n|^2-|n'|^2=|n-n'|^2+2\langle n-n',n'\rangle. \end{eqnarray*} If $|\langle n-n',n'\rangle|>K_{\nu+1}^2$, we have $|\langle k,\omega_{\nu+1}\rangle + \widetilde{\lambda}_n^{\nu+1}- \widetilde{\lambda}_{n'}^{\nu+1}|>1$, so there is no small divisor. In the case that $|\langle n-n',n'\rangle|\le K_{\nu+1}^2 $, the case $n-n'=0$ is clearly trivial. 
Assume $n-n'\neq 0$; without loss of generality, we assume that the first component $(n-n')_1$ of $n-n'$ is not zero. Let $$c=(-(n-n')_2,(n-n')_1).$$ Then $$c\perp (n-n')$$ and $c\in\Z^2\setminus\{0\}$ with $|c|\le |n-n'|\le K_{\nu+1}$. Clearly, $ c, n-n'$ are linearly independent, hence there exist $x_1, x_2\in \R$ such that $$n'=x_1c+x_2(n-n').$$ Set (here $[\cdot]$ denotes the integer part) $$t=[x_1];$$ then $t\in\Z$ and $|n'-tc|\le 2K_{\nu+1}^2$. Take $n'_0= n'-tc\in\Z^2$ and $n_0=n'_0+n-n'\in\Z^2$. We have $|n'_0|\le 2K_{\nu+1}^2$ and \[ |n_0|\le |n'_0|+|n-n'|\le 3K_{\nu+1}^2.\] \qed \begin{Lemma}\[\cup_{n,n'\in {\cal L}_1} \Cal R^{\nu+1}_{k[n][n']}\subset \cup_{{n_0,n'_0,c}\in \Bbb Z^2, t\in\Z} \Cal R_{k, n_0+tc, n'_0+tc}^{\nu+1},\] where $|n_0|, |n'_0|, |c|\le 3K_{\nu+1}^2$. \end{Lemma} \proof If $|\langle n-n',n'\rangle|> K_{\nu+1}^2$, then $\Cal R^{\nu+1}_{k[n][n']}=\emptyset.$ If $|\langle n-n',n' \rangle|\le K_{\nu+1}^2$, there exist $n_0, n'_0, c\in \Bbb Z^2, t\in\Z$ with $|n_0|, |n'_0|, |c|\le 3K_{\nu+1}^2$ such that $n=n_0+tc$, $n'=n'_0+tc$. Hence \[\cup_{n,n'\in {\cal L}_1} \Cal R^{\nu+1}_{k[n][n']}\subset \cup_{{n_0,n'_0,c}\in \Bbb Z^2, t\in\Z} \Cal R_{k, n_0+tc, n'_0+tc}^{\nu+1},\] where $|n_0|, |n'_0|, |c|\le 3K_{\nu+1}^2$.\qed \begin{Lemma} For fixed $k,n_0, n'_0, c$, one has $$ {\rm meas}(\cup_{t\in\Z} \Cal R_{k, n_0+tc, n'_0+tc}^{\nu+1})< c\frac{\gamma}{K_{\nu+1}^{\tau\over{2} }}. $$ \end{Lemma} \proof Due to the T\"oplitz-Lipschitz property of $N_{\nu}+{\cal B}_{\nu}+\bar{\cal B}_{\nu}+P_{\nu}$, we have $$|M^{\nu+1}(t)-\lim_{t\to\infty}M^{\nu+1}(t) |< \frac {\varepsilon_0}{|t|}.$$ We define the resonant set \beq\label{resonant3} \Cal R_{kn_0n'_0c\infty}^{\nu+1}=\{\xi\in \Cal O_{\nu}:|\lim_{t\to\infty}M^{\nu+1}(t)|<\frac {\gamma}{K_{\nu+1}^{\tau\over{2}}}\}.\nonumber\eeq For fixed $k,n_0, n'_0, c$, $${\rm meas}(\Cal R_{kn_0n'_0c\infty}^{\nu+1})< \frac{\gamma}{K_{\nu+1}^{\tau\over{2}}}. $$ Then for $\xi\in\Cal O_{\nu}\backslash \Cal R_{kn_0n'_0c\infty}^{\nu+1}$, we have $$|\lim_{t\to\infty}M^{\nu+1}(t)|\ge \frac{\gamma}{K_{\nu+1}^{\tau\over{2}}}.$$ Case 1: When $|t|>K_{\nu+1}^{\tau\over{2}}$, for $\xi\in\Cal O_{\nu}\backslash \Cal R_{kn_0n'_0c\infty}^{\nu+1}$, we have \begin{eqnarray*} &&|M^{\nu+1}(t)|\\ &\geq&|\lim_{t\to\infty}M^{\nu+1}(t)|-\frac {\varepsilon_0}{|t|}\\ &\geq& \frac {\gamma}{K_{\nu+1}^{\tau\over{2}}}-\frac {\varepsilon_0}{K_{\nu+1}^{\tau\over{2}}}\\ &\geq& \frac {\gamma}{2K_{\nu+1}^{\tau\over{2}}}.\end{eqnarray*} Case 2: When $|t|\le K_{\nu+1}^{\tau\over{2}}$, we define the resonant set \beq \Cal R_{kn_0n'_0ct}^{\nu+1}=\{\xi\in \Cal O_{\nu}:|M^{\nu+1}(t)|<\frac {\gamma}{K_{\nu+1}^{\tau}}\}.\nonumber\eeq For fixed $k,n_0, n'_0, c,t$, $${\rm meas}(\Cal R_{kn_0n'_0ct}^{\nu+1})< \frac{\gamma}{K_{\nu+1}^{\tau}}, $$ and then $${\rm meas}\{\cup_{|t|\le K_{\nu+1}^{\tau\over{2}}}\Cal R_{kn_0n'_0ct}^{\nu+1}\}<K_{\nu+1}^{{\tau\over{2}}}\frac {\gamma}{K_{\nu+1}^{\tau}}\le \frac {\gamma}{K_{\nu+1}^{\tau\over{2}}}.$$ As a consequence, $$ {\rm meas}(\cup_{t\in\Z} \Cal R_{k, n_0+tc, n'_0+tc}^{\nu+1})< c\frac{\gamma}{K_{\nu+1}^{\tau\over{2} }}. 
$$\qed For $K_\nu<|k|\leq K_{\nu+1}$, we consider $n,n'\in {\cal L}_2$ as an example; the other cases can be proved analogously. Assume that $(n,m)$ and $(n',m')$ are resonant pairs in ${\cal L}_2$; then the following lemmas hold. \begin{Lemma} For any given $n, n'\in \Bbb Z_1^2$ with $|n-n'|\leq K_{\nu+1}$, either $|\det(\la k,\omega_{\nu+1}\ra I+{\cal A}_n^{\nu+1}\otimes I_2 - I_2\otimes {\cal A}_{n'}^{\nu+1})|>1$ or there are $n_0, n'_0, c\in\Z^2$ with $|n_0|, |n'_0|, |c|\le 3K_{\nu+1}^2$ and $t\in \Z$, such that $n=n_0+tc$, $n'=n'_0+tc$. \end{Lemma} \begin{Lemma} \[\cup_{n,n'\in \Bbb Z_1^2} \Cal C_{knn'}^{\nu+1}\subset \cup_{{n_0,n'_0,c}\in \Bbb Z^2, t\in\Z} \Cal C_{k, n_0+tc, n'_0+tc}^{\nu+1},\] where $|n_0|, |n'_0|, |c|\le 3K_{\nu+1}^2$. \end{Lemma} \begin{Lemma} For fixed $k,n_0, n'_0, c$, one has $$ {\rm meas}(\cup_{t\in\Z} \Cal C_{k, n_0+tc, n'_0+tc}^{\nu+1})< c\frac{\gamma^{\frac14}}{K_{\nu+1}^{\frac{\tau}{20}}}. $$ \end{Lemma} \proof Due to the analysis above and the T\"{o}plitz-Lipschitz property of $N+{\cal B}+\bar{\cal B}+P$, the coefficient matrix $Y^{\nu+1}(t)$ has a limit as $t\rightarrow\infty$, $$\|Y^{\nu+1}(t)-\lim_{t\rightarrow\infty}Y^{\nu+1}(t)\|\leq \frac{\varepsilon_0}{|t|}.$$ We define the resonant set $$\Cal C_{kn_0n'_0c\infty}^{\nu+1}=\left\{\xi\in \Cal O_\nu:|\det\lim_{t\rightarrow\infty}Y^{\nu+1}(t)|< \frac{\gamma}{K_{\nu+1}^{\frac{\tau}{5}}}\right\}.$$ Then for $\xi\in\Cal O_{\nu}\backslash \Cal C_{kn_0n'_0c\infty}^{\nu+1}$, we have $$\|(\lim_{t\rightarrow\infty}Y^{\nu+1}(t))^{-1}\|\leq \frac{K_{\nu+1}^{\frac{\tau}{5}}}{\gamma}.$$ Since $$\|Y^{\nu+1}(t)-\lim_{t\rightarrow\infty}Y^{\nu+1}(t)\|\leq \frac{\varepsilon_0}{|t|},$$ for $|t|>K_{\nu+1}^{\frac{\tau}{5}}$ we have $$\|(Y^{\nu+1}(t))^{-1}\|\leq 2\frac{K_{\nu+1}^{\frac{\tau}{5}}}{\gamma}\leq\frac{K_{\nu+1}^{\tau}}{\gamma}.$$ For $|t|\leq K_{\nu+1}^{\frac{\tau}{5}}$, we define the resonant set \beq \Cal C_{kn_0n'_0ct}^{\nu+1}=\{\xi\in \Cal O_{\nu}:|\det Y^{\nu+1}(t)|< \frac{\gamma}{K_{\nu+1}^{\tau}}\}.\nonumber\eeq In addition, $$\inf_{\xi\in \Cal O}\max_{0<d\leq 4}|\partial^d_\xi(\det Y^{\nu+1}(t))|\geq\frac12|k|\geq\frac12K.$$ For fixed $k,n_0, n'_0, c,t$, $${\rm meas}(\Cal C_{kn_0n'_0ct}^{\nu+1})< (\frac{\gamma}{K_{\nu+1}^{\tau}})^{\frac14}, $$ and then $${\rm meas}\{\cup_{|t|\le K_{\nu+1}^{\tau\over{5}}}\Cal C_{kn_0n'_0ct}^{\nu+1}\}<K_{\nu+1}^{{\tau\over{5}}}(\frac {\gamma}{K_{\nu+1}^{\tau}})^{\frac14}\le \frac {\gamma^{\frac14}}{K_{\nu+1}^{{\tau\over{20}}}}.$$ As a consequence, $$ {\rm meas}(\cup_{t\in\Z} \Cal C_{k, n_0+tc, n'_0+tc}^{\nu+1})< c\frac {\gamma^{\frac14}}{K_{\nu+1}^{{\tau\over{20}}}}. 
$$\qed \begin{Lemma}\label{Lem6.1} $${\rm meas}(\bigcup_{K_{\nu}<|k|\le K_{\nu+1}} \Cal R_k^{\nu+1})\leq cK_{\nu+1}^{b}\frac {\gamma}{K_{\nu+1}^{\tau}}=c\frac {\gamma}{K_{\nu+1}^{\tau-b}},$$ $${\rm meas}(\bigcup_{K_{\nu}<|k|\le K_{\nu+1},[n]} \Cal R_{k[n]}^{\nu+1})\leq cK_{\nu+1}^{2+b}\frac {\gamma}{K_{\nu+1}^{\tau}}=c\frac {\gamma}{K_{\nu+1}^{\tau-2-b}},$$ $${\rm meas}(\bigcup_{K_{\nu}<|k|\le K_{\nu+1},[n],[ m]} \Cal R_{k[n][m]}^{\nu+1})\leq c\frac {\gamma}{K_{\nu+1}^{{\tau\over{2}}-12-b}},$$ $${\rm meas}(\bigcup_{K_{\nu}<|k|\le K_{\nu+1},n,n'} \Cal C_{knn'}^{\nu+1})\leq c\frac {\gamma^{\frac14}}{K_{\nu+1}^{{\frac{\tau}{20}}-12-b}}.$$ \end{Lemma} \begin{Lemma}\label{Lem6.2} Let $\tau>20(12+b+1)$; then the total measure that needs to be excluded along the KAM iteration satisfies \begin{eqnarray*} &&{\rm meas}(\bigcup_{\nu\ge 0}\Cal R^{\nu+1})\\ &=&{\rm meas}[\bigcup_{\nu\ge 0}\bigcup_{K_{\nu}<|k|\le K_{\nu+1},[n],[m],n,n'}(\Cal R_k^{\nu+1}\bigcup \Cal R_{k[n]}^{\nu+1}\bigcup\Cal R_{k[n][m]}^{\nu+1}\bigcup{\Cal C^{\nu+1}_{knn'}(\gamma)})]\\ &\le&c\sum_{\nu\ge 0}\frac{\gamma^{\frac14}}{K_{\nu+1}}\le c\gamma^{\frac14}. \end{eqnarray*} \end{Lemma} \section{Appendix} \sss \begin{Lemma}\label{Lem7.1} $$\|FG\|_{ D_\rho(r,s),\Cal O }\le \|F\|_{ D_\rho(r,s),\Cal O }\|G\|_{ D_\rho(r,s),\Cal O }.$$\end{Lemma} \proof Since $(FG)_{kl\alpha\beta}=\sum_{k',l',\alpha',\beta'}F_{k-k',l-l',\alpha-\alpha',\beta-\beta'} G_{k'l'\alpha'\beta'}$, we have \begin{eqnarray*} \|FG\|_{ D_\rho(r,s),\Cal O }&=&\sup_{D_\rho(r,s)}\sum_{k,l,\alpha,\beta}|(FG)_{kl\alpha\beta}|_{\Cal O}|I^{l}||z^{\alpha}| |\bar z^{\beta}|e^{|k||{\rm Im}\theta|}\\ &\le&\sup_{D_\rho(r,s)} \sum_{k,l,\alpha,\beta}\sum_{k',l',\alpha',\beta'}|F_{k-k',l-l',\alpha-\alpha',\beta-\beta' }G_{k'l'\alpha'\beta'}|_{\Cal O}|I^{l}||z^{\alpha}| |\bar z^{\beta}|e^{|k||{\rm Im}\theta|}\\ &\le&\|F\|_{ D_\rho(r,s),\Cal O }\|G\|_{ D_\rho(r,s),\Cal O }, \end{eqnarray*} and the proof is finished.\qed \begin{Lemma}\label{Lem7.2} (Generalized Cauchy inequalities) $$ \|F_{\theta}\|_{D_\rho(r-\sigma,s),\Cal O}\le \frac{c}{\sigma}\|F\|_{ D_\rho(r,s),\Cal O },$$ $$ \|F_{I}\|_{D_\rho(r,\frac 12 s),\Cal O}\le \frac {c}{s^2}\|F\|_{ D_\rho(r,s),\Cal O },$$ and $$ \|F_{z}\|_{D_{\rho}(r,\frac 12 s),\Cal O}\le \frac{c}{s}\|F\|_{ D_\rho(r,s),\Cal O },$$ $$ \|F_{\bar z}\|_{D_{\rho}(r,\frac 12 s),\Cal O}\le \frac{c}{s}\|F\|_{ D_\rho(r,s),\Cal O }.$$ \end{Lemma} \proof We only prove the third inequality; the others can be proved similarly. Let $w\neq 0$; then $f(t)=F(z+tw)$ is an analytic map from the complex disc $|t|<\frac{s}{\|w\|_\rho}$ in $\C$ into $D_\rho(r,s)$. Hence $$\|f'(0)\|_{D_\rho(r,\frac 12 s),\Cal O}=\| F_zw\|_{D_\rho(r,\frac 12 s),\Cal O}\leq \frac{c}{s}\|F\|_{ D_\rho(r,s),\Cal O }\cdot \|w\|_\rho,$$ by the usual Cauchy inequality. 
Since $w\neq 0$, $$\frac{\|F_zw\|_{D_\rho(r,\frac 12 s),\Cal O}}{\|w\|_\rho}\leq \frac{c}{s}\|F\|_{ D_\rho(r,s),\Cal O },$$ and thus $$\|F_{z}\|_{{D_\rho(r,\frac 12 s),\Cal O}}=\sup_{w\neq 0}\frac{\|F_zw\|_{D_\rho(r,\frac 12 s),\Cal O}}{\|w\|_\rho}\leq \frac{c}{s}\|F\|_{ D_\rho(r,s),\Cal O }.$$ \qed \indent Let $\{\cdot,\cdot\}$ denote the Poisson bracket of smooth functions, i.e., \[ \{F,G\}=\la\frac{\partial F}{\partial I}, \frac{\partial G}{\partial \theta}\ra-\la \frac{\partial F}{ \partial \theta},\frac{\partial G}{\partial I}\ra+{\rm i}( \langle\frac{\partial F}{\partial z},\frac{\partial G} {\partial {\bar z}}\rangle-\langle \frac{\partial F}{\partial {\bar z}},\frac{\partial G} {\partial { z}}\rangle).\] Then we have the following lemma. \begin{Lemma}\label{Lem7.3} If $$ \|X_F\|_{ D_\rho(r,s),\Cal O }< \varepsilon',\ \|X_G\|_{ D_\rho(r,s),\Cal O }< \varepsilon'', $$ then $$ \|X_{\{F,G\}}\|_{D_\rho(r-\sigma,\eta s),\Cal O}<c\sigma^{-1}\eta^{-2}\varepsilon'\varepsilon'',\ \eta\ll 1.$$ In particular, if $\eta\sim\varepsilon^{\frac 13}$, $\varepsilon', \varepsilon''\sim \varepsilon$, we have $\|X_{\{F,G\}}\|_{D_\rho(r-\sigma,\eta s),\Cal O}\sim \varepsilon^{\frac {4}{3}}$. \end{Lemma} \proof By Lemma \ref{Lem7.1} and Lemma \ref{Lem7.2}, \begin{eqnarray*} \|\frac{\partial^2F}{\partial I\partial I}\frac{\partial G}{\partial \theta}\|_{D_\rho(r-\sigma,\frac{1}{2}s)}&<& c\sigma^{-1}s^{-2}\|\frac{\partial F}{\partial I}\|_{D_\rho(r,s)}\cdot\|\frac{\partial G}{\partial \theta}\|_{D_\rho(r,s)},\\ \|\frac{\partial^2F}{\partial I\partial \theta}\frac{\partial G}{\partial \theta}\|_{D_\rho(r-\sigma,\frac{1}{2}s)}&<& c\sigma^{-1}\|\frac{\partial F}{\partial I}\|_{D_\rho(r,s)}\cdot\|\frac{\partial G}{\partial \theta}\|_{D_\rho(r,s)},\\ \|\frac{\partial^2F}{\partial I\partial z}\frac{\partial G}{\partial \theta}\|_{D_\rho(r-\sigma,\frac{1}{2}s)}&<& c\sigma^{-1}s^{-1}\|\frac{\partial F}{\partial I}\|_{D_\rho(r,s)}\cdot\|\frac{\partial G}{\partial \theta}\|_{D_\rho(r,s)},\\ \|\frac{\partial^2F}{\partial I\partial \bar z}\frac{\partial G}{\partial \theta}\|_{D_\rho(r-\sigma,\frac{1}{2}s)}&<& c\sigma^{-1}s^{-1}\|\frac{\partial F}{\partial I}\|_{D_\rho(r,s)}\cdot\|\frac{\partial G}{\partial \theta}\|_{D_\rho(r,s)},\\ \|\frac{\partial^2F}{\partial z\partial I}\frac{\partial G}{\partial \bar z}\|_{D_\rho(r-\sigma,\frac{1}{2}s)}&<& c\sigma^{-1}s^{-2}\|\frac{\partial F}{\partial z}\|_{D_\rho(r,s)}\cdot\|\frac{\partial G}{\partial \bar z}\|_{D_\rho(r,s)},\\ \|\frac{\partial^2F}{\partial z\partial \theta}\frac{\partial G}{\partial \bar z}\|_{D_\rho(r-\sigma,\frac{1}{2}s)}&<& c\sigma^{-1}\|\frac{\partial F}{\partial z}\|_{D_\rho(r,s)}\cdot\|\frac{\partial G}{\partial \bar z}\|_{D_\rho(r,s)},\\ \|\frac{\partial^2F}{\partial z\partial z}\frac{\partial G}{\partial \bar z}\|_{D_\rho(r-\sigma,\frac{1}{2}s)}&<& c\sigma^{-1}s^{-1}\|\frac{\partial F}{\partial z}\|_{D_\rho(r,s)}\cdot\|\frac{\partial G}{\partial \bar z}\|_{D_\rho(r,s)},\\ \|\frac{\partial^2F}{\partial z\partial \bar z}\frac{\partial G}{\partial \bar z}\|_{D_\rho(r-\sigma,\frac{1}{2}s)}&<& c\sigma^{-1}s^{-1}\|\frac{\partial F}{\partial z}\|_{D_\rho(r,s)}\cdot\|\frac{\partial G}{\partial \bar z}\|_{D_\rho(r,s)}. \end{eqnarray*} The other cases can be obtained analogously, hence $$ \|X_{\{F,G\}}\|_{D_\rho(r-\sigma,\eta s),\Cal O}<c\sigma^{-1}\eta^{-2}\varepsilon'\varepsilon''. $$\qed
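\noindent{\bf Remark.} The scaling claimed in the last statement of Lemma \ref{Lem7.3} follows by direct substitution, treating $\sigma$ as a quantity of order one: $$c\sigma^{-1}\eta^{-2}\varepsilon'\varepsilon''\sim c\,\sigma^{-1}\bigl(\varepsilon^{\frac 13}\bigr)^{-2}\varepsilon\cdot\varepsilon=c\,\sigma^{-1}\varepsilon^{2-\frac 23}=c\,\sigma^{-1}\varepsilon^{\frac 43}.$$ This exponent $\frac 43$ is exactly the one appearing in the definition of $\varepsilon_{\nu+1}$ in the iteration lemma.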
\section{Introduction} The Minimal Supersymmetric Standard Model (MSSM)\cite{rf:SUSY} is one of the most promising candidates for a model beyond the Standard Model (SM). It predicts the existence of superpartners of SM particles below a few TeV to remove the quadratic divergences which appear in radiative corrections to the SM Higgs sector; thus the model is free from the so--called hierarchy problem of GUT models. It should be noted that in the MSSM the gauge couplings unify very precisely at a high energy scale, as SUSY SU(5) GUTs predict. Supersymmetry is not an exact symmetry of the model; instead, it should be somehow broken to give the mass differences between particles and their superpartners. Various attempts have been made to explain the existence of the soft SUSY breaking\cite{rf:SUGRA,rf:DNNS}. Those different models of SUSY breaking have different predictions for the relations between the soft breaking mass parameters at some high scale $M_{SB}$: $m_i$ (scalar masses), $M_i$ (gaugino masses), $A_i$ (trilinear couplings) and $B$ (Higgsino soft breaking mass parameter). Evolving the mass parameters by the RGE of the model from $M_{SB}$ to $M_{\rm weak}$, one gets the prediction of the mass spectrum of superpartners at the weak scale. Therefore, the precise measurement of the masses and interactions of superpartners will be one of the most important physics targets once they are discovered. This might enable us to discriminate between the models of even higher energy scale responsible for the SUSY breaking if the experiment reaches a certain sensitivity. Notice that claiming a new particle as a superpartner also requires careful investigation of the interactions of the particle, which should agree with the expectations of supersymmetry. Proposed Linear Colliders at $\sqrt{s}=500$ GeV are expected to have high luminosity---${\cal L}=30 fb^{-1} $/year\cite{rf:JLC1,rf:LC}. The background from $W$ boson production can be suppressed drastically thanks to the highly polarized electron beam; current technology has already achieved $P_{e^-}=80\%$ at the SLC, and $P_{e^-} =95\%$ is proposed for future LC's. In this clean environment, precision studies of masses and interactions become possible. Studies of accelerator technology for the future LC's are ongoing at several institutes such as SLAC, KEK, DESY and CERN\cite{rf:LC}. The potential impact of an LC on supersymmetric models has already been pointed out by several groups\cite{rf:TSUKA,rf:FLC}. For example, the predictions of the Minimal Supergravity (MSUGRA) model for $M_1/M_2$ and $m_{\tilde{e}}/m_{\tilde{\mu}}$ have been shown to be testable up to ${\cal O}(1\%\sim 10\%)$. The ino-lepton-slepton coupling can also be measured to check the prediction of supersymmetry. Those analyses have been done for the $\tilde{e},\tilde{\mu}$ and $\tilde{\chi}^+$ pair production modes. In the following, I will talk about our MC study of the production and decay of $\tilde{\tau}$ at a future LC\cite{rf:NO,rf:WIP}. The decay of $\tilde{\tau}$ involves a $\tau$ lepton, which decays further in the detector. This makes the analysis rather complicated, and therefore an MC study of the process had not been done previously. However, the physics coming out of the study turns out to be fruitful, due to the unique nature of the $\tilde{\tau}$ interaction from the Planck scale down to the weak scale. In Sec. 2.1, we briefly describe the reduction of $m_{\tilde{\tau}_{L,R}}$ by the GUT scale Yukawa interaction in the MSUGRA-GUT model, which has been pointed out recently by Barbieri and Hall\cite{rf:BH}. 
$\tilde{\tau}$ would be found earlier than the other SUSY particles in the model, as the $\tilde{\tau}$ is expected to be much lighter than the other sleptons. It is also stressed that the measurement of $m_{\tilde{\tau}_{L,R}}$ provides a clear cut to distinguish the MSUGRA-GUT model from other models. Below the GUT scale, the interaction of the $\tau$ supermultiplet still differs from that of the other sleptons, as it has a non-negligible Yukawa coupling $Y_{\tau}\propto m_{\tau}/\cos\beta$; here $\tan\beta$ is the ratio of the vacuum expectation values of the two neutral Higgs bosons in the MSSM. The Yukawa coupling is enhanced linearly, $\propto\tan\beta$, for large values of $\tan\beta$. A consequence of the large Yukawa coupling is the existence of left-right mixing of $\tilde{\tau}$; the lighter mass eigenstate of $\tilde{\tau}$ would be lighter than the other sleptons even if the mass parameters of $\tilde{\tau}$ were equal to those of $\tilde{e}$ and $\tilde{\mu}$. The feasibility of the determination of the mass and mixing angle at a future LC is checked by MC simulation in Sec. 2.2. The same $Y_{\tau}$ appears as a non-negligible $\tau\tilde{\tau} \tilde{H}^0_1$ coupling, where $\tilde{H}^0_1$ is a neutral higgsino. The ratio of the couplings involving the higgsino component and the gaugino component of the neutralino $\chi^0$, where the neutralino is a mixture of higgsinos and gauginos, can be determined through the measurement of the polarization of the $\tau$ lepton ($P_{\tau}$) from $\tilde{\tau}$ decay into a neutralino and a $\tau$. The strong sensitivity of $P_{\tau}$ to $\tan\beta$ helps to determine $\tan\beta$ when combined with the information from the other modes. The performance of an LC experiment in the determination of $P_{\tau}$ is found in Sec. 2.3. Sec. 3 is devoted to conclusions and discussion. \section{Study of Scalar Tau Lepton at LC} \subsection{ Mass of Scalar Tau and Models of Supersymmetry breaking} $\tilde{\tau}_{L(R)}$ is the superpartner of $\tau_{L(R)}$, the third generation lepton. This makes $\tilde{\tau}$ a unique object in the context of the SUGRA-GUT model\cite{rf:SUGRA}. In the supergravity model, the SUSY breaking in the hidden sector gives the soft breaking mass through gravitational interaction at the Planck scale $M_{pl}$. The resulting scalar mass is universal at $M_{pl}$, leading to approximate universality of $m_{l_{L(R)}}$ if their interactions are equal from $M_{pl}$ to $M_{\rm weak}$. However, in simple grand unified models such as SO(10) or SU(5), the $\tau$ superfield is in the same multiplet as the top quark superfield above the GUT scale $M_{GUT}$. Thus from $M_{pl}$ to $M_{GUT}$, the $\tau$ supermultiplet obeys the same Yukawa interaction as that of the top quark. The large top Yukawa interaction is anticipated from the top mass measurement by CDF or D0\cite{rf:TOP}, and this reduces the mass of $\tilde{\tau}_R$ (or $\tilde{\tau}_{L(R)}$) at $M_{GUT}$ compared to its value at $M_{pl}$ for the SU(5) (or SO(10)) GUT model. This is pointed out in Ref.\citen{rf:BH}, where it is claimed that $m_{\tilde{\tau}}$ can be as small as half of $m_{\tilde{e}}$. The $\tilde{\tau}$ may even be the second lightest SUSY particle in this model. I should stress that there exists a model which predicts a totally different mass spectrum. Dine-Nelson-Nir-Shirman\cite{rf:DNNS} recently constructed relatively simple models which break SUSY dynamically at an intermediate scale [$\sim10^{6\sim 7}$ GeV] (DNNS model). The breaking is then transmitted to our sector by a $U(1)$ gauge interaction, which is called the messenger sector. 
The scale where the gauge interaction breaks ($M_{m}$) is $O(10^4)$ GeV. Due to the nature of the gauge interaction, the resulting scalar masses of the sleptons are common for ($l_L, \nu_l$) and for $l_R$ at $M_{m}$, respectively. Unlike in the SUGRA-GUT model, they remain roughly equal at $M_{weak}$, as $M_{m}$ is considerably close to $M_{weak}$ and there is no strong Yukawa interaction involved between the two scales. Therefore, the determination of $m_{\tilde{\tau}_{L,R}}$ would give us a good handle to decide whether the scale of SUSY breaking is below or above the GUT scale. \subsection{ Determination of $\tilde{\tau}$ mass matrix at LC} To determine $m_{\tilde{\tau}_{L,R}}$, one has to know the $\tilde{\tau}$ interactions. This is because neither $\tilde{\tau}_L$ nor $\tilde{\tau}_R$ is a mass eigenstate; they generally mix to make the mass eigenstates $\tilde{\tau}_{1(2)}$. The mass matrix is expressed as \begin{subequations}\label{eq:1} \begin{equation} {\cal M}^2_{\tilde{\tau}}=\left(\begin{array}{cc}m_{LL}^2 & m_{LR}^2\\ m_{LR}^2& m_{RR}^2\end{array}\right) =\left( \begin{array}{cc} m_L^2 + m_{\tau}^2 + 0.27 D & -m_{\tau}(A_{\tau} + \mu \tan\beta)\\ -m_{\tau}(A_\tau + \mu \tan\beta)& m_R^2 +m_{\tau}^2 + 0.23D \end{array}\right),\\ \label{eq:1a} \end{equation} and the mass eigenstates are expressed as \begin{equation} \left(\begin{array}{c} \sti\\\stii\end{array}\right) =\left(\begin{array}{cc}\cos\theta_{\tau} &\sin\theta_{\tau}\\ -\sin\theta_{\tau}&\cos\theta_{\tau}\end{array}\right) \left(\begin{array}{c} \tilde{\tau}_L\\ \tilde{\tau}_R\end{array}\right). \end{equation} \end{subequations} Here $\mu$ is the Higgsino mass parameter, $\tan\beta\equiv\langle H_1^0\rangle /\langle H_{2}^0\rangle$ is the ratio of the vacuum expectation values, $A_{\tau}$ is the coefficient of the soft breaking term proportional to $\tilde{\tau}_R$-$\tilde{\tau}_L$-$H_1$, and $D$ corresponds to the $D$-term. The mixing makes the lighter mass eigenvalue $m_{\tilde{\tau}_1}$ smaller than the diagonal mass terms; thus even in a model with a common soft breaking scalar mass, $m_{\sti}$ may be lighter than $m_{\tilde{e}}$\cite{rf:DN}. At the same time, one has to know $\theta_{\tau}$ together with $m_{\tilde{\tau}_1}$ and $m_{\tilde{\tau}_2}$ to determine $m_{\tilde{\tau}_L}$ and $\mstr$. Notice that it is interesting to observe a non-zero $\theta_{\tau}$ (mod $\pi$), as this proves the existence of the off-diagonal element of the $\sti$ mass matrix; it depends on the term proportional to $\mu\cdot\tan\beta$, which is required by supersymmetry, while $A_\tau$ is the coefficient of the trilinear soft breaking term. Both terms are strongly motivated by the supersymmetric theory. If the electron beam is polarized, the mixing angle $\theta_{\tau}$ will be determined from the measurement of the production cross section $e^+e^-\rightarrow\sti^+\sti^-$\cite{rf:NO}. This can be easily explained by taking the limit where $m_Z\ll\sqrt{s}$ and $P_e=1$. In this limit, the production of $\tilde{\tau}$ proceeds solely through the $U(1)$ gauge interaction that couples to hypercharge. The hypercharge is $-1/2$ ($-1$) for $\tilde{\tau}_{L(R)}$, thus $\sigma(\tilde{\tau}_R)\sim 4 \sigma(\tilde{\tau}_L)$. The cross section also depends on $m_{\tilde{\tau}_1}$; however, this would be extracted from the energy distribution of the $\tilde{\tau}$ decay products, as we will see later. To show the feasibility of the measurement of $m_{\tilde{\tau}_1}$ and $\theta_{\tau}$ at a future $e^+e^-$ collider, we performed an MC simulation for the JLC1 detector \cite{rf:JLC1}. 
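Before turning to the simulation, a minimal numerical illustration of the diagonalization of Eq. (\ref{eq:1}) may be helpful. The parameter values below are chosen purely for illustration (they are not those of the MC study), and the $D$-term is evaluated with the convention $D=-m_Z^2\cos 2\beta$, which is an assumption of this sketch rather than a statement fixed above.
\begin{verbatim}
import numpy as np

# Illustrative inputs only -- not the parameters of the MC study
m_tau = 1.777                          # GeV
m_L, m_R = 155.0, 150.0                # soft masses (GeV)
A_tau, mu, tan_beta = 0.0, 300.0, 10.0
m_Z = 91.19
D = -m_Z**2 * np.cos(2.0 * np.arctan(tan_beta))  # assumed D-term convention

m_LR2 = -m_tau * (A_tau + mu * tan_beta)         # off-diagonal element of Eq. (1)
M2 = np.array([[m_L**2 + m_tau**2 + 0.27 * D, m_LR2],
               [m_LR2, m_R**2 + m_tau**2 + 0.23 * D]])

vals, vecs = np.linalg.eigh(M2)        # eigenvalues in ascending order
m_stau1, m_stau2 = np.sqrt(vals)       # mass eigenvalues, m_stau1 < m_stau2
theta_tau = np.arctan2(vecs[1, 0], vecs[0, 0])  # mixing angle, up to sign convention
print(m_stau1, m_stau2, np.degrees(theta_tau))
\end{verbatim}
With $A_\tau=0$ the off-diagonal entry is the pure $\mu\tan\beta$ term, so the extracted $\theta_\tau$ directly reflects the supersymmetric contribution discussed above.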
We took $\sqrt{s}=500$ GeV and $P_e=95\%$, and analysed the process where $\sti$ decays into $\chi_1^0\tau$ exclusively; here $\chi_1^0$ is the lightest neutralino, which we assumed to be the lightest SUSY particle and stable, and we denote it by $\chi$ hereafter. Due to the simple two-body kinematics, the energy distribution of the $\tau$ leptons is flat between $E_{min}$ and $E_{max}$, which contains the information about $m_{\chi}$ and $m_{\tilde{\tau}_1}$. Actually, for the process $e^+e^-\rightarrow \tilde{e}^+ \tilde{e}^-$ and $\tilde{e}\rightarrow \chi e$, the energy distribution of the electrons was used to determine $m_{\tilde{e}}$ and $m_{\chi}$\cite{rf:TSUKA}. However, the $\tau$ lepton decays further into $\pi,\rho,a_1, e$, $\mu$, etc. The decay distribution depends not only on $E_{max(min)}$ but also on the decay mode of the $\tau$ lepton; we reconstructed $\rho$ and $a_1$ whenever possible.\footnote{ For the Monte Carlo simulation, we used TAUOLA ver2.4\cite{rf:TAUORA}. See the next subsection for our cuts to identify $\rho$ and $a_1$.} We require that both of the $\tau$'s decay hadronically as the signal, to avoid the relatively large backgrounds from $eeZ^0$ and $e\nu W$. We also include backgrounds from $W^+W^-$, $Z^0Z^0$, $e^+e^-W^+W^-$ and $\nu\nu Z^0$ productions. Cuts like $E_{vis}>10$ GeV and $\theta_{accop}>30^{\circ}$ are also applied to reduce the backgrounds. The resulting signal of $\tau$ production is characterized as 2 jets of low hadron multiplicity with missing $P_T$. The selected MC samples are then used to `measure' $m_{\chi}$ and $m_{\tilde{\tau}_1}$ by fitting the energy distribution of the MC sample. In figure 1, we show the results of the mass fit for the sample identified as tau leptons decaying into $\rho$. Here we generated 10,000 $\tilde{\tau}$ pairs with mass $m_{\tilde{\tau}_1}=150$ GeV which decayed into $\chi\tau$ with $m_{\chi}=100$ GeV. We also included backgrounds consistent with $\int L=100fb^{-1}$. About 1700 $\rho$ events are obtained for the signal after the cuts, while 93 events remain as background. Contours of constant $\chi^2=-1/2 \log L$ are shown in Fig. 1. We show the result in the $m_{\chi}$-$m_{\sti}$ plane, fixing the other parameters.\footnote{The results are obtained by normalizing the total number of events of the fitting curve of the signal and background to the event number obtained by MC. $P_{\tau}=1$ both for the MC and the theoretical distribution. The determination of $P_{\tau}$ is discussed in the next subsection.} $m_{\tilde{\tau}_1}$ is determined with an error of 3.5 GeV. The error of the cross section at the best fit point is 2.5\%. The errors corresponding to 5000 events are shown schematically in Fig. 2 in the $m_{\sti}$-$\sin\theta_{\tau}$ plane (scaled statistically), where the contours of constant $\sigma_{\sti}$ ($=50fb$ dotted line, $50\pm 1.25 fb$ solid lines) are shown simultaneously. $\delta\theta_{\tau}=\pm 4.5^{\circ}$ is read off from the figure. We showed that the measurement of $\sti$ production and decay can determine two of the three parameters of the $\tilde{\tau}$ mass matrix. Discovery of $\stii$ would fix the remaining degree of freedom. \subsection{ The Yukawa sector of MSSM and $P_{\tau}$} The study of $\tilde{\tau}$ may play an important role in exploring the Yukawa sector of the MSSM\cite{rf:NO}. Let us consider the decay $\sti\rightarrow \tau\chi_1^0$ again. The $\chi_1^0$ is a mixture of gauginos ($\tilde{B},\tilde{W}$) and Higgsinos ($\tilde{H}_{1(2)}$). 
The interaction involving the gaugino component (the $\tilde{B}(\tilde{W})$-$\tau$-$\tilde{\tau}$ coupling) is proportional to the gauge couplings, and the interaction involving the Higgsino component (the $\tilde{H}_1$-$\tau$-$\tilde{\tau}$ coupling) is proportional to the $\tau$ Yukawa coupling $Y_{\tau}\sim m_{\tau}/\cos\beta$. The latter may not be too small compared to the former when $\tan\beta$ is large or $\chi_1^0$ has a large higgsino component (see Fig. 3). Those two interactions are different not only in the couplings, but also in the chirality of the (s)fermion. The (super-) gauge interaction is chirality conserving, while the (super-) Yukawa interaction flips it. (In Fig. 3, the arrows next to the $\tilde{\tau}$ and $\tau$ lines show the direction of chirality.) Thus the polarization of the $\tau$ lepton ($P_{\tau}$) from $\sti$ decays depends on the ratio of the chirality flipping and chirality conserving interactions. $P_{\tau}(\sti\rightarrow\tau\chi^0_1)$ depends strongly on $\tan\beta$ compared to other quantities. To demonstrate this, we show various quantities in Fig. 4 a)-d), fixing $m_{\chi^0_1}=100$ GeV and varying $M_1$ (the $\tilde{B}$ mass parameter) and $\tan\beta$. Fig. 4 a)-c) show little dependence on $\tan\beta$. Especially, the pair production of $\tilde{e}_R$ can be used as the mode to determine $M_1$, as in Ref.\citen{rf:TSUKA}. On the other hand, $P(\tilde{\tau}_R\rightarrow\tau\chi^0_1)$ depends on $\tan\beta$ sensitively if $M_1$ is sufficiently larger than $m_{\chi^0_1}$ or $\tan\beta$ is large. This is because the chirality flipping higgsino interaction becomes comparable to the chirality conserving gaugino interaction either if the lightest neutralino is dominantly Higgsino or if $Y_{\tau}$ is large.\footnote{ The neutralino sector is parametrized by $M_{1(2)}$, $\mu$, $\tan\beta$. When $\mu\ll(\gg )M_1$, $\chi^0_1$ is dominantly higgsino (gaugino) and $m_\chi\sim\mu (M_1)$. In Fig. 4, $\chi^0_1$ becomes dominantly higgsino as $M_1$ becomes larger. } In such a situation, one can determine $\tan\beta$ by using the value of $M_1$ obtained from the other production processes.\footnote{One can also determine $\tan\beta$ from the forward-backward asymmetry of the chargino production cross section. It is sensitive to $\tan\beta$ when $ M_2\sim \mu$ \cite{rf:FLEP,rf:FLC}.} The measurement of $P_{\tau}$ would be carried out through the energy distribution of the decay products of the polarized $\tau$ lepton. The $\tau$ lepton decays into $A\nu_{\tau}$ where $A=e, \mu, \pi, \rho, a_1$, etc. For each decay channel, the momentum distribution of the hadronic decay products ($\pi^{\pm}$, $\rho^{\pm}\rightarrow\pi^{\pm}\pi^0$, ...) differs significantly depending on $P_{\tau}$. If the $\tau$ lepton is relativistic, $P_{\tau}$ can be determined from the energy distribution of the decay products\cite{rf:HAGI}. To be more specific, let us consider the decay of a polarized $\tau$ lepton into $\rho$. The $\rho$ from a right- (left-) handed $\tau$ lepton is longitudinally (transversally) polarized. The $\rho$ meson then decays into $\pi^{\pm}\pi^0 \rightarrow \pi^{\pm}2\gamma$. The energy fraction $ E_{\pi^{\pm}} /E_{\rho}$, where $E_{\rho}$ is the total energy of the jet to which the $\pi^{\pm}$ belongs, depends on the $\rho$ polarization in a very simple form in the collinear limit ($E_{\tau}\gg m_{\tau}$): \begin{equation}\label{eq:2} \frac{d\Gamma(\rho_T\rightarrow 2\pi)}{dz} \sim 2z(1-z)-\frac{2m_{\pi}^2}{m_{\rho}^2},\ \ \frac{d\Gamma(\rho_L\rightarrow 2\pi)}{dz} \sim (2z-1)^2, \end{equation} where $z=E_{\pi}/E_{\rho}$. 
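As a toy illustration of how Eq. (\ref{eq:2}) translates into a $P_{\tau}$ measurement, the sketch below fits $P_{\tau}$ to a sample of $z$ values by maximum likelihood, under the simplification stated above that a right- (left-) handed $\tau$ yields a purely longitudinal (transverse) $\rho$, so that the longitudinal fraction is $(1+P_{\tau})/2$. The densities are normalized numerically; this is a sketch, not the actual fitting procedure of our analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

M_PI, M_RHO = 0.1396, 0.7755            # GeV
z_grid = np.linspace(0.0, 1.0, 2001)    # z = E_pi / E_rho

def normalized(shape):
    w = np.clip(shape, 0.0, None)       # transverse form dips below 0 at the edges
    return w / np.trapz(w, z_grid)

f_L = normalized((2.0 * z_grid - 1.0) ** 2)          # longitudinal rho, Eq. (2)
f_T = normalized(2.0 * z_grid * (1.0 - z_grid)
                 - 2.0 * M_PI**2 / M_RHO**2)         # transverse rho, Eq. (2)

def nll(p_tau, z_data):
    # mixture: fraction (1+P)/2 longitudinal, (1-P)/2 transverse
    f = 0.5 * (1.0 + p_tau) * f_L + 0.5 * (1.0 - p_tau) * f_T
    return -np.sum(np.log(np.clip(np.interp(z_data, z_grid, f), 1e-12, None)))

def fit_p_tau(z_data):
    res = minimize_scalar(lambda p: nll(p, z_data),
                          bounds=(-1.0, 1.0), method="bounded")
    return res.x
\end{verbatim}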
Notice that the decay of the $\tau$ lepton into the heavier meson $a_1$ may be misidentified as a decay into $\rho$ if the energy and momentum resolution of the detector is poor. The decay $\rho\rightarrow\pi^+ 2\gamma$ will be identified as a jet of a $\pi^+$ and one or two photon candidates. On the other hand, the decay $a_1^{\pm}\rightarrow \pi^{\pm}\pi^0\pi^0\rightarrow \pi^{\pm}4 \gamma$ is also occasionally misidentified as $\pi^{\pm} 2\gamma$ or $\pi^{\pm}\gamma$, which contaminates the $\rho$ signal. We applied the cuts $m_j<0.95$ GeV for events with one photon candidate, and $m_j<0.95$ GeV and $m_{2\gamma}<0.25$ GeV for events with two photon candidates, to reduce the contamination from $a_1$ decay; after the cuts the contamination is less than a few \% for $m_{\sti}=150$ GeV and $m_{\chi}=100$ GeV. However, the purity of the sample crucially depends on the assumed performance of the JLC 1 detector\cite{rf:JLC1}. If this is not achieved, one may have to use the decay modes $\tau^{\pm}\rightarrow \pi^{\pm}\nu$ or $a_1^{\pm}\rightarrow \pi^{\pm}\pi^{\pm}\pi^{\mp}$. The branching ratios into those modes are small compared to the one into $\rho$. Fig. 5 shows the $z$ distribution of MC events and the fit for the same parameters as in Fig. 1. The best fit value of $P_{\tau}$ is $0.95 (-0.92)$ for $P_{\tau}=1(-1)$, respectively, and the estimated error is $\pm 0.07$.\footnote{For the analysis we took the events with $0.08<z<0.92$ to avoid detector effects. We used the events with $E_j>20 $ GeV, as the events below the cut do not have sensitivity to $P_{\tau}$.} \section{Conclusion} I presented in this talk our study of the production and decay of the lighter scalar tau lepton $\sti$ at a future LC. The study of $\tilde{\tau}$ is important because $\tilde{\tau}$ may be lighter than the other sleptons and thus would be found earlier. A light $\sti$ is well motivated in the MSUGRA-GUT model, and it is not excluded in other models if there is large $\tilde{\tau}_L$-$\tilde{\tau}_R$ mixing. We discussed that the mass matrix of $\tilde{\tau}$ provides a clue to distinguish between the SUGRA-GUT model and the DNNS model, and that the polarization $P_{\tau}$ of the $\tau$ lepton from decaying $\sti$ is sensitive to the value of $\tan\beta$ through its dependence on the $\tau$ Yukawa coupling; $\tan\beta$ is one of the important parameters that determine the Higgs sector of the MSSM. The feasibility of the study of those parameters at the LC has been checked by MC. The errors of $m_{\tilde{\tau}_1}$, $\sigma(\sti\sti)$ (which are in turn used to determine the mass matrix of $\sti$) and $P_{\tau}$ are 3.5 GeV, 2.5 \% and 0.07, respectively, for the representative parameters we have chosen. We have not included several potentially important backgrounds such as $\gamma\gamma\rightarrow \tau\tau$ and the production and decay of heavier superpartners. However, we believe the final results will not be too different from the ones we have presented here. In the near future, LEPII and the LHC are scheduled to operate at $\sqrt{s}=180$ GeV and $\sqrt{s}=14$ TeV, respectively. However, their ability to determine the soft breaking mass parameters is rather limited. For LEPII, the integrated luminosity is ${\cal O}(100 pb^{-1})$, while the production cross sections are typically ${\cal O}$(10pb) for charginos and $0.3$ pb for $\tilde{\mu}$ with $m_{\tilde{\mu}}= 60$ GeV. The production cross section of a slepton is too small to go beyond discovery physics. 
Feng and Strassler showed that a precise study of chargino interactions is possible\cite{rf:FLEP}; however, one still has to fight the enormous background coming from $W^+W^-$ production. SUSY studies at the LHC (Large Hadron Collider) suffer from the high QCD background, although strongly interacting superpartners will be copiously produced there. The expected signals of SUSY particle production are also very complicated, as the decay patterns of squarks and gluinos change drastically depending on the mass spectrum of the SUSY particles. Implications of the MSUGRA model at LEPII and the LHC have been discussed and studied in quite a few papers. Those are mostly about the reduction of the number of free parameters of the model (which tightens the phenomenological constraints), ``theoretical upper bounds'' on sparticle masses, and (therefore) when and how they would be discovered. However, it is becoming recognized that we can go beyond that if a next generation LC is actually built. Namely, the experiment at the LC will make it possible to measure the parameters of the MSSM once a superpartner is discovered, and this enables us to check the predictions of the models of SUSY breaking. I argued in this talk that the discovery of $\tilde{\tau}$ at the LC would provide us with a clear cut to understand the origin of SUSY breaking, and I hope I have convinced the audience that the LC is necessary to achieve that. \section*{Acknowledgements} We would like to thank Y. Okada and B. K. Bullock for a careful reading of the manuscript.
\section{Introduction} Social media such as Twitter and Wikipedia contain a considerable amount of location-related text data. In this paper, we develop a model that learns to predict spatial probabilities from free text. Given a query sentence, the model outputs a discrete probability distribution over the surface of the earth, by assigning each geographical cell a likelihood that the input text relates to a location inside said cell. The resulting model is capable of localizing a large variety of sentences. Viewing the task as a hierarchical classification problem allows the model to express its uncertainty in the location associated with the text. The resulting model can be used for resolving the ambiguity of location references in text. This capability is central to the success of finding an exact location from free text. For example, \textit{Paris} can refer to more than one possible location. In a context such as: \textit{The International Olympic Committee confirmed the city chosen to host the Olympic Games in 2024. The Games will be held in Paris}, geocoding models like the one proposed in this paper can help in the resolution of the correct location. This work introduces the following contributions: \begin{itemize} \item Synthesizing a dataset for supervised learning, including adaptive cell partitioning. \item Formulating the geocoding problem as a sequence-to-sequence problem. \item Training an end-to-end geocoding model using said formulation. \item Publicly releasing the curated dataset, a REST-based application and the T5 geocoding model. \end{itemize} \section{Related works} Geolocation prediction from free text has been extensively addressed in the literature. The authors of \cite{kulkarni2020spatial} presented a multi-level geocoding model that learns to associate texts with geographic locations. The downstream task was formulated as a multi-level classification problem, based on multi-level S2 \cite{s2} cells as the output space of a multi-headed model. In \cite{weyand2016planet}, the surface of the earth was subdivided into thousands of multi-scale geographic cells, and the authors trained a deep neural network using millions of geo-tagged images. We adopt the adaptive partition approach introduced in that paper. The authors of \cite{kinsella2011m} created language models of locations using coordinates extracted from geo-tagged Twitter data. The locations were modeled at varying levels of granularity, from the zip code to the country level. In \cite{radford2021regressing}, an end-to-end probabilistic model for geocoding text data was presented. In addition, the authors collected a novel data set for evaluating the performance of geocoding systems. The model-based solution, called ELECTRo-map, was compared to the open source system available at the time of publication for geocoding texts for event data. An algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach was provided in \cite{hays2008im2gps}. The aforementioned approaches to geocoding can be viewed as sequential tasks of token classification (which is assumed to be solved perfectly) and geocoding. Unfortunately, this approach suffers from several drawbacks. First, it cannot correctly handle inputs that contain several location descriptions (for example: "We live in country X city Y"). Second, it lacks the context of the free text (for example: "Country south of France").
\section{Text Geolocation with Transformer} We pose the task of text geolocation as a sequence-to-sequence problem. The model translates input text into a hierarchical sequence of geographic cells, which represents a probability distribution over the surface of the earth. The model output is thus a sequence encoding of the hierarchical cell representation of the surface of the earth. \subsection{Adaptive Cell Partitioning} We use Google's open source S2 Geometry library \cite{s2} to partition the surface of the earth into non-overlapping cells that define the target of our model. The S2 library defines a hierarchical partitioning of the surface of a sphere by projecting the surfaces of an enclosing cube onto it. The six sides of the cube are subdivided hierarchically by six quad-trees. A node in a quad-tree defines a region on the sphere called an S2 cell. \figref{fig:s2geometry} illustrates the S2 cells at several resolutions. Tab. \ref{s2geo table} shows the resolution and number of cells for each S2 Geometry level. \begin{figure} [!ht] \centering \includegraphics[width=12cm]{s2cells.png} \caption{S2 Geometry hierarchical partitioning of the earth.} \label{fig:s2geometry} \end{figure} \begin{table}[!ht] \caption{S2 Geometry levels.}\label{s2geo table} \centering \begin{tabular}{llll} \hline \hline Level& Average area& Number of cells& \\\hline 00& 85M \(km^2\)& 6\\ 01& 21M \(km^2\)& 24\\ 02& 5M \(km^2\)& 96\\ 03& 1.3M \(km^2\)& 384\\ 04& 330K \(km^2\)& 1536\\ 05& 83K \(km^2\)& 6K\\ 06& 20K \(km^2\)& 24K\\ 07& 5K \(km^2\)& 98K\\ 08& 1297 \(km^2\)& 393K\\ 09& 324 \(km^2\)& 1573K\\ 10& 81 \(km^2\)& 6M\\ ..&..&..\\ 29& 2.95 \(cm^2\)& \(1729*10^{15}\)\\ 30& 0.74 \(cm^2\)& \(7*10^{18}\)\\ \hline \hline \end{tabular} \end{table} \begin{figure}[!htb] \centering \includegraphics[width=13cm]{bokeh_plot.png} \caption{Data points distribution.} \label{fig:dataset} \end{figure} There are several reasons for choosing this subdivision scheme over a simple subdivision of latitude/longitude coordinates. Firstly, latitude/longitude regions get elongated near the poles while S2 cells keep a close-to-quadratic shape; secondly, S2 cells are mostly uniform in size (the ratio between the largest and smallest S2 cell is 2.08). A naive approach to defining a tiling of the earth would be to use all S2 cells at a certain fixed depth in the hierarchy, resulting in a set of roughly equally sized cells. However, this would produce a very imbalanced class distribution, since the geographical distribution of the wikidata items \cite{vrandevcic} adopted in this paper has strong peaks in densely populated areas; see \figref{fig:dataset}. We therefore perform an adaptive subdivision based on the dataset item locations: starting at the roots, we recursively descend each quad-tree and subdivide cells until no cell contains more than a certain fixed number (the max cell samples parameter) of data points. With this approach, sparsely populated areas are covered by larger cells and densely populated areas are covered by finer cells. This adaptive tiling has several advantages over a uniform one: (i) training classes are more balanced and (ii) it makes effective use of the parameter space, because more model capacity is spent on densely populated areas. \figref{fig:adaptive-partition} demonstrates the S2 partitioning for our dataset.
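To make the adaptive subdivision concrete, the following minimal sketch (an illustration, not our production implementation) applies the recursive splitting rule to cell labels encoded as digit strings: one face digit followed by one quad-tree child digit per level, as described in Section~\ref{subsection Data Labeling}. The maximum depth and the \texttt{max\_cell\_samples} threshold are assumed parameters.
\begin{verbatim}
from collections import defaultdict

MAX_LEVEL = 9  # assumed maximum S2 depth of the labels

def adaptive_partition(sample_paths, max_cell_samples):
    """Split cells until none holds more than max_cell_samples samples.

    sample_paths: full-depth cell labels such as "210033112", where the
    first digit is the cube face (0-5) and each subsequent digit is the
    quad-tree child (0-3) at the next level.  Returns the set of
    leaf-cell prefixes that forms the adaptive partition.
    """
    leaves = set()

    def split(prefix, paths):
        # stop when the cell is sparse enough or maximal depth is reached
        if len(paths) <= max_cell_samples or len(prefix) > MAX_LEVEL:
            leaves.add(prefix)
            return
        groups = defaultdict(list)
        for p in paths:
            groups[p[len(prefix)]].append(p)  # next digit = child cell
        for digit, child_paths in groups.items():
            split(prefix + digit, child_paths)

    by_face = defaultdict(list)
    for p in sample_paths:
        by_face[p[0]].append(p)
    for face, paths in by_face.items():
        split(face, paths)
    return leaves
\end{verbatim}
Calling \texttt{adaptive\_partition} with a threshold of a few thousand samples yields coarse cells over oceans and sparsely populated regions and fine cells over cities, which is exactly the class-balance property exploited during training.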
\begin{figure} [!ht] \centering \includegraphics[width=12cm]{adaptive-partition1.png} \caption{S2 Geometry adaptive partitioning of our dataset.} \label{fig:adaptive-partition} \end{figure} \newpage \subsection{Dataset} This section describes the dataset used for training the model and for the evaluation experiments. The dataset was constructed from the wikidata archive \cite{vrandevcic2014wikidata}. The archive was filtered so as to select all records with a location label (approximately 8M records). See Tab. \ref{const_mult_dec:prob} for samples of wikidata records. \begin{table}[!ht] \caption{Wikidata geo data samples.} \label{const_mult_dec:prob} \centering \begin{tabular}{llll} \hline \hline Id&Latitude&Longitude&Text\\\hline 01& 53.96& -1.08& historic county of England\\ 02& 51.0& 10.0& country in Central Europe\\ 03& 35.88& 14.5& sovereign state in Southern Europe\\ 04& 57.30& -6.36& whisky distillery in Highland, Scotland, UK\\ 05& 47.39& 0.69& city and commune in Indre-et-Loire, Centre-Val de Loire, France\\ 06& -33.0& -71.0& sovereign state in South America\\ \hline \hline \end{tabular} \end{table} \subsection{Data Labeling} \label{subsection Data Labeling} Using the adaptive cells, we label each data sample with the id of the cell containing the sample location. In order to keep the hierarchical nature of the cells in the label, we use the following cell id encoding: the first digit represents the cube face with a digit between \textit{0} and \textit{5}; each subsequent digit represents, for the corresponding level, the node in the quad-tree, with a digit between \textit{0} and \textit{3}. See Tab. \ref{cell encoding} for a cell encoding example. Note that the label is a sequence of digits with a variable length between \textit{1} and \textit{1+max level}. \begin{table}[ht!] \caption{Cell encoding.}\label{cell encoding} \centering \begin{tabular}{llll} \hline \hline Cell description & Cell representation& \\\hline Face cell 2& 2\\ Subcell 2 of face cell 1& 12\\ Subcell 1 of subcell 3 of face 4& 431\\ \hline \hline \end{tabular} \end{table} \section{Model} \subsection{Train} The model presented in this paper is based on the T5-base (220M parameters) pre-trained sequence-to-sequence model \cite{raffel2020exploring}. The model was fine-tuned on the wikidata dataset with the text records as input and the location cell encoding as the output target sequence. We chose to utilize the standard cross-entropy loss in the fine-tuning process; evaluation of level-based weighting in the loss function is left for future work. We trained for 2M steps over 5 epochs with a batch size of 12. We used the AdamW optimizer with a learning rate of 1.5e-5 and linear decay. We train the model on 80\% of the wikidata dataset and use the other 20\% for in-domain evaluation. A diagram of our text-to-location framework with a few input/output examples is shown in \figref{fig:training diagram}. \begin{figure}[!ht] \centering \includegraphics[width=12cm]{t5_geolocation.png} \caption{A diagram of our text-to-location framework.} \label{fig:training diagram} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=12cm]{haifa.png} \caption{Prediction results for the text "Haifa". The predicted S2 cell in blue and its ancestor cells in white.} \label{fig:inference diagram} \end{figure}
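For concreteness, a minimal fine-tuning sketch in the spirit of the setup above, written against the Hugging Face \texttt{transformers} API; \texttt{load\_wikidata\_pairs} and \texttt{batches} are hypothetical helpers standing in for our data pipeline, and the hyperparameters are the values quoted in the text.
\begin{verbatim}
import torch
from torch.optim import AdamW
from transformers import (T5ForConditionalGeneration, T5TokenizerFast,
                          get_linear_schedule_with_warmup)

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = AdamW(model.parameters(), lr=1.5e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=2_000_000)

train_pairs = load_wikidata_pairs()  # hypothetical: [(text, cell label), ...]

model.train()
for epoch in range(5):
    for texts, labels in batches(train_pairs, batch_size=12):  # hypothetical
        enc = tokenizer(list(texts), return_tensors="pt",
                        padding=True, truncation=True)
        tgt = tokenizer(list(labels), return_tensors="pt", padding=True)
        target_ids = tgt.input_ids
        target_ids[target_ids == tokenizer.pad_token_id] = -100  # mask padding
        # standard cross-entropy over the target digit sequence
        loss = model(input_ids=enc.input_ids,
                     attention_mask=enc.attention_mask,
                     labels=target_ids).loss
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
\end{verbatim}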
\subsection{Evaluation} \subsubsection{Inference} The inference of our resulting model is as follows: given a sentence, we predict the output sequence. It consists of one digit between 0 and 5 representing the predicted cube face, followed by up to 9 digits between 0 and 3 representing the predicted S2 cell at each level. This sequence can easily be converted to an S2 cell, which represents the probability distribution over the earth for the input text location. We used a beam search of size 10. See \figref{fig:inference diagram} for an inference example. \subsubsection{Evaluation Metric} \label{subsubEvaluation Metric} The most straightforward classification metric is the ``accuracy measure'', where only predictions with a full match between the model's output and the true label count as a successful prediction. We call this metric ``flat accuracy'' (as opposed to ``hierarchical accuracy''). This metric fails to capture the inherent hierarchical nature of the label. Another measure is the ``mean distance error'', which averages the distances between the predicted location (the center of the predicted S2 cell in our case) and the true location of the target text. It too fails to capture the hierarchical nature of the label. For this reason we prefer the hierarchy-specific variations on the regular classification metrics described in \cite{Silla2010ASO}. These are variations of the well-known precision, recall and f-score metrics, specifically adapted to fit hierarchical classification: \newline hierarchical precision (hP): \begin{equation}\label{eq:1} hP=\frac {\sum_{i} | P_{i} \cap T_{i}|} {\sum_{i} | P_{i} |}, \end{equation} hierarchical recall (hR): \begin{equation}\label{eq:2} hR=\frac {\sum_{i} | P_{i} \cap T_{i}|} {\sum_{i} | T_{i} |}, \end{equation} and hierarchical f-measure (hF): \begin{equation}\label{eq:3} hF=\frac {2\cdot hP\cdot hR} {hP+hR}, \end{equation} where $P_i$ is the set consisting of the most specific class predicted for test example $i$ and all of its ancestor classes, and $T_i$ is the set consisting of the true most specific class of test example $i$ and all its ancestor classes. Each summation is computed over all of the test set examples. \newline \section{Results} In this section, the performance of the developed geocoding model is presented and analyzed. To demonstrate the performance of the model, let us first present several inference examples, given in Tab. \ref{example table 1}. \begin{table}[ht!] \caption{Inference examples - true and predicted labels.}\label{example table 1} \centering \begin{tabular}{llll} \hline \hline Text&Predicted Label&True Label\\\hline townland in Drummaan, County Clare, Ireland& 21002321& 21002321\\ lake in Eksjö Municipality, Sweden& 20302303& 20302303\\ ancient monument in Denmark (2976)& 20331122& 20331122\\ school in Cheshire West and Chester, UK& 210033112& 210033113\\ mountain in Iran& 1333313& 133302\\ railway stop in Harburg, Germany& 20331203& 20331022\\ \hline \hline \end{tabular} \end{table} Direct string comparison of a predicted and a true sequence can be misleading: two adjacent cells can have very different sequences, since they can originate from different parent cells. To visualize the results, a web demo application powered by the fine-tuned model was constructed. This demo application facilitates an intuitive comparison between the predicted and the true label. Inference examples using this application are given in \figref{fig:inference psacific}, \figref{fig:inference south}, and \figref{fig:inference paris}.
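Because each label is the digit string of a cell and its ancestor classes are exactly its prefixes, the hierarchical metrics of Eqs.~(\ref{eq:1})--(\ref{eq:3}) reduce to prefix-set comparisons. A minimal sketch:
\begin{verbatim}
def ancestors(label):
    """All prefixes of a cell label, from the face cell down to the cell."""
    return {label[:i] for i in range(1, len(label) + 1)}

def hierarchical_scores(predicted, true):
    """Hierarchical precision, recall and f-measure over a test set of
    (predicted, true) label pairs given as digit strings."""
    inter = pred_total = true_total = 0
    for p, t in zip(predicted, true):
        P, T = ancestors(p), ancestors(t)
        inter += len(P & T)
        pred_total += len(P)
        true_total += len(T)
    hP, hR = inter / pred_total, inter / true_total
    return hP, hR, 2 * hP * hR / (hP + hR)

# the school example from Tab. \ref{example table 1}: the labels differ
# only at the last level, so hP = hR = hF = 8/9 rather than the 0
# assigned by flat accuracy
print(hierarchical_scores(["210033112"], ["210033113"]))
\end{verbatim}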
\begin{figure}[ht!] \centering \includegraphics[width=12cm]{pacific.png} \caption{Prediction results for the text "Pacific ocean".} \label{fig:inference psacific} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=12cm]{South.png} \caption{Prediction results for the text "South". We can assume that a large number of South African samples biased the prediction.} \label{fig:inference south} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=12cm]{Paris.png} \caption{Prediction results for the text "Paris".} \label{fig:inference paris} \end{figure} Evaluation results on the wikidata test split using the metrics described in section \ref{subsubEvaluation Metric} are given in Tab. \ref{results table}. \begin{table}[ht!] \caption{Evaluation results.}\label{results table} \centering \begin{tabular}{llll} \hline \hline Evaluation metric & Results \\\hline Flat accuracy & 0.51547\\ Hierarchical accuracy & 0.791\\ \hline \hline \end{tabular} \end{table} \section{Conclusions} In this paper we formulated the problem of geocoding as a sequence-to-sequence problem and trained a transformer model for geocoding. To leverage the capabilities of a language model, a pre-trained T5 model was used for fine-tuning. For the sequence representation of the geolocation label, an adaptive cell partitioning was used. The free text and its corresponding geolocation were obtained from wikidata. The evaluation of the model on both hierarchical and non-hierarchical metrics demonstrated the validity of the proposed approach for geolocation prediction. The free text at inference time sometimes only hints at a location. It is our intuition that combining a huge decoder such as GPT3 \cite{brown2020language}, to produce more location-coherent text from an obscure location reference which is then fed to T5, could improve the model's performance. This evaluation is left for future work. In addition, we leave for future work the evaluation of our approach on a benchmark such as Wikipedia Toponym Retrieval \cite{gritta2018s}. \bibliographystyle{unsrt}
\section*{Introduction} This paper was motivated by the problem of studying the linear-topological structure of the space $SC_p(X)$ of scatteredly continuous real-valued functions on a topological space $X$, addressed in \cite{BK1,BK2}. A function $f:X\to Y$ between two topological spaces is called {\em scatteredly continuous} if for each non-empty subspace $A\subset X$ the restriction $f|A:A\to Y$ has a point of continuity. Scatteredly continuous functions were introduced in \cite{AB} (as almost continuous functions) and studied in detail in \cite{BM}, \cite{BB} and \cite{BK3}. If a topological space $Y$ is regular, then the scattered continuity of a function $f:X\to Y$ is equivalent to the weak discontinuity of $f$; see \cite{AB}, \cite[4.4]{BB}. We recall that a function $f:X\to Y$ is {\em weakly discontinuous} if each subspace $A\subset X$ contains an open dense subspace $U\subset A$ such that the restriction $f|U:U\to Y$ is continuous. For a topological space $X$ by $SC_p(X)\subset\IR^X$ we denote the linear space of all scatteredly continuous (equivalently, weakly discontinuous) functions on $X$, endowed with the topology of pointwise convergence. It is clear that the space $SC_p(X)$ contains the linear subspace $C_p(X)$ of all continuous real-valued functions on $X$. Topological properties of the function spaces $C_p(X)$ have been intensively studied by topologists; see \cite{Arh}. In particular, they studied the interplay between topological invariants of a topological space $X$ and its function space $C_p(X)$. Let us recall \cite{En,Juh} that for a topological space $X$ its \begin{itemize} \item {\em weight} $w(X)$ is the smallest cardinality of a base of the topology of $X$; \item {\em network weight} $nw(X)$ is the smallest cardinality of a network of the topology of $X$; \item {\em tightness} $t(X)$ is the smallest infinite cardinal $\kappa$ such that for each subset $A\subset X$ and a point $a\in \bar A$ in its closure there is a subset $B\subset A$ of cardinality $|B|\le\kappa$ such that $a\in \bar B$; \item {\em Lindel\"of number} $l(X)$ is the smallest infinite cardinal $\kappa$ such that each open cover of $X$ has a subcover of cardinality $\le\kappa$; \item {\em hereditary Lindel\"of number} $hl(X)=\sup\{l(Z):Z\subset X\}$; \item {\em density} $d(X)$ is the smallest cardinality of a dense subset of $X$; \item {\em hereditary density} $hd(X)=\sup\{d(Z):Z\subset X\}$; \item {\em spread} $s(X)=\sup\{|D|:D$ is a discrete subspace of $X\}$. \end{itemize} By \cite[\S I.1]{Arh}, for each Tychonoff space $X$ the function space $C_p(X)$ has weight $w(C_p(X))=|X|$ and network weight $nw(C_p(X))=nw(X)$. For the function space $SC_p(X)$ the situation is a bit different. \begin{proposition} For any $T_1$-space $X$ we have $$s\big(SC_p(X)\big)=nw\big(SC_p(X)\big)=w\big(SC_p(X)\big)=|X|.$$ \end{proposition} \begin{proof} It is clear that $s\big(SC_p(X)\big)\le nw\big(SC_p(X)\big)\le w\big(SC_p(X)\big)\le w(\IR^X)=|X|$. To see that $|X|\le s\big(SC_p(X)\big)$, observe that for each point $a\in X$ the characteristic function $$\delta_a:X\to\IR,\quad \delta_a(x)=\begin{cases}1,&\mbox{if $x=a$,}\\ 0,&\mbox{otherwise,} \end{cases} $$ of the singleton $\{a\}$ is scatteredly continuous, and the subspace $\mathcal D=\{\delta_a:a\in X\}\subset SC_p(X)$ has cardinality $|X|$ and is discrete in $SC_p(X)$. \end{proof} The deviation of a subset $\F\subset SC_p(X)$ from being a subset of $C_p(X)$ can be measured with the help of the cardinal number $\Dec(\F)$ called the {\em decomposition number} of $\F$.
It is defined as the smallest cardinality $|\mathcal{C}|$ of a cover $\mathcal C$ of $X$ such that for each $C\in\mathcal{C}$ and $f\in\F$ the restriction $f|C$ is continuous. If the function family $\F$ consists of a single function $f$, then the decomposition number $\Dec(\F)=\Dec(\{f\})$ coincides with the decomposition number $\dec(f)$ of the function $f$, studied in \cite{Sol}. It is clear that $\Dec\big(C_p(X)\big)=1$. \begin{proposition} For a $T_1$ topological space $X$ the decomposition number $\Dec(SC_p(X))$ is equal to the decomposition number $\Dec(\mathcal D)$ of the subset $\mathcal D=\{\delta_a:a\in X\}\subset SC_p(X)$ and is equal to the smallest cardinality $\ddec(X)$ of a cover of $X$ by discrete subspaces. \end{proposition} \begin{proof} It is clear that $\Dec(\mathcal D)\le\Dec(SC_p(X))\le \ddec(X)$. To prove that $\Dec(\mathcal D)\ge\ddec(X)$, take a cover $\mathcal{C}$ of $X$ of cardinality $|\mathcal{C}|=\Dec(\mathcal D)$ such that for each $C\in\mathcal{C}$ and each characteristic function $\delta_a\in\mathcal D$ the restriction $\delta_a|C$ is continuous. We claim that each space $C\in\mathcal{C}$ is discrete. Assuming conversely that $C$ contains a non-isolated point $c\in C$, observe that for the characteristic function $\delta_c$ of the singleton $\{c\}$ the restriction $\delta_c|C$ is not continuous. But this contradicts the choice of the cover $\mathcal{C}$. Therefore the cover $\mathcal{C}$ consists of discrete subspaces of $X$ and $\ddec(X)\le|\mathcal{C}|=\Dec(\mathcal D)$. \end{proof} In contrast to the whole function space $SC_p(X)$ which has large decomposition number $\Dec(SC_p(X))$, its $\sigma$-convex subsets have decomposition numbers bounded from above by the hereditary Lindel\"of number of $X$. Following \cite{Al} and \cite{TUZ}, we define a subset $C$ of a linear topological space $L$ to be {\em $\sigma$-convex} if for any sequence of points $(x_n)_{n\in\w}$ in $C$ and any sequence of positive real numbers $(t_n)_{n\in\w}$ with $\displaystyle\sum_{n=0}^\infty t_n=1$ the series $\displaystyle\sum_{n=0}^\infty t_nx_n$ converges to some point $c\in C$. It is easy to see that each compact convex subset $K\subset L$ is $\sigma$-convex. On the other hand, each $\sigma$-convex subset of a linear topological space $L$ is necessarily convex and bounded in $L$. The main result of this paper is the following: \begin{theorem}\label{main} For any topological space $X$ of countable tightness, each $\sigma$-convex subset $\F\subset SC_p(X)$ has decomposition number $\Dec(\F)\le hl(X)$. \end{theorem} This theorem will be proved in Section~\ref{s:pf-t2}. Now we derive some simple corollaries of this theorem. \begin{corollary}\label{c1} For any topological space $X$ of countable tightness, each $\sigma$-convex subset $\F\subset SC_p(X)$ has network weight $nw(\F)\le nw(X)$. Moreover, $$nw(X)=\max\{nw(\F):\mbox{$\F$ is a $\sigma$-convex subset of $SC_p(X)$}\}$$provided the space $X$ is Tychonoff. \end{corollary} \begin{proof} By Theorem~\ref{main}, each $\sigma$-convex subset $\F\subset SC_p(X)$ has decomposition number $\Dec(\F)\le hl(X)$. Consequently, we can find a disjoint cover $\mathcal{C}$ of $X$ of cardinality $|\mathcal{C}|=\Dec(\F)\le hl(X)$ such that for each $C\in\mathcal{C}$ and $f\in\F$ the restriction $f|C$ is continuous. Let $Z=\oplus \mathcal{C}=\{(x,C)\in X\times\mathcal{C}:x\in C\}\subset X\times \mathcal{C}$ be the topological sum of the family $\mathcal{C}$, and $\pi:Z\to X$, $\pi:(x,C)\mapsto x$, be the natural projection of $Z$ onto $X$. 
Since the cover $\mathcal{C}$ is disjoint, the map $\pi:Z\to X$ is bijective and hence induces a topological isomorphism $\pi^*:\IR^X\to \IR^Z$, $\pi^*:f\mapsto f\circ \pi$. The choice of the cover $\mathcal{C}$ guarantees that $\pi^*(\F)\subset C_p(Z)$. By (the proof of) Theorem~I.1.3 of \cite{Arh}, $nw(C_p(Z))\le nw(Z)$ and hence $$ \begin{aligned} nw(\F)&=nw(\pi^*(\F))\le nw(C_p(Z))\le nw(Z)\le\\ &\le |\mathcal{C}|\cdot nw(X)\le hl(X)\cdot nw(X)=nw(X). \end{aligned}$$ If the space $X$ is Tychonoff, then the ``closed unit ball'' $$\mathcal B=\{f\in C_p(X):\displaystyle\sup_{x\in X}|f(x)|\le 1\}\subset C_p(X)$$ is $\sigma$-convex and has network weight $nw(\mathcal B)=nw(X)$ according to Theorem I.1.3 of \cite{Arh}. So, $$nw(X)=\max\{nw(\F):\mbox{$\F$ is a $\sigma$-convex subset of $SC_p(X)$}\}.$$ \end{proof} In the same way we can derive some bounds on the weight of compact convex subsets in function spaces $SC_p(X)$. \begin{corollary}\label{c2} For any topological space $X$ of countable tightness, each compact convex subset $\K\subset SC_p(X)$ has weight $w(\K)\le \max\{hl(X),hd(X)\}$. Moreover, $$ \begin{aligned} hl(X)\le \sup\{w(\K):&\mbox{ $\K$ is a compact convex subset of } SC_p(X)\}\le\\ &\le\max\{hl(X),hd(X)\}. \end{aligned} $$ \end{corollary} \begin{proof} Given a compact convex subset $\K\subset SC_p(X)$, use Theorem~\ref{main} to find a disjoint cover $\mathcal{C}$ of $X$ of cardinality $|\mathcal{C}|=\Dec(\K)\le hl(X)$ such that for each $C\in\mathcal{C}$ and $f\in\K$ the restriction $f|C$ is continuous. Let $Z=\oplus \mathcal{C}$ and $\pi:\oplus \mathcal{C}\to X$ be the natural projection, which induces a linear topological isomorphism $\pi^*:\IR^X\to \IR^Z$, $\pi^*:f\mapsto f\circ\pi$, with $\pi^*(\K)\subset C_p(Z)$. It follows that the topological sum $Z=\oplus\mathcal{C}$ has density $d(Z)\le\displaystyle\sum_{C\in\mathcal{C}}d(C)\le|\mathcal{C}|\cdot hd(X)\le\max\{hl(X),hd(X)\}$, and so we can fix a dense subset $D\subset Z$ of cardinality $|D|=d(Z)\le\max\{hl(X),hd(X)\}$. Since the restriction operator $R:C_p(Z)\to C_p(D)$, $R:f\mapsto f|D$, is injective and continuous, we conclude that $$ \begin{aligned} w(\K)&=w(\pi^*(\K))=w(R\circ \pi^*(\K))\le w (\IR^D)=\\ &=|D|\cdot\aleph_0\le\max\{hl(X),hd(X)\}. \end{aligned} $$ Next, we show that $hl(X)\le\tau$ where $$\tau=\sup\{w(\K):\mbox{$\K$ is a compact convex subset of $SC_p(X)$}\}.$$ Assuming conversely that $hl(X)>\tau$ and using the equality $hl(X)=\sup\{|Z|:Z\subset X$ is scattered$\}$ established in \cite{Juh}, we can find a scattered subspace $Z\subset X$ of cardinality $|Z|>\tau$. It is easy to check that each function $f:X\to[0,1]$ with $f(X\setminus Z)\subset\{0\}$ is scatteredly continuous, which implies that the subset $$\K_Z=\big\{f\in SC_p(X):f(Z)\subset[0,1],\;f(X\setminus Z)\subset\{0\}\big\}$$ is compact, convex and homeomorphic to the Tychonoff cube $[0,1]^Z$. Then $\tau\ge w(\K_Z)=w([0,1]^Z)=|Z|>\tau$, and this is the desired contradiction that completes the proof.\end{proof} Corollaries~\ref{c1} and \ref{c2} imply: \begin{corollary}\label{c3} For a metrizable separable space $X$, each compact convex subspace $\K\subset SC_p(X)$ is metrizable. \end{corollary} Finally, let us observe that Corollary~\ref{c1} implies: \begin{corollary}\label{c4} If for Tychonoff spaces $X,Y$ with countable tightness the linear topological spaces $SC_p(X)$ and $SC_p(Y)$ are topologically isomorphic, then $nw(X)=nw(Y)$.
\end{corollary} \section{Weakly discontinuous families of functions} In this section we shall generalize the notions of scattered continuity and weak discontinuity to function families. A family of functions $\F\subset Y^X$ from a topological space $X$ to a topological space $Y$ is called \begin{itemize} \item {\em scatteredly continuous} if each non-empty subset $A\subset X$ contains a point $a\in A$ at which each function $f|A:A\to Y$, $f\in\F$ is continuous; \item {\em weakly discontinuous} if each subset $A\subset X$ contains an open dense subspace $U\subset A$ such that each function $f|U:U\to Y$, $f\in\F$ is continuous. \end{itemize} The following simple characterization can be derived from the corresponding definitions and Theorem~4.4 of \cite{BB} (saying that each scatteredly continuous function with values in a regular topological space is weakly discontinuous). \begin{proposition}\label{p3} A function family $\F\subset Y^X$ is scatteredly continuous (resp. weakly discontinuous) if and only if so is the function $\Delta\F:X\to Y^\F$, $\Delta\F:x\mapsto(f(x))_{f\in\F}$. Consequently, for a regular topological space $Y$, a function family $\F\subset Y^X$ is scatteredly continuous if and only if it is weakly discontinuous. \end{proposition} Propositions 4.7 and 4.8 \cite{BB} imply that each weakly discontinuous function $f:X\to Y$ has decomposition number $\dec(f)\le hl(X)$. This fact combined with Proposition~\ref{p3} yields: \begin{corollary}\label{c5} For any topological spaces $X,Y$, each weakly discontinuous function family $\F\subset Y^X$ has decomposition number $\Dec(\F)\le hl(X)$. \end{corollary} \section{Weak discontinuity of $\sigma$-convex sets in function spaces} For a topological space $X$ by $SC_p^*(X)$ we denote the space of all {\em bounded} scatteredly continuous real-valued functions on $X$. It is a subspace of the function space $SC_p(X)\subset \IR^X$. Each function $f\in SC_p^*(X)$ has finite norm $\|f\|=\displaystyle\sup_{x\in X}|f(x)|$. \begin{theorem}\label{wd} For any topological space $X$ with countable tightness, each $\sigma$-convex subset $\F\subset SC^*_p(X)$ is weakly discontinuous. \end{theorem} \begin{proof} By Proposition~\ref{p3}, the weak discontinuity of the function family $\F$ is equivalent to the scattered continuity of the function $\Delta\F:X\to\IR^\F$, $\Delta\F:x\mapsto(f(x))_{f\in\F}$. Since the space $X$ has countable tightness, the scattered continuity of $\Delta\F$ will follow from Proposition~2.3 of \cite{BB} as soon as we check that for each countable subset $Q=\{x_n\}_{n=1}^\infty\subset X$ the restriction $\Delta\F|Q:Q\to\IR^\F$ has a continuity point. Assuming the converse, for each point $x_n\in Q$ we can choose a function $f_n\in \F$ such that the restriction $f_n|Q$ is discontinuous at $x_n$. Observe that a function $f:Q\to\IR$ is discontinuous at a point $q\in Q$ if and only if it has strictly positive oscillation $$\osc_q(f)=\inf_{O_q}\sup\{|f(x)-f(y)|:x,y\in O_q\}$$at the point $q$. In this definition the infimum is taken over all neighborhoods $O_q$ of $q$ in $Q$. We shall inductively construct a sequence $(t_n)_{n=1}^\infty$ of positive real numbers such that for every $n\in\IN$ the following conditions are satisfied: \begin{enumerate} \item[1)] $t_1\le \frac12$, $t_{n+1}\le \frac12t_n$, and $t_{n+1}\cdot\|f_{n+1}\|\le \frac12t_n\cdot\|f_n\|$, \item[2)] the function $s_n=\displaystyle\sum_{k=1}^nt_kf_k$ restricted to $Q$ is discontinuous at $x_n$, \item[3)] $t_{n+1}\cdot\|f_{n+1}\|\le\frac18\osc_{x_n}(s_n|Q)$. 
\end{enumerate} We start the inductive construction by letting $t_1=1/2$. Then the function $s_1|Q=t_1\cdot f_1|Q$ is discontinuous at $x_1$ by the choice of the function $f_1$. Now assume that for some $n\in\IN$ positive numbers $t_1,\dots,t_n$ have been chosen so that the function $s_n=\displaystyle\sum_{k=1}^nt_kf_k$ restricted to $Q$ is discontinuous at $x_n$. Choose any positive number $\tilde t_{n+1}$ such that $$\tilde t_{n+1}\le \frac12t_n,\;\;\tilde t_{n+1}\cdot\|f_{n+1}\|\le\tfrac 12t_n\cdot\|f_{n}\|\mbox{ \ and \ } \tilde t_{n+1}\cdot\|f_{n+1}\|\le\tfrac18\osc_{x_n}(s_n|Q),$$ and consider the function $\tilde s_{n+1}=s_n+\tilde t_{n+1}f_{n+1}$. If the restriction of this function to $Q$ is discontinuous at the point $x_{n+1}$, then put $t_{n+1}=\tilde t_{n+1}$ and finish the inductive step. If $\tilde s_{n+1}|Q$ is continuous at $x_{n+1}$, then put $t_{n+1}=\frac12\tilde t_{n+1}$ and observe that the restriction of the function $$s_{n+1}=\displaystyle\sum_{k=1}^{n+1} t_kf_k=s_n+\tfrac12\tilde t_{n+1}f_{n+1}= \tilde s_{n+1}-\tfrac12\tilde t_{n+1}f_{n+1}$$ to $Q$ is discontinuous at $x_{n+1}$. This completes the inductive construction. \smallskip Condition (1) guarantees that $\displaystyle\sum_{n=1}^\infty t_n\le 1$ and hence the number $t_0=1-\displaystyle\sum_{n=1}^\infty t_n$ is non-negative. Now take any function $f_0\in\F$ and consider the function $$s=\displaystyle\sum_{n=0}^\infty t_nf_n$$ which is well-defined and belongs to $\F$ by the $\sigma$-convexity of $\F$. The functions $f_0,s\in\F\subset SC_p(X)$ are weakly discontinuous and hence for some open dense subset $U\subset Q$ the restrictions $s|U$ and $f_0|U$ are continuous. Pick any point $x_n\in U$. Observe that $$s=t_0f_0+s_n+\displaystyle\sum_{k=n+1}^\infty t_{k}f_k$$and hence $$s_n=s-t_0f_0-\displaystyle\sum_{k=n+1}^\infty t_kf_k=s-t_0f_0-u_n,$$ where $u_n=\displaystyle\sum_{k=n+1}^\infty t_kf_k$. Conditions (1) and (3) of the inductive construction guarantee that the function $u_n$ has norm $$\|u_n\|\le\displaystyle\sum_{k=n+1}^\infty t_k\|f_k\|\le 2 t_{n+1}\|f_{n+1}\|\le \frac14\osc_{x_n}(s_n|Q).$$ Since $s_n=s-t_0f_0-u_n$, the triangle inequality implies that $$0<\osc_{x_n}(s_n|Q)\le \osc_{x_n}(s|Q)+\osc_{x_n}(t_0f_0|Q)+ \osc_{x_n}(u_n)\le$$ $$\le 0+0+2\|u_n\|\le\frac12\osc_{x_n}(s_n|Q).$$ This contradiction shows that the restriction $\Delta\F|Q$ has a point of continuity, so the family $\F$ is weakly discontinuous. \end{proof} \section{Proof of Theorem~\ref{main}}\label{s:pf-t2} Let $X$ be a topological space with countable tightness and $\F$ be a $\sigma$-convex subset in the function space $SC_p(X)$. The $\sigma$-convexity of $\F$ implies that for each point $x\in X$ the subset $\{f(x):f\in\F\}\subset\IR$ is bounded (in the opposite case we could find sequences $(f_n)_{n\in\w}\in\F^\w$ and $(t_n)_{n\in\w}\in[0,1]^\w$ with $\displaystyle\sum_{n=0}^\infty t_n=1$ such that the series $\displaystyle\sum_{n=1}^\infty t_nf_n(x)$ is divergent). Then $X=\displaystyle\bigcup_{n=1}^\infty X_n$ where $X_n=\{x\in X:n\le \displaystyle\sup_{f\in\F}|f(x)|<n+1\}$ for $n\in\w$. It follows that for every $n\in\w$ the family $\F|X_n=\{f|X_n:f\in \F\}$ is a $\sigma$-convex subset of the function space $SC_p^*(X_n)$. By Theorem~\ref{wd}, the function family $\F|X_n$ is weakly discontinuous and by Corollary~\ref{c5}, $\Dec(\F|X_n)\le hl(X_n)$. Then $\Dec(\F)\le\displaystyle\sum_{n=0}^\infty \Dec(\F|X_n)\le\displaystyle\sum_{n=0}^\infty hl(X_n)\le hl(X)$.
\section{Some Open Problems} The presence of the condition of countable tightness in Theorem~\ref{main} and its corollaries suggests the following open problem. \begin{problem} Is it true that $w(\K)\le nw(X)$ for each topological space $X$ and each compact convex subset $\K\subset SC_p(X)$? \end{problem} By Theorem~\ref{wd}, for each topological space $X$ of countable tightness, each compact convex subset $\K\subset SC_p^*(X)$ is weakly discontinuous. \begin{problem} For which topological spaces $X$ is each compact convex subset $\K\subset SC_p(X)$ weakly discontinuous? \end{problem} According to Corollary~\ref{c3}, each compact convex subset $\K\subset SC_p(\w^\w)$ is metrizable. \begin{problem} Is a compact subset $\K\subset SC_p(\w^\w)$ metrizable if $\K$ is homeomorphic to a compact convex subset of $\IR^{\mathfrak c}$? \end{problem} Let us recall that a topological space $K$ is {\em Rosenthal compact} if $K$ is homeomorphic to a compact subspace of the space $\mathcal B_1(X)\subset\IR^X$ of functions of the first Baire class on a Polish space $X$. In this definition the space $X$ can be assumed to be equal to the space $\w^\w$ of irrationals. \begin{problem}\label{pr4} Is each Rosenthal compact space homeomorphic to a compact subset of the function space $SC_p(\w^\w)$? \end{problem} This problem has an affirmative solution in the realm of zero-dimensional separable Rosenthal compacta. \begin{theorem} Each zero-dimensional separable Rosenthal compact space $K$ is homeomorphic to a compact subset of the function space $SC_p(\w^\w)$. \end{theorem} \begin{proof} Let $D\subset K$ be a countable dense subset in $K$. Let $A=C_D(K,2)$ be the space of continuous functions $f:K\to 2=\{0,1\}$ endowed with the smallest topology making the restriction operator $R:C_D(K,2)\to 2^D$, $R:f\mapsto f|D$, continuous. By the characterization of separable Rosenthal compacta \cite{God}, the space $A$ is analytic, i.e., $A$ is the image of the Polish space $X=\w^\w$ under a continuous map $\pi:X\to A$. Now consider the map $\delta:K\to 2^A$, $\delta:x\mapsto (f(x))_{f\in A}$. This map is continuous and injective by the zero-dimensionality of $K$. The map $\pi:X\to A$ induces a topological embedding $\pi^*:2^A\to 2^{X}$, $\pi^*:f\mapsto f\circ\pi$. Then $\pi^*\circ\delta:K\to 2^{X}$ is a topological embedding. We claim that $\pi^*\circ\delta(K)\subset SC_p(X)\cap 2^X$. Given a point $x\in K$, we need to check that the function $\pi^*\circ \delta(x)\in 2^X$ is scatteredly continuous. It will be convenient to denote the function $\delta(x)\in 2^A$ by $\delta_x$. This function assigns to each $f\in A=C_D(K,2)$ the number $\delta_x(f)=f(x)\in 2$. By \cite{Ros,BFT}, the Rosenthal compact space $K$ is Fr\'echet-Urysohn, so there is a sequence $(x_n)_{n\in\w}\in D^\w$ with $\displaystyle\lim_{n\to\infty}x_n=x$. Then the function $\delta_x:A\to 2$, $\delta_x:f\mapsto f(x)$, is the pointwise limit of the continuous functions $\delta_{x_n}$, which implies that $\delta_x$ is a function of the first Baire class on $A$ and $\delta_x\circ \pi:X\to 2$ is a function of the first Baire class on the Polish space $X$. Since this function has discrete range, it is scatteredly continuous by Theorem 8.1 of \cite{BB}. Consequently, $\pi^*\circ\delta(x)\in SC_p(X)$ and $K$ is homeomorphic to the compact subset $\pi^*\circ\delta(K)\subset SC_p(X)$. \end{proof} A particularly interesting instance of Problem~\ref{pr4} concerns non-metrizable convex Rosenthal compacta. One of the simplest spaces of this sort is the Helly space.
We recall that the {\em Helly space} is the subspace of $\mathcal B_1(I)$ consisting of all non-decreasing functions $f:I\to I$ on the unit interval $I=[0,1]$. \begin{problem} Is the Helly space homeomorphic to a compact subset of the function space $SC_p(\w^\w)$? \end{problem}
\section{Introduction} The vortex state is one of the important features for applications of superconductors. Vortex states depend on the external magnetic field and the external current. It is known that the magnetic field from a ferromagnet affects the vortex state of a superconductor \cite{F/S_Review}. For example, a ferromagnet/superconductor hybrid structure enhances superconductivity, especially its critical magnetic field and its critical current \cite{F/S_dot,F/S_bilayer}. One magnetic material that has attracted attention recently is the chiral helimagnet (CHM). The CHM has an interesting magnetic structure: it consists of spins that form a helical rotation along one direction. The spin structure of the CHM comes from two interactions between nearest-neighbor spins: a ferromagnetic exchange interaction and the Dzyaloshinsky-Moriya (DM) interaction \cite{Dzyaloshinsky}. Two nearest-neighbor spins tend to be parallel due to the former interaction but tend to be perpendicular to each other due to the latter. Due to the competition between the two interactions, nearest-neighbor spins slightly deviate from each other, which leads to the formation of a helically rotated arrangement. When a weak magnetic field is applied, the magnetic structure forms a soliton lattice. These magnetic structures have been observed experimentally \cite{Togawa_CSL}. In a previous study, we found that the chiral helimagnet affects the vortex configuration in a superconductor \cite{ISS2014}: vortices form a periodically modulated triangular lattice under a large applied magnetic field. In this paper, we study the effects of a chiral helimagnet on the vortex configuration in a superconductor in more detail. We compare vortex configurations with and without the magnetic field from the CHM under an applied homogeneous magnetic field. In order to investigate the vortex configurations, we solve the Ginzburg-Landau equations with the finite element method. \begin{figure}[t] \begin{center} \includegraphics[scale=0.4]{Fig1.eps} \caption{The chiral helimagnet/superconductor bilayer system.} \label{Fig1} \end{center} \end{figure} \section{Method} We consider the chiral helimagnet/superconductor bilayer system shown in Fig.\ref{Fig1}. We assume that the effect of the chiral helimagnet on the superconductor is given by an external magnetic field $H_{\rm CHM}$, while effects of the superconductor on the chiral helimagnet are neglected. In this study, the superconducting layer is treated as a two-dimensional system and only the perpendicular component of the magnetic field $H_{\rm CHM}$ is taken into account. In order to investigate vortex configurations in this bilayer system, we solve the Ginzburg-Landau equations, \begin{equation} \alpha \psi + \beta |\psi|^2 \psi + \frac{1}{2m^\ast} \left( \frac{\hbar}{i}\nabla - \frac{e^\ast}{c}\mbox{\boldmath $A$} \right)^2\psi = 0, \label{gl-1} \end{equation} \begin{eqnarray} && {\rm curl}\left( {\rm curl}\mbox{\boldmath $A$} - \mbox{\boldmath $H$}_{\rm ext} \right) = \frac{4\pi}{c}\mbox{\boldmath $J$} \nonumber \\ && = \frac{4\pi}{c}\left\{ \frac{e^\ast \hbar}{2m^\ast i} \left( \psi^\ast \nabla \psi - \psi \nabla \psi^\ast \right) \right. \nonumber \\ && \left.
- \frac{e^{\ast2}}{m^\ast c} \psi^\ast \psi \mbox{\boldmath $A$} \right\}, \label{gl-2} \end{eqnarray} where $\alpha=\alpha_0(T-T_c)$, $T$ is the temperature, $T_c$ is the critical temperature, $\alpha_0 (>0)$ and $\beta (>0)$ are coefficients, $\psi$ is the superconducting order parameter, $m^\ast$ is the effective mass, $e^\ast$ is the effective charge, $\mbox{\boldmath $A$}$ is the magnetic vector potential, $\mbox{\boldmath $H$}_{\rm ext}$ is the external magnetic field, and $\mbox{\boldmath $J$}$ is the supercurrent density. The Maxwell equation is included in the second equation. The magnetic field from the chiral helimagnet, $H_{\rm CHM}$, is included in the external magnetic field $H_{\rm ext}$. $H_{\rm CHM}$ is obtained from a Hamiltonian for the chiral helimagnet \cite{kishine}. This Hamiltonian consists of a ferromagnetic exchange interaction, the Dzyaloshinsky-Moriya interaction, and the Zeeman energy; \begin{eqnarray} \mathcal{H} &=& -J \sum_n \mbox{\boldmath $S$}_n \cdot \mbox{\boldmath $S$}_{n+1} + \mbox{\boldmath $D$} \cdot \sum_n \mbox{\boldmath $S$}_n \times \mbox{\boldmath $S$}_{n+1} \nonumber \\ & & + 2\mu_BH_z \sum_n \mbox{\boldmath $S$}_n^z, \label{hamiltonian} \end{eqnarray} where $\mbox{\boldmath $S$}$ is a spin in the chiral helimagnet, $J$ is the exchange coefficient, and $\mbox{\boldmath $D$}$ is the DM vector. From this Hamiltonian, we obtain the perpendicular component of the magnetic field $(\mbox{\boldmath $H$}_{\rm ext})_z$: \begin{equation} \left(\mbox{\boldmath $H$}_{{\rm ext}}\right)_z (x) = H_0 \cos{\theta} + H_{{\rm appl}}. \label{external_field} \end{equation} The first term is the magnetic field from the chiral helimagnet $H_{\rm CHM}$, where \begin{equation} \theta = 2 {\rm sin}^{-1} \left[{\rm sn} \left( \frac{\sqrt{H^\ast}}{k}x | k \right) \right] + \pi, \end{equation} and $H_0$ is the strength of the magnetic field from the CHM. The second term is the applied magnetic field $H_{\rm appl}$. $H^\ast$ is a normalized magnetic field, \begin{equation} H^\ast = \frac{2\mu_B H_{\rm appl}}{a^2 S^2 \sqrt{J^2+D^2}}, \label{norm_H} \end{equation} where $a$ is the lattice constant. $k$ $(0 \leq k \leq 1)$ is the modulus of Jacobi's elliptic function ${\rm sn}(x|k)$ and is determined by \begin{equation} \frac{\pi \phi}{4\sqrt{H^\ast}} = \frac{E(k)}{k}, \label{k} \end{equation} where $\phi = \tan^{-1}{\left(D/J \right)}$ and $E(k)$ is the complete elliptic integral of the second kind. The relation between $k$ and $H^\ast$ is shown in Fig.\ref{Fig2}. \begin{figure} \begin{center} \includegraphics[scale=0.4]{Fig2.eps} \caption{The relation between the modulus of Jacobi's elliptic function $k$ and the applied magnetic field $H_{\rm appl}$ for $D/J = 0.16$.} \label{Fig2} \end{center} \end{figure} The period of the helical rotation, $L'$, is given by \begin{equation} L' = \frac{2kK(k)}{\sqrt{H^\ast}}, \label{period} \end{equation} where $K(k)$ is the complete elliptic integral of the first kind. The relation between $L'$ and $H^\ast$ is shown in Fig.\ref{Fig3}. \begin{figure} \begin{center} \includegraphics[scale=0.33]{Fig3.eps} \caption{The relation between the helical period and the applied magnetic field for $D/J = 0.16$.} \label{Fig3} \end{center} \end{figure} We obtain stable states from the Ginzburg-Landau equations, which are solved by the finite element method \cite{d-dot}.
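For illustration, the modulus $k$ and the field profile $H_{\rm CHM}(x)$ can be evaluated numerically as in the following sketch; note that SciPy parametrizes the elliptic integrals and Jacobi functions by the parameter $m=k^2$ rather than by the modulus $k$, and the value of $H^\ast$ below is only a placeholder.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipe, ellipj

phi = np.arctan(0.16)   # phi = arctan(D/J) with D/J = 0.16
H_star = 1.0e-3         # normalized applied field (placeholder value)

# Solve E(k)/k = pi*phi/(4*sqrt(H*)) for the modulus k in (0, 1);
# SciPy's ellipe takes the parameter m = k**2.
rhs = np.pi * phi / (4.0 * np.sqrt(H_star))
k = brentq(lambda kk: ellipe(kk**2) / kk - rhs, 1e-9, 1.0 - 1e-12)

def H_chm(x, H0=1.0):
    """H_CHM(x) = H0*cos(theta), theta = 2*arcsin(sn(sqrt(H*)x/k|k)) + pi."""
    sn, cn, dn, ph = ellipj(np.sqrt(H_star) / k * x, k**2)
    return H0 * np.cos(2.0 * np.arcsin(sn) + np.pi)
\end{verbatim}
One can check that the resulting profile repeats with the period $L'$ of Eq.~(\ref{period}).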
\section{Result} We show vortex configurations in a two-dimensional superconductor system under $H_{\rm CHM}$ and $H_{\rm appl}$. We take the Ginzburg-Landau parameter $\kappa = \lambda_0/\xi_0=10$ ($\lambda_0$ and $\xi_0$ are the penetration depth and the coherence length at $T=0$, respectively), the temperature $T=0.3T_c$, and the ratio between the two interactions $D/J = 0.16$, which is taken from the experimental data for Cr$_{1/3}$NbS$_2$ \cite{D/J}. The system size is $5.0L'\xi_0 \times 40\xi_0$. We take the boundary conditions: (a) $\mbox{\boldmath $A$} \cdot \mbox{\boldmath $n$} = 0$, where $\mbox{\boldmath $n$}$ is a normal vector to the surface; (b) the edges of the system are free. \begin{figure} \begin{center} \includegraphics[scale=0.5]{Fig4.eps} \caption{Distributions of (a) the order parameter, (b) the phase, and (c) the magnetic field, for $H_0/(\Phi_0/\xi_0^2)=0.000$ and $H_{{\rm appl}}/(\Phi_0/\xi_0^2)=0.0050$.} \label{Fig4} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{Fig5.eps} \caption{Distributions of (a) the order parameter, (b) the phase, and (c) the magnetic field, for $H_0/(\Phi_0/\xi_0^2)=0.0025$ and $H_{{\rm appl}}/(\Phi_0/\xi_0^2)=0.0050$.} \label{Fig5} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{Fig6.eps} \caption{Distributions of (a) the order parameter, (b) the phase, and (c) the magnetic field, for $H_0/(\Phi_0/\xi_0^2)=0.0050$ and $H_{{\rm appl}}/(\Phi_0/\xi_0^2)=0.0050$.} \label{Fig6} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{Fig7.eps} \caption{Distributions of (a) the order parameter, (b) the phase, and (c) the magnetic field, for $H_0/(\Phi_0/\xi_0^2)=0.0150$ and $H_{{\rm appl}}/(\Phi_0/\xi_0^2)=0.0050$.} \label{Fig7} \end{center} \end{figure} We show vortex configurations under the applied magnetic field in Figs.\ref{Fig4}-\ref{Fig7} for $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.0050$, where $\Phi_0$ is the flux quantum. Figures \ref{Fig4}(a)-(c) show the distributions of the order parameter, the phase, and the magnetic field for $H_0=0.0$ and $H_{\rm appl}/(\Phi_0/\xi_0^2) =0.0050$. In this case, the Abrikosov lattice forms. When the magnetic field from the chiral helimagnet $H_{\rm CHM}$ is present, the vortex configurations change as shown in Figs.\ref{Fig5}-\ref{Fig7}, where $H_0/(\Phi_0/\xi_0^2)$ is $0.0025$ (Fig.\ref{Fig5}), $0.0050$ (Fig.\ref{Fig6}), and $0.0150$ (Fig.\ref{Fig7}), and $H_{\rm appl}/(\Phi_0/\xi_0^2)$ is fixed to $0.0050$. Under $H_{\rm CHM}$, the triangular lattice is modulated. This modulation comes from the magnetic field of the chiral helimagnet: the CHM has a helical magnetic structure, so the $z$-component of the magnetic field $(\mbox{\boldmath $H$}_{\rm CHM})_z$ oscillates spatially. For $H_0/(\Phi_0/\xi_0^2) = 0.0025$ (Fig.\ref{Fig5}), $H_{\rm CHM}/(\Phi_0/\xi_0^2)$ changes from $-0.0025$ to $0.0025$, and $H_{\rm ext}/(\Phi_0/\xi_0^2) =H_{\rm CHM}/(\Phi_0/\xi_0^2) + H_{\rm appl}/(\Phi_0/\xi_0^2)$ changes from $0.0025$ to $0.0075$ for $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.0050$. Vortices tend to appear in regions of large magnetic field and to avoid regions of small magnetic field, due to the interaction between a vortex and the magnetic field. This interaction energy $E_{VF}$ is given by \begin{equation} E_{VF} = -\frac{1}{4\pi} \mbox{\boldmath $\Phi$}_0 \cdot \mbox{\boldmath $H$}_{\rm ext}. \label{vh_interaction} \end{equation} Therefore, a periodically modulated triangular lattice is formed.
Here, we discuss the dynamics of vortices under the applied magnetic field and the magnetic field from the chiral helimagnet. When a current flows along the $y$-direction, vortices move along the $x$-direction. However, under $H_{\rm CHM}$, it is difficult for vortices to move through the regions where the magnetic field is low or negative. Therefore, the movement of vortices is restricted, and a pinning effect due to the chiral helimagnet is expected. This leads to an increase of the critical current. \section{Summary} We have investigated the effect of the chiral helimagnet on the vortex configuration. Under the applied magnetic field and the magnetic field from the CHM, vortices form a periodically modulated triangular lattice. It is expected that this vortex configuration leads to a pinning effect and an increase of the critical current. \section*{Acknowledgements} This work was supported by JSPS KAKENHI Grant Number 26400367. The authors thank V.V. Moshchalkov, Y. Kato, T. Nojima, T. Ishida, S. Okuma, S. Mori, Y. Ishii, Y. Higashi, N. Fujita, M. Umeda, M. Kashiwagi for useful discussions.
\section{Introduction}\label{sec:int} Mathematical models of scientific systems necessarily include simplifications about the actual system they aim to represent. In some cases, these simplifications do not preclude the use of the model to understand, investigate, and make decisions and predictions about the system. The quintessential example of this comes from the domain of classical mechanics: Newtonian mechanics ignores quantum and relativistic effects but, over a wide domain of masses and energies, provides a completely adequate model to describe the motion of macroscopic objects. Outside this domain, however, quantum or relativistic effects are no longer negligible, and Newtonian mechanics fails. Just as classical mechanics is insufficient to describe a quantum system, the simplifications of a modern mathematical model may yield a discrepancy between the model and the system at hand too great to be ignored. This discrepancy is revealed during \emph{model validation}, a process by which we check that the mathematical model is a reliable representation of reality. Without accounting for the discrepancy, one cannot trust the model output, much less use it to make predictions or decisions. In this case, there are two immediate options: (1) Improve the model directly, i.e., from first principles or by including additional information; and (2) Represent the model discrepancy itself. While option 1 is usually desirable, it may not be feasible due to computational constraints, or because we in fact lack the knowledge to directly improve the model. Then we are left with option 2---represent the model discrepancy. A common approach to account for model discrepancy is through a \emph{response discrepancy function} \cite{kennedy2001bayesian}, also called a bias function. A response discrepancy function corrects model output (or response) to data. Typically, an additive function on the model output is calibrated to data, either point-wise or with a parametric form. An advantage of this approach is that it can be implemented even if the model is a black box, that is, one only needs access to model output, not the model itself. There are also disadvantages. In essence, a response discrepancy function builds a better interpolation to a single dataset, over the range of usable data. Thus, this approach provides no basis for extrapolation, to, for example, make a prediction about the probability of an epidemic next year. Furthermore, the action of this bias function is not interpretable, as it lies outside the model equations. Instead, in this paper, we show how to modify equations directly to account for the model error with an \emph{embedded discrepancy operator}. The advantages of this approach are threefold: \begin{enumerate} \item \textbf{Interpretability:} As the embedded operator appears within the model equations, and acts on state variables, the action of this operator is interpretable. \item \textbf{(Domain-)Consistency:} Information or constraints about the system can be incorporated into the discrepancy operator. \item \textbf{Robustness:} Discrepancy parameters can, and should, be calibrated over all available data. This can include data from multiple scenarios or initial conditions. \end{enumerate} These three properties---interpretability, consistency, and robustness---are designed to allow for decisions or extrapolative predictions. The inclusion of the embedded discrepancy operator into the original, or reduced model, yields an \emph{enriched model}. 
In essence, the enriched model takes advantage of both mechanistic and statistical modeling: it retains the reduced mechanistic model, and incorporates a general, statistically calibrated discrepancy model. Of course, this intrusive approach depends strongly on the context. Here, we investigate the value of an embedded discrepancy operator in the context of epidemiological modeling. Mathematical modeling of disease spread and outbreaks has a long and rich history; see \cite{martcheva2015introduction, frauenthal2012mathematical,pfeiffer2008spatial, nelson2014infectious}, to name just a few. One of the most common classes of these models consists of coupled ordinary differential equations (ODEs), whose state variables include populations of the host (here, humans) and the disease carrier, or vector. These populations are further specified as either \textbf{S}usceptible, \textbf{E}xposed, \textbf{I}nfected, or \textbf{R}ecovered, leading to thus-named SEIR models.\footnote{Commonly, the model name will specify which sub-populations are included for both species. For example, an SEIR-SEI model includes host sub-populations of the susceptible, exposed, infected, and recovered, and vector sub-populations of the susceptible, exposed, and infected.} These models are relatively simple to implement and understand. Model parameters allow for the specification of transmission rates, incubation times, etc. In particular, we investigate the model discrepancy of a well-studied SEIR-SEI model of the Zika outbreak in Brazil in 2016 \cite{dantas2018calibration}. In previous work, after calibration of the model parameters, the reduced model captured the major tendencies of the outbreak. This was a major improvement compared to the reduced model with parameter values as suggested by the current literature, which bore almost no usable resemblance to the real epidemic data. However, the calibrated model was still insufficient to precisely capture the dynamical behavior of the Zika outbreak. The current work extends previous work on embedded model discrepancy, used in the contexts of combustion \cite{morrison2018representing} and ecological models \cite{morrison2019embedded}, to the domain of epidemiology. The discrepancy model is embedded within the coupled model differential equations, and the introduced discrepancy parameters are calibrated to the available data. The enriched model is shown to greatly outperform the original model. To differentiate the current article from \cite{morrison2019embedded}, note that that study was primarily a numerical study over a constrained set of scenarios. The interaction matrices (determining reduced and true model coefficients) followed a number of assumptions, such as negative-definiteness, yielding highly well-behaved models. In addition, the actual discrepancy between each reduced model and the corresponding data-generating model was known exactly; some of that information was used to further constrain the discrepancy model parameters. Thus, the previous paper provided no guarantee that this method would work in a highly applied, real-world model scenario without those strong assumptions. Although the current modeling scheme describes only a single outbreak\footnote{Note that no outbreak of Zika was reported in the years following 2016, and thus there is no useful data.} and thus is not suitable for describing multiple incidences of the disease, this type of model is useful for guiding decision and policy makers.
For example, \cite{marnore2016} describes common questions faced by decision makers, such as how many people in total will be infected, or even how to slow or prevent the outbreak from occurring. The objective of the present article is to reproduce with reasonable precision the data of a real epidemic. An appropriate enrichment and calibration of the model can then yield useful predictions about the dynamical system. A final complication of this field is that new disease cases are often not reported, causing the outbreak numbers to appear artificially low. A study from 2018 estimates that as much as 90\% of the cases are not reported \cite{bastos2018estimating}. However, as we will see shortly, the issue of faulty data is insufficient to account for the discrepancy between the original model and observations. At the same time, the issue of under-reported cases does obviously play an important role during the model validation process. We try to disentangle the two problems---observational error and model error---by first considering only model error, and later allowing for both model error and significant under-reporting. We consider how the enriched model performs in different possible under-reporting scenarios, such as 10\% or 50\%. The rest of the paper is organized as follows. Section \ref{sec:zika} describes the specific model of the 2016 Brazilian Zika outbreak, and reconstructs previous results as a reference. Section~\ref{sec:edo} presents the formulation and calibration of the embedded discrepancy operator, along with corresponding numerical results. Section~\ref{sec:rep} explores the issue of under-reporting and how well the enriched model might perform in some sample under-reporting scenarios. A brief concluding discussion is given in Section~\ref{sec:con}. \section{Zika disease modeling}\label{sec:zika} As mentioned in \S~\ref{sec:int}, a typical approach to modeling the spread of infectious disease is with a set of coupled ordinary differential equations. Here we consider the well-known $SEIR-SEI$ model, which describes the coupled growth rates of the species of interest, namely susceptible, exposed, infected, and recovered humans, as well as susceptible, exposed, and infected vectors. In the case of Zika (also Dengue and Yellow fever) in Brazil, this vector is the \textit{Aedes aegypti} mosquito. In this section and the next, we assume that the data represents the actual truth. That is, the modeling objective is to achieve a model consistent with the given data. \subsection{Model specification} We follow the SEIR-SEI model discussed by Dantas, Tosin, and Cunha \cite{dantas2018calibration}, which includes the species $S_h, E_h, I_h, R_h, S_v, E_v, I_v$. Subscripts $h$ and $v$ indicate human and vector, respectively. The model also includes a state variable $C(t)$, which counts cumulative new cases over time. The eight coupled equations are: \begin{subequations} \begin{align} \frac{dS_h}{dt} &= -\beta_h S_h I_v /N_v\label{eq:z1}\\ \frac{dE_h}{dt} &= \beta_h S_h I_v /N_v - \alpha_h E_h\\ \frac{dI_h}{dt} &= \alpha_h E_h - \gamma I_h\\ \frac{dR_h}{dt} &= \gamma I_h\\ \frac{dS_v}{dt} &= \delta N_v - \beta_v S_v I_h/N_h - \delta S_v\\ \frac{dE_v}{dt} &= \beta_v S_v I_h/N_h - (\alpha_v + \delta) E_v\\ \frac{dI_v}{dt} &= \alpha_v E_v - \delta I_v\\ \frac{dC}{dt} &= \alpha_h E_h\label{eq:z8}, \end{align}\label{eq:z} \end{subequations} where $N_h$ represents Brazil's human population and $N_v$ represents the vector population.
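As a concrete reference, a minimal sketch integrating Eqs.~(\ref{eq:z1})--(\ref{eq:z8}) with SciPy is given below. The rate values are the nominal ones quoted in the next subsection, and the initial state follows Eqs.~(\ref{eq:ic1})--(\ref{eq:ic8}); the population sizes $N_h$ and $N_v$ (and the vector-population normalization) are placeholders, not the values used in \cite{dantas2018calibration}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Nh, Nv = 206e6, 1.0  # placeholder host/vector population sizes

def zika_rhs(t, x, alpha_h, alpha_v, gamma, delta, beta_h, beta_v):
    """Right-hand side of the SEIR-SEI model, Eqs. (eq:z1)-(eq:z8)."""
    Sh, Eh, Ih, Rh, Sv, Ev, Iv, C = x
    dSh = -beta_h * Sh * Iv / Nv
    dEh = beta_h * Sh * Iv / Nv - alpha_h * Eh
    dIh = alpha_h * Eh - gamma * Ih
    dRh = gamma * Ih
    dSv = delta * Nv - beta_v * Sv * Ih / Nh - delta * Sv
    dEv = beta_v * Sv * Ih / Nh - (alpha_v + delta) * Ev
    dIv = alpha_v * Ev - delta * Iv
    dC = alpha_h * Eh  # cumulative new cases
    return [dSh, dEh, dIh, dRh, dSv, dEv, dIv, dC]

# nominal rates (inverse periods, in 1/days) and initial state
theta = (1/5.9, 1/9.1, 1/7.9, 1/11.0, 1/11.3, 1/8.6)
C0, Rh0, Iv0 = 8201.0, 29639.0, 2.2e-4
x0 = [Nh - 2*C0 - Rh0, C0, C0, Rh0, Nv - 2*Iv0, Iv0, Iv0, C0]

sol = solve_ivp(zika_rhs, (0.0, 364.0), x0, args=theta,
                t_eval=np.arange(0.0, 365.0, 7.0))  # weekly output
weekly_cases = sol.y[-1]  # model counterpart of the reported cumulative cases
\end{verbatim}
With these placeholder inputs, \texttt{weekly\_cases} plays the role of the model output compared against the reported cumulative case counts below.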
Nominal values of the interaction rates were determined by a careful literature study. These rates (in days) are: \begin{subequations} \begin{align} \text{Extrinsic incubation period:} \quad \frac{1}{\alpha_v} &= 9.1\\ \text{Intrinsic incubation period:} \quad \frac{1}{\alpha_h} &= 5.9\\ \text{Human infectious period:} \quad \frac{1}{\gamma} &= 7.9\\ \text{Vector lifespan:} \quad \frac{1}{\delta} &= 11\\ \text{Mosquito to human infection time:} \quad \frac{1}{\beta_h} &= 11.3\\ \text{Human to mosquito infection time:} \quad \frac{1}{\beta_v} &= 8.6. \end{align} \end{subequations} Let us collect these model parameters into the vector $\theta$, and let $\theta_n$ refer to the nominal values given above. To fully specify this model, it remains to provide initial conditions. These are:\footnote{Although it might appear above as though some initial conditions are defined in terms of undefined quantities, this is just an ordering issue. The initial conditions are properly specified in the following order: $C$, $R_h$, $I_h$, $E_h$, $S_h$, $I_v$, $E_v$, $S_v$.} \begin{subequations} \begin{align} S_h(0) &= N_h - E_h(0) - I_h(0) - R_h(0) \label{eq:ic1}\\ E_h(0)&= I_h(0)\\ I_h(0)&= C(0)\\ R_h(0)&= 29,639\\ S_v(0)&= N_v - E_v(0) - I_v(0)\\ E_v(0)&= I_v(0)\\ I_v(0)&= 2.2 \times 10^{-4}\\ C(0) &= 8,201\label{eq:ic8}. \end{align} \end{subequations} Finally, let us call the above model $\mathcal{Z}$ and the state vector $x$, where $x$ is ordered in the same way as equations~\ref{eq:z1}-\ref{eq:z8} ($x_1 = S_h$, $x_2= E_h$, and so on). Then we may refer to the above model as: \begin{equation} \frac{dx}{dt} = \mathcal{Z}(x; \theta). \end{equation} We may also refer to this model as the original model, or \emph{reduced} model. \subsection{Previous results compared to data}\label{ssec:prev} The work in \cite{dantas2018calibration} presents a detailed approach to calibrate this model to data. The data is made available by the Brazilian Ministry of Health \cite{SVS2017} and is included as supplementary material in \cite{dantas2018calibration}. Each data point $d_i, i=1,\dots,52$, gives the recorded cumulative number of Zika cases at epidemiological week $i$ of the year 2016. In this section, we re-plot the results from that paper to serve as an immediate reference and comparison. First, Figure~\ref{fig:nom} compares the model output to data, using the above model with nominal parameters. \begin{figure} \includegraphics[width=.4\textwidth]{figs/zika-nom} \caption{\label{fig:nom} Outbreak data and reduced model response using nominal parameter values.} \end{figure} The model with nominal parameter values, $\mathcal{Z}(x; \theta = \theta_n)$, severely underestimates the outbreak. Note that under-reporting cannot explain the observed discrepancy: higher reporting rates would only increase this discrepancy. \begin{figure} \includegraphics[width=.4\textwidth]{figs/zika-cal} \caption{\label{fig:cal} Outbreak data and reduced model response using TRR calibrated parameter values.} \end{figure} Clearly the reduced model, given $\theta_n$ parameter values, is a poor representation of reality. After observing such a discrepancy, the authors of \cite{dantas2018calibration} performed a sophisticated calibration of the model parameters $\theta$, using the Trust-Region-Reflective (TRR) method \cite{coleman1996interior, conn2000trust}. Following that method, and using the public data to calibrate, two slightly different results are obtained by imposing different sets of constraints on the possible parameter values.
The model outputs after both calibrations are shown in Figure~\ref{fig:cal}. Although the model output is now much closer to the data, a detectable inconsistency still persists. To be specific, note that after about week 30, the difference is tens of thousands of new cases of humans infected by Zika. From a modeling perspective, the salient point is that the model is unable to capture the dynamical behavior of the outbreak, even after calibration. Assuming (as we are for now) that the given data is correct, this suggests that the problem lies with the model itself. Indeed, there are several possible sources of model error that may impact not only this specific Zika model but many other epidemiological models. First, these SEIR models are not built from first principles, but rather from assumptions about interactive behavior, empirical information, and domain scientists' intuition and experience. Second, these models provide a continuous, deterministic description of discrete interactions, which naturally involve some stochasticity \cite{forgoston2009accurate}. With large enough populations, though, this should not be a problem. Third, diseases do not spread in a closed system of host and vector. Rather, the spread of a disease involves other species such as livestock and non-human primates \cite{childs2019mosquito}. Fourth, other modes of transmission are possible, such as sexual interaction \cite{coelho2016higher, petersen2016update} and blood transfusion \cite{motta2016evidence}. Finally, there could certainly be additional time- or spatially-dependent effects, such as migrations and local dynamics \cite{chang2020cross,gong2018epidemic}, collective behavior and time-delayed synchronization dynamics \cite{sun2017behavioral}. Some modelers assume power-law dynamics of the networks \cite{silva2018activation}, while others use fractional derivatives to describe relevant dynamics \cite{ghanbari2019analysis}. In contrast, this model assumes time-independent parameters and only tracks populations over time, not space. In summary, the spread of a contagious disease is an incredibly complex problem, and it remains unclear what is critically missing from the model or how to best improve it directly from epidemiological information. Here, then, we can turn to the field of model discrepancy to help. \section{Embedded discrepancy operator}\label{sec:edo} Before describing the embedded discrepancy operator, we illustrate the overall relationship between the different models considered in this paper. A schematic diagram is shown in Figure~\ref{fig:flow}. \begin{figure} \centering \input{setupFlow} \caption{A schematic diagram of the different models considered in this paper and their relationships.\label{fig:flow}} \end{figure} As seen in the previous section, after calibrating the model parameters to data, there is still a significant discrepancy between the model output and the data. That is, the answer to Q2 (and Q1) in Figure~\ref{fig:flow} is ``No,'' and so we move to state (M3): we model this discrepancy with the goal of reaching states (U1) and ultimately (E3). \subsection{Proposed approach: Embedded discrepancy operator} Previous work has shown that missing dynamics on the right-hand side (RHS) of differential equations can be approximated with ``extra'' information about the existing state variables \cite{morrison2019exact, givon2004extracting, hernandez2019algebraic}, such as memory or derivative information.
Exploiting this, we pose the following enriched model: \begin{equation} \frac{dx}{dt} = \mathcal{Z}(x; \theta) + \Delta\left(x, \frac{dx}{dt}; \phi\right) \end{equation} where \begin{subequations} \begin{align} \Delta_{Sh} &= \kappa_1 S_h + \lambda_1 \frac{dS_h}{dt}\\ \Delta_{Eh} &= \kappa_2 E_h + \lambda_2 \frac{dE_h}{dt}\\ \Delta_{Ih} &= \kappa_3 I_h + \lambda_3 \frac{dI_h}{dt}\\ \Delta_{Rh} &= \kappa_4 R_h + \lambda_4 \frac{dR_h}{dt}\\ \Delta_{Sv} &= \kappa_5 S_v + \lambda_5 \frac{dS_v}{dt}\\ \Delta_{Ev} &= \kappa_6 E_v + \lambda_6 \frac{dE_v}{dt}\\ \Delta_{Iv} &= \kappa_7 I_v + \lambda_7 \frac{dI_v}{dt}\\ \Delta_{C} &= 0, \end{align} \end{subequations} with $\kappa = (\kappa_1, \dots, \kappa_7)$ and $\lambda = (\lambda_1, \dots, \lambda_7)$. That is, the differential equation for $x_i, i = 1,\dots,7$ in the reduced model is modified by two additional terms, one linear in $x_i$ and the other linear in $dx_i/dt$. The discrepancy parameters are collected into the vector $\phi$: \begin{equation}\phi = (\kappa, \lambda) = (\kappa_1, \dots, \kappa_7, \lambda_1, \dots, \lambda_7).\end{equation} Note, the RHS for $dC(t)/dt$ is not modified because this function simply counts the exposed cases as given by the model. A change here would be analogous to modifying the model output itself, and would be neither interpretable nor reliable for any type of decision or prediction. As mentioned in Section~\ref{sec:int}, this type of discrepancy model can be constrained to available information about the system. For example, the discrepancy operator for a combustion reaction in\,\,\cite{morrison2018representing} is constrained to satisfy conservation of atoms and conservation of energy. In this scenario we do not have such strict constraints; see\,\,\cite{morrison2019embedded} for constrained operators in similar Lotka-Volterra models. Altogether, the enriched model is \begin{subequations} \begin{align} \frac{dS_h}{dt} &= -\beta_h S_h I_v /N_v + \Delta_{Sh}\\ \frac{dE_h}{dt} &= \beta_h S_h I_v /N_v - \alpha_h E_h + \Delta_{Eh}\\ \frac{dI_h}{dt} &= \alpha_h E_h - \gamma I_h + \Delta_{Ih} \\ \frac{dR_h}{dt} &= \gamma I_h + \Delta_{Rh}\\ \frac{dS_v}{dt} &= \delta N_v - \beta_v S_v I_h/N_h - \delta S_v + \Delta_{Sv}\\ \frac{dE_v}{dt} &= \beta_v S_v I_h/N_h - (\alpha_v + \delta) E_v + \Delta_{Ev}\\ \frac{dI_v}{dt} &= \alpha_v E_v - \delta I_v + \Delta_{Iv}\\ \frac{dC}{dt} &= \alpha_h E_h. \end{align} \end{subequations} Denote the enriched model as $\mathcal{E}(x, dx/dt; \theta, \phi)$, i.e., \begin{align} \frac{dx}{dt} &= \mathcal{E}\left(x, \frac{dx}{dt}; \theta, \phi\right)\\ & = \mathcal{Z}\left(x; \theta\right) + \Delta\left(x, \frac{dx}{dt}; \phi\right). \end{align} We set $\theta = \theta_n$ and use the same initial conditions as in equations~\ref{eq:ic1}-\ref{eq:ic8}. The final step to fully specify the enriched model is to calibrate the discrepancy parameters $\phi$; this step is explained in the following subsection. \subsection{Calibration details} In contrast with the calibration process of\,\,\cite{dantas2018calibration} described in \S~\ref{ssec:prev}, here we use a Bayesian framework~\cite{howson2006scientific, jaynes2003probability} to calibrate the discrepancy parameters $\phi$ given the data $d$. This allows for the representation of uncertainty about these parameters, and also of how this uncertainty propagates to model output. Recall that the observations are cumulative cases at each epidemiological week, $d = \{d_i\}, i=1,\dots,52$. For each $d_i$, let $y_i$ be the corresponding model output.
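Concretely, evaluating $y_i$ for a candidate $\phi$ requires integrating the enriched system. Because $\Delta$ is linear in $dx/dt$, the implicit equation can be rearranged algebraically as $dx_i/dt = (\mathcal{Z}_i + \kappa_i x_i)/(1-\lambda_i)$ for $i=1,\dots,7$; the sketch below (reusing \texttt{zika\_rhs} from earlier, with the solver and weekly time grid again our own assumptions) uses this rearrangement.
\begin{verbatim}
def enriched_rhs(t, x, theta, phi, N_h, N_v):
    """Enriched RHS: dx/dt = Z(x; theta) + Delta(x, dx/dt; phi).
    Delta is linear in dx/dt, so the derivative is solved for explicitly:
    dx_i/dt = (Z_i + kappa_i * x_i) / (1 - lambda_i), i = 1,...,7."""
    kappa, lam = np.asarray(phi[:7]), np.asarray(phi[7:])
    z = zika_rhs(t, x, theta, N_h, N_v)
    dx = np.empty(8)
    dx[:7] = (z[:7] + kappa * np.asarray(x[:7])) / (1.0 - lam)
    dx[7] = z[7]  # dC/dt = alpha_h * E_h is not modified
    return dx

def forward_map(phi, theta, x0, t_weeks, N_h, N_v):
    """The map phi -> y = (y_1, ..., y_52) used in the likelihood."""
    sol = solve_ivp(enriched_rhs, (0.0, t_weeks[-1]), x0, t_eval=t_weeks,
                    args=(theta, phi, N_h, N_v), method="LSODA")
    return sol.y[7]
\end{verbatim}
Note that the prior bounds on $\lambda_i$ introduced below keep $1-\lambda_i$ strictly positive, so this rearrangement is well defined.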
We assume that the measurements are independent and that the measurement error is additive and Gaussian: \begin{equation} d_i = y_i + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma_\epsilon^2)\label{eq:eps} \end{equation} with standard deviation $\sigma_\epsilon = 5\times10^3$. This standard deviation value seems reasonable as the uncertainty in reported values is high, and because the observations are on the order of tens to hundreds of thousands. In the Bayesian framework, the conditional probability density of $\phi$ given the data $d$, $p_{\text{po}}(\phi|d)$, is called the \emph{posterior} and given as: \begin{equation} p_{\text{po}}(\phi | d) = \frac{p_{\text{li}}(d | \phi) p_{\text{pr}}(\phi)}{p_{\text{ev}}(d)}.\label{eq:bayes} \end{equation} We specify each term on the RHS above: \begin{itemize} \item \emph{Prior:} The prior density $p_{\text{pr}}(\phi)$ collects the prior knowledge we have about the parameters. Specifically, these parameters are assumed independent and uniform in the prior, where each $p_{\text{pr}}(\phi_i) = \mathcal{U}(-0.3, 0.15)$, and so \begin{equation} p_{\text{pr}}(\phi) = \prod_{i=1}^{14}p_{\text{pr}}(\phi_i) .\end{equation} \item \emph{Likelihood:} The likelihood $p_{\text{li}}(d | \phi)$ tells us how likely it is to observe $d$, given a particular value of $\phi$. The measurement error model in Eq.~\eqref{eq:eps} yields the likelihood function \begin{equation} p_{\text{li}}(d|\phi) = \frac{1}{\sqrt{(2\pi)^{52} |\Sigma|}} \exp{\left( -\frac{1}{2}(d-y)^T \Sigma^{-1} (d-y) \right)},\end{equation} where $\Sigma = \sigma_\epsilon^2 I$. \item \emph{Evidence:} The evidence $p_{\text{ev}}(d) = \int p_{\text{li}}(d|\phi)p_{\text{pr}}(\phi) d\phi$ gives the probability of observing the data $d$. This is typically difficult to compute, but note that it is not a function of $\phi$. With a Markov chain Monte Carlo (McMC) approach, the posterior is found by computing ratios of the RHS in equation~\ref{eq:bayes} (for different values of $\phi$), and so fortunately this term cancels. \end{itemize} Under this framework, the discrepancy parameters $\phi$ are calibrated using the \textsc{DRAM} method, developed by Haario et al.~\cite{haario2006dram} and implemented through the library \textsc{QUESO} \cite{prudencio2012parallel}. The complete code for this project is available here: \texttt{https://github.com/rebeccaem/zika} \cite{morrison2020zikacode}. Table~\ref{tab:phi} presents the posterior mean and standard deviation of each of the fourteen marginal posterior densities of the discrepancy parameters. \begin{table} \caption{\label{tab:phi}Information from the parameter posterior $p_{\text{po}}(\phi | d)$.} \begin{tabular}{|c|c|c|} \toprule Parameter & Posterior & Posterior\\ &mean &standard deviation\\ \midrule $\kappa_1$ & -0.04 & 0.005\\ $\kappa_2$& -0.26 & 0.02\\ $\kappa_3$& 0.10& 0.02\\ $\kappa_4$ & -0.04 & 0.13\\ $\kappa_5$& 0.07 & 0.01\\ $\kappa_6$& 0.11 & 0.02\\ $\kappa_7$& 0.10 & 0.01\\ $\lambda_1$& 0.00 & 0.09\\ $\lambda_2$& -0.15 & 0.08\\ $\lambda_3$& -0.18 & 0.08\\ $\lambda_4$& -0.15 & 0.09\\ $\lambda_5$& -0.02 & 0.10\\ $\lambda_6$& -0.15 & 0.10\\ $\lambda_7$& -0.06 & 0.09\\ \bottomrule \end{tabular} \end{table} \subsection{Numerical results}\label{sec:res} Figure~\ref{fig:enr} shows the enriched model response compared to the data.
Uncertainty in the discrepancy parameters $\phi$ is propagated through to model output: the thick center line shows the median response, the darker band shows the 50\% confidence interval, and the lighter band the 95\% confidence interval (CI). Importantly, all observations are in fact captured by the 95\% CI. \begin{figure} \includegraphics[width=.4\textwidth]{figs/zika-enr} \caption{\label{fig:enr} Outbreak data and enriched model response.} \end{figure} For comparison's sake, Figure~\ref{fig:all} presents at once all model responses considered in this paper, and Figure~\ref{fig:allzoom} shows the same, but zoomed into weeks 12-30. (For visualization purposes, only the median line is shown for the enriched model.) The enriched model is clearly an improvement. \begin{figure} \begin{subfigure}{.4\textwidth} \includegraphics[width=\textwidth]{figs/zika-all} \caption{\label{fig:all}} \end{subfigure} \begin{subfigure}{.4\textwidth} \includegraphics[width=\textwidth]{figs/zika-all-zoom} \caption{\label{fig:allzoom}} \end{subfigure} \caption{Outbreak data compared to all model responses. Figure (b) is zoomed into weeks 12-30.} \end{figure} \subsection{Interpretation}\label{ssec:interp} The embedded discrepancy operator can be interpreted from two different points of view. The first is a more mathematical lens, although not disconnected from the physics: we interpret the discrepancy operator as a linear feedback signal. The second, in contrast, relies on an epidemiological basis: here we interpret the corrections made by the discrepancy operator as effects due to causes of biological origin. This second point of view is especially interesting for the goal of elucidating potential deficiencies in the baseline model. As a wide range of issues must yet be explored to obtain a consistent epidemiological interpretation, this line will not be addressed in this manuscript, but will be the topic of future work. Instead, the first perspective is explored further below. In light of the theory of systems with linear feedback, the discrepancy operator is a linear combination of the system state and its first-order time derivative, thus defining a signal that feeds the original nonlinear system with information from the present state and its rate of change. Roughly speaking, the parameters of the enrichment can be seen as ``gains'' that adjust to drive the epidemic curve generated by the model towards the real observational curve. These parameters are identified via Bayesian inference, with prior distributions that admit negative and positive values: thus the gains can define both negative and positive feedbacks. The realizations of the enriched dynamical system may then admit a superposition between negative and positive feedbacks, generating a kind of competition between the model's stimuli signals. This competition stabilizes some coordinates of the state vector and destabilizes others, while the global effect materializes in the corrected (enriched) epidemic curve. To understand more deeply why this competition between corrective signals produces such an effective correction, consider the injection and removal of information (i.e., energy) in the system, as well as its flow between the different coordinates of the state (groups of human and mosquito populations).
To make an analogy with the dynamics of mechanical oscillators, the feedback effects proportional to the state derivative produce a kind of ``viscous force,'' which introduces (via positive feedback) or removes (via negative feedback) information into or from the epidemiological state variables. Furthermore, the terms proportional to the system state correspond to a kind of ``restoring force,'' which redistributes information among the different population groups. The intensity of this additional information flow between the different coordinates of the system state is controlled by the new time scales induced by the feedback signals, which are nonlinear functions of the gains $\kappa_i$ and $\lambda_i$, $i = 1,\dots, 7$. It should be noted that the approach presented in the paper is not control theory in the literal sense, since no information from the biological system is obtained in real-time, nor is an action signal sent to the real system to adjust its epidemic curve trajectory. Therefore, the observability and controllability issues related to the real system are not taken into account. The notion of duality between parameters and gains is explored above only to provide an initial, reasonable interpretation of how the discrepancy operator acts to correct the model's response, by promoting additional information flows between the different compartments of the populations. \section{Effects of under-reporting}\label{sec:rep} Now let us also consider the scenario that the data is in fact under-reported. First, suppose 10\% of cases are not reported, so that $d_i = 0.9\, d_i^*$, where $d_i^*$ represents the value of observations we would expect without under-reporting. (This is not claiming $d_i^*$ is the exact true value, as we still expect unbiased measurement error.) The discrepancy parameters are re-calibrated, and the corresponding model response is shown in Figure~\ref{fig:rep10p}. Again, all (modified) observations are captured by the enriched model response. \begin{figure} \includegraphics[width=.4\textwidth]{figs/zika-rep10p} \caption{\label{fig:rep10p} Modified outbreak data, assuming 10\% under-reporting, and enriched model response.} \end{figure} Finally, we suppose that only 50\% of cases are reported, so that $d_i = 0.5\, d_i^*$. These results are shown in Figure~\ref{fig:rep50p}. Even here, the enriched model adapts and captures the dynamical behavior of the outbreak in this highly under-reported scenario. \begin{figure} \includegraphics[width=.4\textwidth]{figs/zika-rep50p} \caption{\label{fig:rep50p} Modified outbreak data, assuming 50\% under-reporting, and enriched model response.} \end{figure} \section{Conclusion}\label{sec:con} This work presents an initial endeavor to represent the model discrepancy of an epidemiological system, namely, the 2016 Brazilian Zika outbreak. Preliminary results are promising---compared to the original model, the embedded discrepancy operator greatly improves the consistency between model output and available observations. The general applicability of this method to other epidemiological models is best understood in two parts. On one hand, the formulation of the enriched model equations is immediately applicable to another model, if that model is also composed of a set of ODEs. That is, nothing prevents a modeler from testing the proposed model enrichment framework in that case.
On the other hand, the particular details of the calibration, and whether or not this approach is in fact able to capture the discrepancy between the original model and the data, may depend on domain-specific information. Future studies will test this approach in other outbreak data sets. Many other open questions remain. In Section~\ref{ssec:interp}, we discuss two possible interpretations of the embedded discrepancy operator. First, the linear terms added into the differential equations resemble a type of linear feedback, with one term proportional to the state variable, and the other a differential ``control'' term. In this case, the discrepancy operator is not actually controlling the real system itself, but driving the model system to the target (epidemic data). Second, and perhaps more importantly, would be a physiological interpretation of the discrepancy terms. While beyond the scope of this paper, a deep exploration of interpretability, connections to linear feedback theory, and an explanation of these discrepancy terms in a physiological sense will be the subject of immediate future work. Related to the point above, we would also like to understand what the calibrated discrepancy operator implies about the missing dynamics of the reduced model. That is, can we use the learned discrepancy model to infer what the reduced model is most critically lacking? This question is currently under study, also in the context of ecological models (which have a similar structure, as sets of coupled ODEs). Doing so would allow these embedded operators to function as a type of modeling tool, as opposed to only a model correction. Finally, this study would perhaps be more convincing with more trustworthy data. How to achieve this, though, is just as complex a problem as the epidemiological system itself, as it involves accessibility to healthcare in remote regions, public awareness of mandatory reporting policies, and incentives and rewards for timely reporting of a communicable disease. \begin{acknowledgments} The authors acknowledge Prof. Davi Ant\^{o}nio dos Santos (ITA Brazil) for very helpful discussions, especially regarding the linear feedback aspect of this work. The second author acknowledges the financial support given by the Brazilian agencies Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior - Brasil (CAPES) - Finance Code 001, and the Carlos Chagas Filho Research Foundation of Rio de Janeiro State (FAPERJ) under grants 210.021/2018 and 211.037/2019. \end{acknowledgments} \section*{Data availability statement} The data are available in a database maintained by the Brazilian Ministry of Health \cite{SVS2017} and also available as supplementary material in\,\, \cite{dantas2018calibration}. All data (and code) needed to reproduce results in this paper are also included in the code base\,\, \cite{morrison2020zikacode} with \texttt{doi:10.5281/ZENODO.3666845}.
\section{Introduction} \label{sec:introduction} The apparent magnitude distribution of red clump (RC) stars toward Galactic bulge sightlines that are at least $\sim5$ degrees away from the plane is bimodal \citep{2010ApJ...721L..28N,2010ApJ...724.1491M}, a signature of an excess in the orbital distribution of bulge stars that would appear X-shaped if the Galactic bar were viewed side-on \citep{2012ApJ...756...22N,2012ApJ...757L...7L}. Given that the class of orbits that contributes to this morphology, predominantly trapped by the x1 tree of families, is sharply sensitive to the Galactic gravitational potential \citep{2002MNRAS.337..578P,2003MNRAS.341.1179A}, and that the kinematics of bulge stars have been shown to be correlated with metallicity \citep{2008A&A...486..177Z,2010A&A...519A..77B,2013MNRAS.432.2092N}, mapping how the strength and extent of the X-shape correlate with metallicity could constrain formation and evolution models of the Galaxy. For example, it has been suggested that: \begin{quotation} \noindent Stars supporting the X-shape would primarily be disc stars, the latest ones captured into resonance that were in the mid-plane prior to their capture into resonance. As they were all disc stars, they should have similar metallicity and that typical of the disc just exterior to resonance - \citet{2014MNRAS.437.1284Q}. \end{quotation} Such a prediction can only be compared to observations that include both metallicity and kinematic information, thus requiring an accounting of the systematics thereof. In that regard, we identify four claims linking the X-shaped morphology of high-latitude bulge stars to metallicity. \citet{2012ApJ...756...22N} used a combination of spectroscopic and photometric data along the bulge minor axis to argue that the X-shape is stronger for stars with [Fe/H]$\geq0$ than for stars with $-0.5 \leq$ [Fe/H] $< 0.0$, with the X-shaped morphology disappearing among stars with [Fe/H] $\lesssim -0.50$. \citet{2012A&A...546A..57U} independently argued, also by means of a combination of spectroscopic and photometric data, that stars with metallicity [M/H] $\leq -0.20$ do not show the split red clump, in contrast to stars with [M/H] $\geq -0.20$. This demarcation has also been recently argued in separate conference presentations by Manuela Zoccali\footnote{http://www.ctio.noao.edu/noao/conference/Presentations} and Alvaro Rojas-Arriagada\footnote{http://www.sexten-cfa.eu/images/stories/conferenze2014/bulge/talks/Formevogalaclu-program.pdf} to be confirmed by data from the Gaia-ESO survey \citep{2012Msngr.147...25G}. \citet{2013ApJ...776L..19D} have argued that bulge RR Lyrae stars (with a mean metallicity [Fe/H]$\approx-1.0$, see \citealt{2012ApJ...750..169P}) are distributed as a spheroid, and not as a bar, unlike the more metal-rich RC stars. The combination of these four works suggests an open-and-shut case: The X-shape of the Galactic bulge is most prominent among the most metal-rich stars, progressively becoming weaker with decreasing metallicity, disappearing entirely for stars with [Fe/H] $\lesssim -0.50$. In this paper, our aim is neither to confirm nor to refute this claim, but to argue for further diligence in treating systematics that emerge due to the convolution of stellar physics with Galactic dynamics, since it is this convolution that is ultimately observed.
We use a combination of stellar and dynamical models (discussed in Section \ref{sec:Models}) to show that even if the X-shape were uniformly prominent among stars of all metallicities, it would still appear more prominent with increasing metallicity due to a combination of up to three factors. These are the metallicity-dependence of the colour of the RC (Section \ref{sec:Systematic1}), the increase in the ratio of RC to red giant (RG) stars with increasing metallicity (Section \ref{sec:Systematic2}), and the effect of the red giant branch bump (RGBB) (Section \ref{sec:Systematic3}). \begin{table*} \caption{\large Predicted parameters for the combined red giant branch + red clump + asymptotic giant branch luminosity function calculated from BaSTI\textsuperscript{\ref{foot:1}} isochrones \citep{2004ApJ...612..168P,2007AJ....133..468C}, as a function of the metallicity [M/H], [$\alpha$/Fe], the age $t$ in Gyr, and the initial helium abundance $Y$, integrated in the luminosity range $-1.6 \leq M_{I} \leq 1.4$. In the top rows we list the model outputs for scaled-solar abundances and ages $t=12$ Gyr. In the middle rows we show the model outputs for $\alpha$-enhanced isochrones, which are marginally different than scaled-solar isochrones at fixed [M/H], age, and initial helium abundance. In the bottom rows we list the model outputs for a range of ages and helium abundances, due to the uncertainty in the age-helium-metallicity relation of bulge stars at high metallicity. \newline} \centering \begin{tabular}{ccccccccccccccc} \hline \hline [M/H] & [$\alpha$/Fe] & $Y$ & t/Gyr & $B$ & $EW_{RC}$ & $M_{I,RC}$ & $(V-I)_{RC}$ & $(V-K)_{RC}$ & $(M/M_{\odot})_{RC}$ & ${\log{g}}_{RC}$ & $f^{RC}_{RGBB}$ & ${\Delta}I^{RC}_{I_{RGBB}}$ & $f^{RC}_{AGBB}$ & ${\Delta}I^{RC}_{I_{AGBB}}$ \\ \hline \hline \hline $-$1.27 & 0.0 & 0.25 & 12 & 0.66 & 1.75 & $-$0.29 & 0.78 & 1.75 & 0.74 & 2.46 & 0.10 & $-$0.47 & 0.02 & $-$1.03 \\ \hline $-$0.66 & 0.0 & 0.25 & 12 & 0.67 & 1.98 & $-$0.31 & 0.90 & 2.05 & 0.78 & 2.37 & 0.18 & 0.17 & 0.04 & $-$1.04 \\ \hline $-$0.35 & 0.0 & 0.26 & 12 & 0.65 & 2.08 & $-$0.27 & 0.98 & 2.23 & 0.82 & 2.34 & 0.23 & 0.44 & 0.03 & $-$1.04 \\ \hline $-$0.25 & 0.0 & 0.26 & 12 & 0.65 & 2.12 & $-$0.24 & 0.99 & 2.29 & 0.84 & 2.35 & 0.25 & 0.51 & 0.04 & $-$1.04 \\ \hline $+$0.06 & 0.0 & 0.27 & 12 & 0.61 & 2.10 & $-$0.17 & 1.08 & 2.49 & 0.91 & 2.33 & 0.29 & 0.66 & 0.04 & $-$1.04 \\ \hline $+$0.25 & 0.0 & 0.29 & 12 & 0.62 & 2.26 & $-$0.13 & 1.15 & 2.61 & 0.94 & 2.32 & 0.30 & 0.78 & 0.05 & $-$1.02 \\ \hline $+$0.40 & 0.0 & 0.30 & 12 & 0.58 & 2.32 & $-$0.12 & 1.20 & 2.71 & 0.95 & 2.30 & 0.30 & 0.85 & 0.05 & $-$0.98 \\ \hline \hline $-$1.27 & +0.4 & 0.25 & 12 & 0.65 & 1.75 & $-$0.29 & 0.76 & 1.72 & 0.74 & 2.47 & 0.08 & $-$0.53 & 0.02 & $-$1.03 \\ \hline $-$0.66 & +0.4 & 0.25 & 12 & 0.66 & 2.02 & $-$0.33 & 0.88 & 2.00 & 0.77 & 2.37 & 0.16 & 0.11 & 0.03 & $-$1.02 \\ \hline $-$0.35 & +0.4 & 0.26 & 12 & 0.66 & 2.18 & $-$0.31 & 0.95 & 2.17 & 0.80 & 2.34 & 0.21 & 0.41 & 0.02 & $-$1.03 \\ \hline $+$0.06 & +0.4 & 0.27 & 12 & 0.60 & 2.21 & $-$0.25 & 1.07 & 2.31 & 0.87 & 2.30 & 0.26 & 0.66 & 0.02 & $-$1.02 \\ \hline \hline $+$0.26 & 0.0 & 0.35 & 11 & 0.60 & 2.86 & $-$0.31 & 1.08 & 2.43 & 0.72 & 2.20 & 0.20 & 0.80 & 0.05 & $-$1.12 \\ \hline $+$0.26 & 0.0 & 0.32 & 11 & 0.59 & 2.50 & $-$0.17 & 1.11 & 2.50 & 0.78 & 2.27 & 0.25 & 0.69 & 0.05 & $-$1.12\\ \hline $+$0.26 & 0.0 & 0.32 & 7 & 0.58 & 2.64 & $-$0.28 & 1.12 & 2.58 & 0.95 & 2.30 & 0.19 & 0.59 & 0.04 & $-$1.05 \\ \hline $+$0.25 & 0.0 & 0.29 & 7 & 0.59 & 2.32 & $-$0.21 & 
1.14 & 2.56 & 1.12 & 2.39 & 0.22 & 0.57 & 0.04 & $-$0.99 \\ \hline $+$0.25 & 0.0 & 0.29 & 4 & 0.61 & 2.33 & $-$0.27 & 1.12 & 2.53 & 1.32 & 2.46 & 0.15 & 0.44 & 0.06 & $-$0.97 \\ \hline \hline \end{tabular} \label{table:PredictedLuminosityParameters} \end{table*} \section{Models} \label{sec:Models} The stellar isochrones and luminosity functions used in this work are predominantly taken from the BaSTI stellar database\footnote{\label{foot:1}http://albione.oa-teramo.inaf.it}. The models assume a scaled-solar abundance mixture without overshooting, and include both the first \citep{2004ApJ...612..168P} and second \citep{2007AJ....133..468C} ascents of the RG branch, hereafter respectively referred to as the RGB and asymptotic giant branch (AGB). Additional models with enhanced helium-enrichment ($Y=0.32,0.35$) that assume otherwise identical physics to those downloaded from the BaSTI database have been computed specifically for this work. The predicted parameters of the combined RG, RC, and AGB luminosity function are listed in Table \ref{table:PredictedLuminosityParameters}. The N-body model used in this work was constructed by \citet{2003MNRAS.341.1179A} and used by \citet{2012ApJ...756...22N}, where the latter estimated a scale factor of 1.2 between model distance units and kpc as optimal for interpreting Galactic kinematics data. The model is initialised as an isolated, axisymmetric galaxy with a live disk and halo component, with the disk stars having an exponential distribution in the radial direction. The bar grows from the disk and part of it buckles to form an X-shaped structure (for a review of the process, see \citealt{2005MNRAS.358.1477A}). We evaluate the model in the same evolutionary state as \citet{2012ApJ...756...22N}. We assume a distance between the Sun and the Galactic centre of 8.13 kpc, and a viewing angle between the major axis of the Galactic bar and the line of sight between the Sun and the Galactic centre of $\alpha=29.4^{\circ}$ \citep{2013MNRAS.434..595C}. \section{First Systematic Bias: The Horizontal Branch Becomes Bluer with Decreasing Metallicity, Eventually Being Selected Against} \label{sec:Systematic1} The first systematic bias that we quantify arises from the fact that the RGB is redder than the RC at fixed metallicity and luminosity ($(V-I)_{RG} - (V-I)_{RC} \approx 0.15$ mag) and that both become bluer with decreasing metallicity, at a rate of $d(V-I)_{RC}/d\rm{[M/H]} \approx 0.25$ mag dex$^{-1}$ -- see Figure \ref{Fig:BastiIsochroneFigure} and Table \ref{table:PredictedLuminosityParameters}. This means that the ratio of RC stars to RG stars will be a metallicity-dependent function of the colour-cut used by any given survey, modifying the diagnostic power of a sample to resolve the distance distribution function. That is because unlike RC stars, RG stars have an intrinsic luminosity function dispersed over $\sim$4 magnitudes, and are thus much less suitable for distance determinations.
Reviewing the potential impact of this effect on current results: \begin{itemize} \item \citet{2012ApJ...756...22N} use a colour-cut of $(J-K)_{0} \geq 0.40$ (corresponding to $(V-I)_{0} \gtrsim$ 0.65), and as such should have neutrally-sampled the bulk ($\gtrsim 99$\%) of bulge RC and RG stars, given the assumption that their stellar evolution is adequately predicted by the models used in this work; \item \citet{2012A&A...546A..57U} use a selection that imposes an effective colour-cut at the RC of $0.60 \lesssim (J-K)_{0} \lesssim 0.70$, corresponding to $0.97 \lesssim (V-I)_{0} \lesssim 1.13$. This will have the effect of decreasing the ratio of RC to RG stars at the metal-poor end, and increasing it at the metal-rich end. \citet{2012A&A...546A..57U} indeed reports more prominent RCs for [M/H] $\geq -0.20$ than for [M/H] $< -0.20$. \end{itemize} We note that the model predictions stated in Table \ref{table:PredictedLuminosityParameters} may be overestimating the colour of metal-poor horizontal branch stars in the bulge. The Galactic bulge RR Lyrae (ab type) population has a mean metallicity of [Fe/H]$\approx -1.00$ \citep{2012ApJ...750..169P}, which indicates that metal-poor horizontal branch stars in the bulge may be bluer than predicted by as much as ${\delta}(V-I) \approx 0.35$ mag. Galactic bulge globular clusters are also known to have horizontal branches that are very blue for their metallicities \citep{2009A&A...507..405B}. \citet{1992AJ....104.1780L} argues that the colour of metal-poor horizontal branch stars indicates an old bulge, though we find that an unphysical age of $t \approx 17.0$ Gyr is needed to produce an RRab morphology for stars with [Fe/H]$=-1.0$ if one assumes standard stellar physics. A plausible explanation is that the mass-loss is greater than expected for low-metallicity RG stars in the bulge, leading to a bluer horizontal branch morphology. Alternatively, the helium-abundance of these stars might be higher than expected. Until this discrepancy has been explained, it will not be possible to quantify selection biases for low-metallicity ([Fe/H] $\lesssim -0.80$) bulge horizontal branch stars. \begin{figure} \begin{center} \includegraphics[totalheight=0.35\textheight]{BastiIsochroneFigure} \end{center} \Large \caption{\large BaSTI\textsuperscript{\ref{foot:1}} t$=12$ Gyr isochrones of metallicities [M/H]$=-1.27,-0.66,+0.06,+0.40$ \citep{2004ApJ...612..168P,2007AJ....133..468C} overplotted on a dereddened $(V-I,I)$ Galactic bulge colour-magnitude diagram toward the OGLE-III field BLG16 \citep{2011AcA....61...83S,2013ApJ...769...88N}, centred on $(l,b)=(0.00^{\circ},-5.80^{\circ})$. We assume a distance modulus of $\mu=14.55$ for the overplotting of the isochrones. The metallicity distribution function of any bulge sample as well as the ratio of red clump stars to red giant stars will clearly be a sensitive function of the colour selection. } \label{Fig:BastiIsochroneFigure} \end{figure} In addition, the referee directs us to a curious discrepancy shown in Figure 20 of \citet{2003A&A...399..931Z}, where synthetic modelling of the bulge luminosity function predicts a mean RC colour that is $\sim$0.1 mag bluer in $(J-K)$ than the RGB at the same apparent magnitude in $K$, whereas there is no such offset in the observed CMD. This discrepancy suggests that it will be difficult to model the selection effects. We agree that this is a cause for concern that would benefit from further investigation.
We argue that it is likely due to errors at higher metallicity ([Fe/H] $\gtrsim$ 0), where stellar mass-loss along the red giant branch, colour-temperature relations, and spectroscopic model atmospheres are each more theoretically uncertain, as well as the fact that the age and helium abundance of the bulge are empirically uncertain at higher metallicities. We think it is unlikely that this is an issue at lower metallicities. The same theoretical framework adopted in the present investigation appears fully appropriate to describe the RC morphology of stars in the solar neighbourhood as investigated with the Hipparcos satellite \citep{2004ApJ...612..168P} and the horizontal branch morphology of metal-poor Galactic globular clusters (see, for instance, \citealt{2013MNRAS.430..459D}, and references therein). Separately, Figure 20 of \citet{2003A&A...399..931Z} also shows an excess in observations relative to predictions in the number of horizontal branch stars bluer than the RC, in agreement with \citet{1992AJ....104.1780L}. \section{Second Systematic Bias: The Ratio of Red Clump to Red Giant Stars is an Increasing Function of Metallicity} \label{sec:Systematic2} Higher metallicity has the effect of decreasing the duration of the RGB, and of increasing the lifetime of the core helium-burning phase (see discussions in \citealt{1994A&A...285L...5R}, \citealt{2000ApJ...538..289Z}, and \citealt{2006essp.book.....S}). This means that a metal-rich sample will have a higher ratio of RC to RG+AGB stars, and thus a more easily discernible distance distribution function. We parameterise this effect by measuring the equivalent width of the RC in the luminosity function of the models, $EW_{RC}$, which is the ratio of the number of RC stars to the combined number density of RG and AGB stars at the luminosity of the RC. It is similar to the parameter $EW_{RGBB}$ previously measured for Galactic globular clusters by \citet{2013ApJ...766...77N}, though for bulge stars one cannot separate the AGB from the RGB, thus leading to a different normalisation -- which also affects determinations of the parameter $B$. We find that for a scaled-solar helium abundance and fixed age ($t=12$ Gyr), the equivalent width of the RC rises from $EW_{RC} = 1.75$ at [M/H]$= -$1.27 to $EW_{RC} = 2.08$ at [M/H]$= -0.35$, and finally to $EW_{RC} = 2.32$ at [M/H]$= +$0.40. Thus, the ratio of RC to RGB+AGB stars rises by $\sim$33\% as the metallicity increases from [M/H]$=-1.27$ to [M/H]$=+0.40$. In other words, in the limiting case of metallicity and kinematics being uncorrelated, a sample of bulge stars with [M/H]$=-1.27$ at the apparent luminosity of the RC would need to be $1.33^{2} = 1.76 {\times}$ larger than the corresponding sample of stars with [M/H]$=+0.40$ (since statistical significance typically scales as the square root of the number of data points), in order to measure features such as the bimodality in the distance distribution function with comparable statistical significance, all other factors being equal (which they are not; see Section \ref{sec:Systematic3}). We note that the analysis of \citet{2012ApJ...756...22N} should not be affected by this bias, as their population partition included 80 stars for [Fe/H]$\geq 0$, 240 stars for $-0.50 \leq$[Fe/H]$<0$, and 200 stars for [Fe/H]$< -0.50$ -- \citet{2012ApJ...756...22N} already have more stars in their metal-poor bins, due to where they set the bin centres and the fact that they studied sightlines relatively far from the plane, where the mean metallicity is lower.
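This arithmetic is simple to tabulate for the bins of Table~\ref{table:PredictedLuminosityParameters}; the short script below (our own illustration, using the $t=12$ Gyr scaled-solar $EW_{RC}$ values and [M/H]$=+0.40$ as the reference bin) reproduces the factor of 1.76 quoted above.
\begin{verbatim}
# EW_RC at t = 12 Gyr, scaled-solar, from Table 1
ew_rc = {-1.27: 1.75, -0.66: 1.98, -0.35: 2.08, +0.06: 2.10, +0.40: 2.32}

ref = ew_rc[+0.40]  # reference metallicity bin
for mh, ew in sorted(ew_rc.items()):
    # significance ~ sqrt(N) at fixed EW, so a factor (ref/ew) deficit in
    # equivalent width must be offset by (ref/ew)**2 more stars
    print(f"[M/H] = {mh:+.2f}: needs {(ref / ew) ** 2:.2f}x the sample size")
\end{verbatim}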
The bias estimated in this section is in fact a lower bound, as reports in the literature argue that metal-rich bulge stars may be, on average, younger \citep{2013A&A...549A.147B} or helium-enhanced \citep{2012ApJ...751L..39N,2013ApJ...766...77N}, or both younger and helium-enhanced \citep{2013MNRAS.428.2577B}, all of which would further increase $EW_{RC}$ at high metallicity. Predicted parameters for these scenarios are listed in the lower part of Table \ref{table:PredictedLuminosityParameters}. \section{Third Systematic Bias: The Prominence of the Red Giant Branch Bump is an Increasing Function of Metallicity} \label{sec:Systematic3} The RC is not the only departure from an exponential continuum in the luminosity function of RG stars. During the RGB, there is a relatively brief period where the nuclear efficiency of the hydrogen burning shell temporarily drops, leading to a corresponding decrease in the luminosity of the star, thus leading to an excess in the luminosity function called the ``red giant branch bump'' (RGBB, as before). The number counts and characteristic luminosity of the RGBB are a steeply sensitive function of the age, metallicity, and helium abundance of a stellar population \citep{1997MNRAS.285..593C,2010ApJ...712..527D,2013ApJ...766...77N}. We describe the metallicity-dependence of the predicted parameters for the RGBB in this section, and then we demonstrate that failure to account for this component of the luminosity function will bias determinations of the distance distribution function. Fixing the age to $t=12$ Gyr and the helium abundance to scaled-solar, the predicted magnitude of the RGBB relative to the RC, ${\Delta}I^{RC}_{I_{RGBB}}$, increases from ${\Delta}I^{RC}_{I_{RGBB}}=-0.47$ to ${\Delta}I^{RC}_{I_{RGBB}}=+0.85$ (i.e., the RGBB becomes fainter) as the metallicity increases from [M/H]$=-1.27$ to [M/H]$=+0.40$, an impressive shift of 1.32 mag in luminosity over 1.67 dex in metallicity. In contrast, the separation in brightness between the two RCs of the Galactic bulge is $\sim$0.45 mag \citep{2010ApJ...721L..28N,2010ApJ...724.1491M,2013ApJ...776...76P}, approximately corresponding to ${\Delta}I^{RC}_{I_{RGBB}}$ for stars with metallicity [M/H]$=-0.35$. In addition to the decreasing luminosity of the RGBB, the ratio of RGBB to RC stars, $f^{RC}_{RGBB}$, is predicted to increase from $f^{RC}_{RGBB}=0.10$ to $f^{RC}_{RGBB}=0.30$ over the same metallicity range. It is therefore straightforward to understand why ignoring the RGBB will lead one to infer distorted brightness differences between the two RCs, with the effect growing worse with increasing metallicity. The predicted parameters of the RGBB are listed in Table \ref{table:PredictedLuminosityParameters}, where we also list the corresponding parameters for the asymptotic giant branch bump (AGBB). In Figure \ref{Fig:NBodyLF}, we convolve the distance modulus distribution for $(l,b) = (0^{\circ},-10^{\circ})$ (a sightline used by both \citealt{2012ApJ...756...22N} and \citealt{2012A&A...546A..57U}) predicted by our N-body model (top panel) with intrinsic luminosity functions corresponding to four different metallicities ([M/H]$=-1.27,-0.66,+0.06,+0.40$, middle panels), and Gaussian noise of $\sigma_{I}=0.07$ mag to simulate the effects of photometric errors and differential extinction, to produce four predicted apparent luminosity functions (bottom panels). We fit the final luminosity functions using the same methodology as \citet{2013ApJ...776...76P}.
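Schematically, producing the bottom panels amounts to a discrete convolution of the distance-modulus distribution with the intrinsic luminosity function, followed by smoothing with the $\sigma_{I}=0.07$ mag Gaussian; a minimal \texttt{numpy} sketch (the function name and the assumption of a shared magnitude binning for all inputs are ours) is as follows.
\begin{verbatim}
import numpy as np

def apparent_lf(mu_dist, abs_lf, bin_width, sigma_I=0.07):
    """Apparent-magnitude distribution: (distance-modulus pdf) convolved
    with the absolute-magnitude LF, then smoothed by a Gaussian of width
    sigma_I to mimic photometric errors and differential extinction.
    All inputs are histograms sharing the same magnitude bin width."""
    lf = np.convolve(mu_dist, abs_lf, mode="full")
    # Gaussian kernel for the noise term, truncated at +/- 4 sigma
    half = int(np.ceil(4 * sigma_I / bin_width))
    grid = np.arange(-half, half + 1) * bin_width
    kern = np.exp(-0.5 * (grid / sigma_I) ** 2)
    kern /= kern.sum()
    return np.convolve(lf, kern, mode="same")
\end{verbatim}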
If we ignore the RGBB in our fits, fitting for a split red clump leads to ${\chi}^2$ reductions of \{14\%, 33\%, 61\%, 74\%\} in the four metallicity bins relative to the ${\chi}^2$ value obtained when fitting a single RC. In other words, there is the illusion that the distance modulus distribution is more cleanly bimodal among the metal-rich stars, when in fact the increased signal partly emerges from differences in stellar evolution in this construction. This is also rather obvious simply by inspecting the bottom four panels of Figure \ref{Fig:NBodyLF}. More simply, the RGBB of the nearer (brighter) component has a similar brightness to the RC of the further (fainter) component, which creates a misleading amplification of the signal-to-noise of the fainter peak. \subsection{The Effect of a Surface Gravity Cut on Red Giant Branch Bump Contamination} \label{sec:Systematic3logg} In order to have a purer sample of RC stars, \citet{2012ApJ...756...22N} limited their analysis to stars with $1.90 \leq \log{g} \leq 3.10$. We simulate the effect of this selection by constructing a luminosity function whereby for each star we ``measure'' $\log{g}$ with a Gaussian error of 0.3 dex, keeping only those stars in the interval $2.00 \leq \log{g} \leq 3.00$. We find that this will not impact the degree of RGBB contamination at high metallicities. For the $t=12$ Gyr, [M/H]$=+0.06$ isochrone, the value of $f^{RC}_{RGBB}$ increases from 29\% to 30\%, whereas for the $t=12$ Gyr, [M/H]$=+0.25$ isochrone $f^{RC}_{RGBB}$ decreases from 29\% to 27\% -- both small changes, reflective of the fact that $\log{g}_{RGBB}$ is typically $\sim 2.60$ at high metallicities, and thus within the measurement error of $\log{g}_{RC}$. The observational fact that this selection improved the clarity of the sample (see Figure 3 of \citealt{2012ApJ...756...22N}) is more likely because it decreased the rate of disk contamination. As disk stars are on average closer than the bulge, their contribution to the luminosity function at the apparent magnitude of the RC will be from stars dimmer than the RC, which are more numerous. \begin{figure*} \begin{center} \includegraphics[totalheight=0.65\textheight]{NbodyLF} \end{center} \Large \caption{\large TOP: Predicted distance distribution toward $(l,b)=(0.00^{\circ},-10.00^{\circ})$ from the N-body model \citep{2003MNRAS.341.1179A}, a sightline with an unambiguous bimodality in its distance distribution function. MIDDLE: BaSTI $t=12$ Gyr luminosity functions for the red giant branch as a function of [M/H] \citep{2004ApJ...612..168P,2007AJ....133..468C}. BOTTOM: Convolution of the distance distribution and absolute magnitude distribution function to produce apparent magnitude distribution functions for the four metallicities. Without accounting for the metallicity-dependence of the stellar luminosity function, the red clump will appear more split at higher metallicities due to the similar apparent magnitudes of the near red giant branch bump and the far red clump. } \label{Fig:NBodyLF} \end{figure*} \section{Discussion and Conclusion} \label{sec:Discussion} In this work, we have demonstrated that deriving accurate cartography of the X-shape and its possible metallicity-dependence necessitates a rigorous treatment not only of the spatial morphology, but of stellar physics as well. We explored in detail the predicted metallicity-dependence of the colours of the RC and RGB, of the ratio of RC to RG stars, and of the RGBB, and their effects on studies of the split RC.
This requirement to rigorously treat stellar evolution has previously been acknowledged in at least some of the literature. Both \citet{2010ApJ...721L..28N} and \citet{2010ApJ...724.1491M}, in their independent discovery papers of the split RC, noted that the RGBB would confuse measurements of the properties of the two RCs. \citet{2013ApJ...776...76P} and \citet{2013MNRAS.435.1874W} both included the RGBB as part of their parameterization in modelling the spatial distribution function of RC stars. \citet{2013A&A...555A..91V} used BaSTI \citep{2004ApJ...612..168P,2007AJ....133..468C} isochrones to estimate an 11\% cross-contamination rate between their bright and faint RC spectroscopic samples. In contrast, \citet{2011AJ....142...76S} ignored the RGBB in their parameterization. We expect that as further analysis is completed, the values of the brightness difference between the two peaks and the fraction of stars in the fainter RC suggested by \citet{2011AJ....142...76S} will be shown to be overestimated. We note that even with the precise predictions listed in Table \ref{table:PredictedLuminosityParameters} it will not be straightforward to convolve N-body models with stellar models to simulate apparent magnitude distribution functions. The first issue is that the luminosity of the RGBB has been shown to be likely overestimated by conventional stellar models by $\sim$0.20 mag, with a possible metallicity trend in the offset \citep{2010ApJ...712..527D,2011PASP..123..879T,2011A&A...527A..59C}. The second issue is that even if stellar models were perfect, it would still not be clear \textit{which} stellar models to actually use, as the age-helium-metallicity relation of the bulge is uncertain at high metallicities \citep{2013A&A...549A.147B,2012ApJ...751L..39N,2013ApJ...766...77N,2013MNRAS.428.2577B}. Finally, we comment on two sources of uncertainty not explored in this work. The first is that of contamination from foreground or background disk stars not in bar orbits but with apparent magnitude distributions overlapping those of stars in the bar. There will be disk contamination at the luminosity of the RC within any bulge photometric sample, and further, that contamination fraction could be metallicity-dependent. As the disk stars have a different spatial distribution than stars captured around bar/bulge orbits, this will lead to distortions in the distance distribution function. The second source of uncertainty lies with the shape of the distance distribution function. Each of \citet{2010ApJ...721L..28N}, \citet{2012A&A...546A..57U}, \citet{2013ApJ...776...76P} and \citet{2013MNRAS.435.1874W} investigated the split RC by assuming a Gaussian distribution for the apparent magnitudes of the two RCs. However, the top panel of Figure \ref{Fig:NBodyLF} predicts non-Gaussian distance distribution functions along the line of sight, with a negative skew for the brighter RC and a positive skew for the fainter RC. For a skewed distribution function, the mode will not correspond to the mean, which could lead to distortions when comparing data to models, or when simply fitting for the RCs in the luminosity function. The mapping of the spatial morphology of Galactic bulge stars and the extent to which the mapping depends on metallicity is a fundamental research enterprise in Galactic archeology. However, this enterprise is a challenging one, with numerous systematics potentially plaguing the way forward.
As more data (photometry, spectroscopy, proper motions, etc.) come in from surveys such as Gaia-ESO \citep{2012Msngr.147...25G}, GIBS \citep{2014arXiv1401.4878Z}, VVV \citep{2012A&A...537A.107S} and OGLE-IV \citep{2012AcA....62..219S}, we expect not only better diagnostic power, but also the need for more sophisticated accounting of stellar physics and Galactic dynamics to properly interpret these data. \section*{Acknowledgments} We thank the referee, Manuela Zoccali, for a helpful report and comments. We thank M. Ness, S. Uttenthaler, and M. Asplund for helpful discussions. DMN was supported by the Australian Research Council grant FL110100012. SC is grateful for financial support from PRIN-INAF 2011 "Multiple Populations in Globular Clusters: their role in the Galaxy assembly" (PI: E. Carretta), and from PRIN MIUR 2010-2011, project \lq{The Chemical and Dynamical Evolution of the Milky Way and Local Group Galaxies}\rq, prot. 2010LY5N2T (PI: F. Matteucci). EA acknowledges financial support from the CNES (Centre National d'Etudes Spatiales - France) and from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement number PITN-GA-2011-289313 to the DAGAL network. EA is thankful for HPC resources from GENCI-TGCC/CINES (Grants 2013 - x2013047098 and 2014 - x2014047098). This work has made use of BaSTI web tools.
\section{\label{Sec: Introduction} Introduction} Understanding the effect that the lightest and smallest atom, hydrogen (H), has on the physical properties of materials is of paramount importance. For instance, H triggers changes in the mechanical properties of metallic materials, such as a sudden and unpredictable loss of ductility and toughness, which is commonly referred to as hydrogen embrittlement~\cite{Johnson1875,Daw1983}. H also plays a key role, for instance, in modifying the electronic properties of semiconductors~\cite{Stavola1988,Walle2006}. Despite its importance, the direct imaging of H has remained extremely challenging, thereby limiting our understanding of its influence on materials. Atom probe tomography (APT) has the ability to detect all elements irrespective of their mass~\cite{Gault2021}, and provides three-dimensional compositional mapping with sub-nanometer resolution~\cite{DeGeuser2020}. This unique combination of high spatial resolution and chemical sensitivity is necessary to enable observations and quantification of hydrogen at specific microstructural features within complex materials. In recent years, these capabilities have triggered a surge of interest in the use of the technique to study hydrogen~\cite{Chen2017,Chen2020,BREEN2020108,Mouton2021}. Interestingly, it has long been known that in APT measurements, characteristic peaks at 1, 2, and 3\,Da, corresponding to H$^+$, H$_2^+$, and H$_3^+$ species respectively, are always produced under high fields at the surface of many metals. This was studied in detail by Tsong and co-workers in the 1980s~\cite{Tsong1983}, who introduced low pressures of H$_2$ inside the vacuum chamber of the atom probe. Using more modern instrument setups, similar observations have been reported for metals~\cite{Chang2019}, semiconductors~\cite{Tweddle2019, Rigutti2021} and insulators~\cite{Lu2017}, with a signal originating either from residual gases in the chamber or from H$_2$ in the specimen itself. The H-related peaks can be minimized by reducing the hydrogen content by heat treatment in vacuum~\cite{BREEN2020108}, or by modifying the surface of the specimen by oxidation or by the deposition of H-barrier thin films (e.g., TiN~\cite{doi:10.1063/1.1597376}). The study of hydrogen in materials by APT has hence often involved isotope labeling, i.e., using deuterium instead of hydrogen, in order to facilitate identification of the trapping sites for hydrogen in the microstructure~\cite{Gemma2012}, with an emphasis on steels~\cite{Takahashi2012,Chen2017}. Despite a consensus that H-related species detected by APT are unavoidable, there has been a long-standing debate regarding the origin of the detected H atoms. Gaseous H$_2$ molecules are present within the analysis vacuum chamber even at extremely low pressures and temperatures (e.g., 10$^{-14}$\,bar and 90\,K). These H$_2$ molecules can be ionized during the measurement, either under the effect of the electric field alone or, possibly, in combination with the laser pulse, and dissociate, leading to the detection of atomic H$^+$ ions. This detected H is considered noise in the analysis of the mass spectrum, having nothing to do with the actual distribution of the hydrogen within the microstructure of the sample. Kolli hypothesized, based on the relative amplitudes of the hydrogen peaks in mass spectra, that the detected hydrogen originates only from this residual hydrogen~\cite{Kolli2017}. In contrast, Breen et al.
proposed that a substantial fraction of the detected H was inside the specimen itself~\cite{BREEN2020108}, in line with observations by Chang et al.~\cite{Chang2019}. There are important differences between these two mechanisms. For instance, in the former case, H is initially in the form of gaseous H$_2$ that becomes ionized away from the specimen's surface. In the latter case, hydrogen can already be in its atomic form inside the material or chemisorbed on the surface, and must be desorbed and ionized from the specimen surface itself, potentially following surface diffusion. This uncertainty about the origin limits our ability to precisely quantify the H concentration in materials using APT. It is therefore necessary to enhance our fundamental understanding of the origin and behavior of H in APT in order to elucidate numerous open questions regarding H-related mechanisms in physics, chemistry, and materials science, including hydrogen trapping and grain-boundary segregation of H in the context of hydrogen embrittlement. From a theoretical perspective, the high reactivity of H and its strong impact on the electronic structure of materials hinder theoretical investigations with approximate methods, such as interatomic potentials. The use of first-principles calculations has enabled significant progress in the theoretical understanding of materials containing H within the last 25 years~\cite{Walle2006,Ozolins2009,Takahashi2012}. Furthermore, the state-of-the-art approach combining density-functional theory (DFT) with {\it ab initio} atomistic thermodynamics~\cite{Northrup1997,Reuter2001,Walle2002,Reuter2003} allows us to predict the environment-dependent (e.g., temperature- and pressure-dependent) binding behavior of H both on solid surfaces and in the bulk of a material, employing (periodically repeated) supercells for impurities (e.g., H) contained within a finite volume of the host material. Increasing computer power and continuous improvement of the methodology facilitate a high accuracy of the predictions. Here, we investigate the origin of the APT-measured H residuals by combining DFT calculations on a selection of metals (Na, K, Pd, and Pt) with APT experiments on pure Na, a metal with a low evaporation field, and pure Pt, a metal with a relatively high evaporation field. Across several datasets, the Na APT measurements exhibit no H-related peaks, in contrast to Pt. Thermodynamic analysis based on DFT calculations allows us to determine the temperature- and pressure-dependent stability of metal surfaces in contact with H gas under the relevant vacuum conditions. Our study sheds light on the origins of H residuals in APT measurements: the detected H originates mainly from H located at the metal surface, introduced either by contamination during specimen preparation and transfer, or during the APT measurement itself by adsorption of residual H$_2$ from the chamber onto the surface and migration towards the specimen's apex, which makes the signal highly dependent on the analysis conditions. These insights are critical to further optimize experimental workflows enabling the quantification of hydrogen in materials by APT. \section{\label{Sec: Methodology} Methodology} \subsection{\label{m_subsec1} APT specimen preparation from Na} Performing APT analysis requires a needle-shaped specimen in order to generate the intense electrostatic field necessary to initiate the field evaporation of the surface atoms.
There are challenges inherent to the sample preparation of alkali metals (e.g., Li, Na, and K), in comparison to transition metals (e.g., Pt). Alkali metals are reactive when in contact with moisture and air (i.e., oxygen), leading to severe oxidation during the sample transfer to form Na$_2$O, which is unstable and soon reacts with moisture to form NaOH. These issues have so far hindered the characterization of alkali metals by APT. Here, we used a specific setup to prepare and transfer specimens, which is described in detail in Ref.~\onlinecite{10.1371/journal.pone.0209211}. First, a Na sample ($>99$\,\%, Sigma Aldrich) submerged in kerosene oil was prepared inside an N$_2$-filled glovebox (Sylatech GmbH, Walzbachtal, Germany) to avoid oxidation [Fig.~\ref{fig1}(a)]. The Na sample was sliced into a small piece ($0.5\times0.75\times0.3$\,cm$^3$), which was attached to a flat Cu stub with adhesive carbon tape. The stub was placed in a CAMECA cryogenic APT puck (CAMECA Instruments, Madison, USA) [Fig.~\ref{fig1}(b)]. This assembly was quickly loaded into an ultrahigh vacuum (UHV) suitcase (VSN-40, Ferrovac GmbH, Zurich, Switzerland) [Fig.~\ref{fig1}(c)]. Once the pressure inside the suitcase reached $<10^{-11}$\,bar, it was detached and transported to a xenon plasma focused ion beam (FIB)/scanning electron microscope (SEM) (Helios PFIB, Thermo-Fisher, Eindhoven, Netherlands) equipped with a Ferrovac docking station [Fig.~\ref{fig1}(d)]. Alkali metals, and particularly Na, have been reported to react strongly with the Ga used in conventional FIBs~\cite{Zachman2018}, and the heating and radiation damage caused by the ion-beam milling can lead to melting of the sample~\cite{Rubanov2001}. This explains our choice of a Xe-plasma FIB to limit the ion-beam damage and the reactivity of the implanted ions, as Xe is more inert than Ga~\cite{JMayer2007,BURNETT2016119}. Moreover, a cryo-stage was used to avoid uncontrollable melting of the Na sample during the FIB process (see Fig.~S1 in the Supporting Information). Figure~\ref{fig1}(e) shows the Na sample inside the FIB chamber. The cryo-stage (Gatan C1001, Gatan Inc., Pleasanton, CA, USA) was pre-cooled to $-189$\,$^\circ$C by cold gaseous nitrogen. A wedge-shaped lamella from the Na bulk was prepared using the lift-out protocol described in Ref.~\onlinecite{THOMPSON2007131}. Clean trenches were milled on the Na surface for the lift-out process. In order to prevent the condensation of gaseous platinum deposition precursor molecules (e.g., methylcyclopentadienyl trimethyl platinum) during the lift-out process, the stage was warmed up to room temperature to weld the wedge onto the micromanipulator. Scanning electron micrographs, taken at 5\,kV and 1.6\,nA, for each of the successive steps of the preparation are shown in Fig.~\ref{fig2}(a–j). Low-electron-dose images were taken since alkali materials are also sensitive to the electron beam (e.g., Li-battery materials react with the electron beam~\cite{Yuan2017}). After the milling process, no open pores or cracks were observed in the SEM, and the back-scattered electron (BSE) image in Fig.~\ref{fig2}(j) shows a significant contrast difference among the Na, the Pt deposited during welding, and the Si micro-pillar regions. After the lift-out was performed, the APT puck with the Cu stub was moved back to an intermediate UHV storage chamber, part of the Ferrovac docking station.
Another APT puck with a commercial Si coupon (previously placed in the intermediate UHV storage chamber) was inserted in the FIB [see Fig.~\ref{fig1}(f)]. The Na lamella was welded to several Si supports using the standard Pt precursor (methylcyclopentadienyl trimethyl platinum). The cryo-stage was then cooled again to $-189$\,$^\circ$C, and the stage was tilted to 52\,$^\circ$ to be perpendicular to the ion beam column. Progressively smaller annular milling patterns were used to sharpen the Na into specimens suitable for APT (e.g., tip diameter less than 100\,nm) [see Fig.~\ref{fig2}(i)]. After the final milling, the cryo-prepared specimens were transferred from the PFIB chamber to the UHV suitcase and subsequently transferred into the CAMECA LEAP (local electrode atom probe) 5000 HR [Fig.~\ref{fig1}(g)]. To summarize, the overall process is shown in Fig.~\ref{fig1}(h). For the Pt specimen, the same protocol was followed as for Na. The only difference was that the Pt bulk was loaded through the FIB intermediate chamber, not the N$_2$ glovebox. \subsection{\label{m_subsec2} APT measurement} Atom probe data were acquired in laser-pulsing mode with a pulse energy of 70\,pJ and a pulse rate of 50\,kHz, at a 1\,\% evaporation rate maintained by adjusting the applied DC voltage. The base temperature was set throughout the measurement to either 30\,K or 90\,K. The Na and Pt specimens that were field evaporated at 90\,K base temperature are labeled as Na$_{\rm 90K}$ and Pt$_{\rm 90K}$, whereas the Na specimen at 30\,K is labeled as Na$_{\rm 30K}$. The chamber pressure was in the 10$^{-14}$\,bar range. The 3D data reconstruction, data analysis, and visualization were performed using the AP SUITE software, version 6.1. \begin{figure}[t] \center \includegraphics[width=0.7\columnwidth]{fig1.png} \caption{An illustration of the environmentally sensitive sample preparation for atom probe measurements. (a) The N$_2$ glovebox with the UHV suitcase (white dotted box). The inset shows the Cu-stub puck and the Na bulk. (b) The sliced Na is mounted on the puck. (c) The Na in the UHV suitcase. (d) The suitcase is detached and subsequently attached to the Xe-plasma FIB. The inset shows the inside of the docking chamber. (e) The Na and (f) the Si coupon in the FIB chamber. (g) After the fabrication of the APT specimens, the suitcase is attached to the APT. (h) The overall process for transferring the Na sample for the APT measurement.} \label{fig1} \end{figure} \begin{figure}[t] \center \includegraphics[width=0.9\textwidth]{fig2.png} \caption{APT specimen fabrication process: (a) the as-received Na bulk sample imaged at cryo-temperature. Trenches were milled on (b) the front and (c) the back sides of the region of interest with a width of $<2$\,$\mu$m, at 52\,$^\circ$ (perpendicular to the ion beam column). (d,e) An L-shaped horizontal cut was made at the bottom and the left side of the region of interest at 0\,$^\circ$. (f,g) The stage was heated up to room temperature and the sample was welded with FIB-Pt deposition onto a Si micro-tip. (h) After the welding, the stage was cooled down again to cryo-temperature, and (i) annular milling from the top was performed until the apex radius was below 100\,nm, with no pores. (j) BSE image of the final APT specimen. For the Xe cleaning process, a final beam voltage of 5\,kV and a current of 10\,pA were used to etch remaining residuals on the specimens.
Scale bars: (a) 100\,$\mu$m, (b)-(e) 20\,$\mu$m, (f) 100\,$\mu$m, (g)-(i) 5\,$\mu$m, (j) 2\,$\mu$m.} \label{fig2} \end{figure} \subsection{\label{m_subsec3} Computational details} All DFT calculations are performed using the Vienna Ab initio Simulation Package (VASP)~\cite{Kresse1996,Kresse1996a} with the projector augmented wave (PAW) approach~\cite{Bloechl1994}. The kinetic-energy cutoff employed for the plane-wave basis set is 500\,eV. The generalized gradient approximation (GGA) is used for the exchange-correlation functional~\cite{Perdew1996,Perdew1997,Hammer1999} (see Table~S1 in the Supporting Information for details of the approximations used). Electronic and ionic relaxations are carried out until the total energies are converged to less than 10$^{-5}$\,eV and 10$^{-4}$\,eV, respectively. A repeated slab approach is used to study the most favorable surface planes for the body-centered cubic (BCC) and face-centered cubic (FCC) metals, specifically Na(110), K(110), Pd(111), and Pt(111). The periodically repeated slabs are decoupled by adding a vacuum region and applying the dipole correction scheme~\cite{Neugebauer1992}. A detailed description of the computational setup for each metal surface calculation (e.g., vacuum thickness, supercell size, $k$-point mesh, and slab thickness) is given in Table~S1 in the Supporting Information. We employ surface cells larger than the $p({1\times1})$ cell to account for various coverages ($\Theta$) up to 1\,ML (monolayer) (e.g., from $\Theta=1/12$\,ML to 1\,ML for H on Pt(111)). Defining the coverage as the ratio between the number of adsorbate atoms and the number of metal atoms in the top surface layer, 1\,ML is reached for an equal number of adsorbate and surface metal atoms. \subsection{\label{m_subsec4} Surface thermodynamics} The binding energy of hydrogen on a metal surface, $E_{\rm b}$, with respect to a H$_2$ molecule is calculated as \begin{equation} E_{\rm b} = (E^{\rm H-surf}_{\rm tot} - E^{\rm clean-surf}_{\rm tot} - \frac{1}{2}\cdot N_{\rm H}\cdot E^{\rm H_2}_{\rm tot})/N_{\rm H} \quad , \label{eq1} \end{equation} where $E^{\rm H-surf}_{\rm tot}$, $E^{\rm clean-surf}_{\rm tot}$, and $E^{\rm H_2}_{\rm tot}$ are the DFT-calculated total energies of a metal surface with and without (i.e., of a clean metal surface) adsorbed H and of the H$_2$ molecule, respectively. $N_{\rm H}$ is the number of hydrogen atoms adsorbed on the surface. To account for the stability of the metal surface in a H atmosphere as a function of temperature and pressure, the change in the Gibbs free energy of each surface phase with respect to the H-free metal surface is calculated as \begin{equation} \Delta G^{\alpha} = [E^{\rm H-surf}_{\rm tot} - E^{\rm clean-surf}_{\rm tot} - N_{\rm H}\cdot\mu_{\rm H}(T, p) - T\cdot S_{\rm conf}]/A \quad , \label{eq2} \end{equation} where $\mu_{\rm H}(T, p)$ is the chemical potential of hydrogen, which is a function of temperature ($T$) and pressure ($p$), $A$ is the area of the surface cell, and $S_{\rm conf}$ is the configurational entropy of the surface atoms. The latter is approximated by $-k_{\rm B}\cdot [\Theta\cdot\ln\Theta + (1-\Theta)\cdot\ln(1-\Theta)]$, where $k_{\rm B}$ and $\Theta$ are the Boltzmann constant and the coverage of the adsorbates, respectively.
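To make Eqs.~(\ref{eq1}) and (\ref{eq2}) concrete, a minimal Python sketch of both quantities is given below. All numerical inputs are hypothetical placeholders for illustration, not DFT results from this work; the chemical potential $\mu_{\rm H}(T,p)$ must be supplied from Eq.~(\ref{eq3}), introduced next.

\begin{verbatim}
import numpy as np

KB = 8.617333e-5  # Boltzmann constant in eV/K

def binding_energy(e_h_surf, e_clean_surf, e_h2, n_h):
    # H binding energy per adatom, Eq. (1); negative values mean
    # adsorption is favorable with respect to molecular H2
    return (e_h_surf - e_clean_surf - 0.5 * n_h * e_h2) / n_h

def delta_g(e_h_surf, e_clean_surf, n_h, mu_h, theta, t, area):
    # Gibbs free-energy change of a H-covered surface phase relative
    # to the clean surface, Eq. (2), in eV per unit area;
    # mu_h is the H chemical potential from Eq. (3)
    if 0.0 < theta < 1.0:  # ideal-mixing configurational entropy
        s_conf = -KB * (theta * np.log(theta)
                        + (1.0 - theta) * np.log(1.0 - theta))
    else:
        s_conf = 0.0
    return (e_h_surf - e_clean_surf - n_h * mu_h - t * s_conf) / area

# hypothetical total energies (eV), weakly binding (Na-like) case:
print(binding_energy(e_h_surf=-210.55, e_clean_surf=-207.20,
                     e_h2=-6.77, n_h=1))  # ~ +0.04 eV/atom
\end{verbatim}

A positive value of $E_{\rm b}$, as in this toy example, signals that recombination into gaseous H$_2$ is energetically preferred over adsorption.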
The H chemical potential [$\mu_{\rm H}(T, p)$] is evaluated as follows~\cite{Northrup1997,Reuter2001,Rogal2007}, \begin{equation} \label{eq3} \begin{split} \mu_{\rm H}(T, p) & = \frac{1}{2}E^{\rm H_2}_{\rm tot} + \frac{1}{2}E^{\rm H_2}_{\rm ZPE} + \Delta \mu_{\rm H}(T, p) \\ \Delta \mu_{\rm H}(T, p) & = \frac{1}{2}[H_{\rm H_2}(T, p^0) - H_{\rm H_2}(0\,{\rm K}, p^0)] - \frac{1}{2}T[S_{\rm H_2}(T, p^0) - S_{\rm H_2}(0\,{\rm K}, p^0)] \\ & + k_{\rm B}T\ln{\frac{p}{p^0}} \quad ,\\ \end{split} \end{equation} where $E^{\rm H_2}_{\rm tot}$ and $E^{\rm H_2}_{\rm ZPE}$ are the total energy at $T=0$\,K and the zero-point energy of a hydrogen molecule, respectively. Our calculated value for $E^{\rm H_2}_{\rm ZPE}$ is 0.273\,eV. $\Delta \mu_{\rm H}(T, p)$ contains the temperature- and pressure-dependent free energy contributions. Assuming that H$_2$ behaves like an ideal gas, the temperature dependence at standard pressure (i.e., $p^0 = 1$\,atm) is evaluated using tabulated values for the enthalpy ($H$) and entropy ($S$) at finite temperature~\cite{JANAF} and the relationship $G = H - TS$. \section{\label{Sec: Results} Results} \subsection{\label{r_subsec1} Atom probe results} Figures~\ref{fig3}(a) and (b) show the 3D atom maps and corresponding mass spectra acquired from Na$_{\rm 90K}$ and Pt$_{\rm 90K}$, respectively. All samples were measured in laser pulsing mode (background levels $<10$\,ppm/nsec). While the Pt measurement was smooth, there were several micro-fractures during the Na measurements. We also tried high-voltage pulsing for both materials; however, the level of background signal was higher than would be acceptable ($>1000$\,ppm/nsec), and the Na specimens failed after the collection of fewer than 52,000 ions at a pulse fraction of 10\,\%. The sizes of the acquired datasets for the Na$_{\rm 90K}$ and Na$_{\rm 30K}$ APT measurements were $>10$\,M ions. In the acquired mass spectrum of Na$_{\rm 90K}$, strong peaks appear at 23, 62, and 63\,Da, corresponding to Na, Na$_2$O, and Na$_2$OH, respectively. These peaks, associated with residual -O and -OH, are frequently observed in experiments following cryo-UHV transfer and can also be associated with a low level of frosting on the specimens following preparation~\cite{khanchandani2021laserequipped,10.1371/journal.pone.0209211}. The composition of the whole Na$_{\rm 90K}$ sample is 99.006\,\% Na, 0.987\,\% O, and 0.007\,\% H following peak decomposition~\cite{London2019}. Figures~\ref{fig3}(c), (d), and (e) display the section of the mass spectrum around the hydrogen peaks for the analyses of Pt$_{\rm 90K}$, Na$_{\rm 30K}$, and Na$_{\rm 90K}$, with each dataset containing ${2.5\times10^6}$ identified ions. No peak pertaining to H species, at 1, 2, or 3\,Da, is visible above the level of background in the mass spectrum from the analysis of Na$_{\rm 90K}$ plotted in Fig.~\ref{fig3}(e), which corresponds to the lowest electric-field conditions across the data reported herein. For the Na$_{\rm 30K}$ analysis, a small peak at 1\,Da is visible, whereas for Pt, strong peaks at 1 and 2\,Da are clearly resolved. \begin{figure}[t] \center \includegraphics[width=0.6\columnwidth]{fig3.png} \caption{3D atom maps and corresponding mass spectra of the (a) Na$_{\rm 90K}$ and (b) Pt$_{\rm 90K}$ samples [scale bars are (a) 50 and (b) 20\,nm]. For the mass spectrum analyses, 2.5\,M ions were extracted from each atom map.
Local regions of the mass spectra of (c) Pt$_{\rm 90K}$, (d) Na$_{\rm 30K}$, and (e) Na$_{\rm 90K}$.} \label{fig3} \end{figure} \subsection{\label{r_subsec2} Surface thermodynamics and electronic structures} \subsubsection{H Binding on metal surfaces} To elucidate the surface stability of metals in a H-containing atmosphere, we first identify the favorable chemisorption sites for H on the respective metal surfaces. Based on literature reports for FCC metals [e.g., Pd(111)~\cite{Paul1996,Lovvik1998} and Pt(111)~\cite{Yan2018}], we consider in the following the FCC-hollow sites (i.e., a triply coordinated binding site, on which a next-layer metal atom in the continuation of an FCC-stacking sequence would be found) for Pd and Pt. For Na(110) and K(110) we tested various binding sites, specifically top, long-bridge, short-bridge, and hollow sites, because of a lack of previous studies. Increasing the surface coverage of H from 0.11\,ML to 1\,ML, we consistently find that H prefers to bind to the hollow sites. Having identified the most favorable adsorption sites, we calculate the H binding energy ($E_{\rm b}$) on each of the metal surfaces using Eq.~\ref{eq1}. The result is shown in Fig.~\ref{fig4}(a) for all considered coverages. The H-binding energies $E_{\rm b}$ on the surfaces of the FCC metals (Pd and Pt) are roughly half an eV more negative (i.e., more stable) than those on the Na and K surfaces. This clearly indicates a much stronger binding on Pd(111) and Pt(111), which we find to be on the order of $-0.6$ to $-0.5$\,eV/atom over the whole range of considered H coverages up to 1\,ML, in agreement with previous theoretical studies~\cite{Roudgar2003,HANH2014104}. In contrast, the binding energies on the Na(110) and K(110) surfaces are substantially weaker, in the range of $+0.1$ to $-0.2$\,eV. The positive binding energy calculated for the lowest coverage ($\Theta = 0.11$\,ML) on the Na(110) surface indicates that H adsorption is thermodynamically unstable, as forming a H$_2$ molecule is an exothermic reaction. With increasing coverage, the H binding energy decreases. This is related to the propensity for alkali hydride formation (i.e., NaH and KH) at the surface, as argued in a previous study~\cite{HJELMBERG1979539} regarding the phase transition from bulk Na to NaH upon exposure to large amounts of H$_2$ gas. In fact, we actually observe the formation of a surface hydride [i.e., rocksalt NaH(100)] on the Na(110) surface at 1\,ML coverage in our DFT calculation. There has been a clear consensus about the high solubility of H in Pt and Pd bulk since the early 1930s~\cite{Lacher1937}, which implies that diffusion of H atoms from the surface into the bulk region is likely. This suggests that H is simultaneously present both at the surface and in the bulk. The lack of studies for alkali metals regarding this point prompted us to calculate the binding energy of a H interstitial defect in the sub-surface region of Na. We considered different coverages for H in a tetrahedral site of the 1st, 2nd, up to the 5th sub-surface layer of the Na(110) surface. As shown in Fig.~S2 in the Supporting Information, all the binding energies at sub-surface sites are larger than the surface binding energies at the corresponding coverage, which indicates an endothermic binding reaction compared to the surface binding. Furthermore, the further an interstitial H atom is away from the surface, the less favorable its binding energy becomes.
In fact, binding energies in deeper layers are close to the formation energy of a H bulk interstitial defect (in a tetrahedral site) as calculated using a Na $p({4\times4\times4})$ bulk supercell [i.e., $E_{\rm f}({\rm H}) = 0.13$\,eV/atom]. This trend consistently demonstrates that H migration from the surface to the sub-surface region and subsequently to the bulk is unlikely in Na metal, in contrast to the Pd and Pt systems. To better understand why H binds less strongly on alkali surfaces compared to transition metal surfaces, we analyze the electronic structure, selecting the 0.08\,ML H-Pt(111) and 0.11\,ML H-Na(110) surfaces as representative cases. Their corresponding binding energies are $-0.57$ and 0.05\,eV/H atom, respectively, and their density-of-states (DOS) are shown in the upper and the bottom panels of Fig.~\ref{fig4}(b). The total DOS (grey region), the atom-resolved DOS (solid colored line for metal atoms and dashed colored line for the H adatom), and the DOS of a single H atom in a box (black solid line) are aligned with respect to the vacuum level, which is set to zero. For Pt (the upper panel) we observe that the localized H 1$s$ states in vacuum (black solid line), seen near $-7$\,eV, largely overlap with the DOS of the Pt atoms (red solid line). This leads to a strong hybridization between the H $s$ and Pt $d$ states, resulting in H $s$ bonding states at lower energy (see the peak in the red dashed line near $-14$\,eV). Such a strong $s$-$d$ hybridization is also confirmed by the analysis of the electron density differences shown in Fig.~S3(c) and (d) in the Supporting Information. A strong bond between H and Pt surface atoms is in agreement with the conventional model explaining bonding between H and transition metal surfaces, the so-called $d$-band model~\cite{Norskov2011}. In the case of the Na system [the bottom panel of Fig.~\ref{fig4}(b)], the $s$ states of the isolated H atom in vacuum are located at the tail of the Na $s$ states, leading to only a relatively small overlap. This hinders an $s$-$s$ hybridization, i.e., there is hardly any change in the DOS of H$_{\rm ad}$ (blue dashed line). Instead, a charge transfer from the Na surface atoms towards the H atom occurs, resulting in a negatively charged H adsorbate atom, as confirmed by the electron density difference [Fig.~S3(a) and (b) in the Supporting Information]. The clear difference in the bonding nature of H on the Na and Pt surfaces explains the difference in H binding energies. \begin{figure}[t] \center \includegraphics[width=0.8\columnwidth]{fig4.png} \caption{ (a) Binding energies, $E_{\rm b}$, of H adsorbates on the four metal surfaces for several adsorbate coverages up to 1\,ML. Each colored solid line corresponds to a different metal (i.e., red - Na, green - K, blue - Pd, and orange - Pt). (b) The density-of-states (DOS) of the (upper panel) 0.08\,ML H-Pt(111) and (bottom panel) 0.11\,ML H-Na(110) surfaces, where the total, metal atom-resolved, and H adatom-resolved DOS are indicated by a grey region, a solid colored line, and a dashed colored line, respectively. The DOS of a single H atom in a box is shown as a black solid line. The energy is aligned to the vacuum level, which is set to zero. The position of the Fermi level $E_{\rm F}$ is indicated by a vertical black dashed line.
} \label{fig4} \end{figure} \subsubsection{Phase diagrams for H-metal surfaces} Having understood the binding behavior of H on these metal surfaces, we can now utilize our calculations to construct phase diagrams and account for the impact of temperature and pressure on the surfaces' stability. For this we need to evaluate the Gibbs free energy change, $\Delta G$, of the H-covered metal surfaces with respect to the clean surface, calculated as a function of the chemical potential of H based on Eq.~\ref{eq2}. As shown in Fig.~S4 in the Supporting Information, $\Delta G$ of the H-covered surfaces becomes lower than that of the clean surface for all metals as the H chemical potential increases. This indicates that H-covered surfaces with higher H coverage become energetically more favorable at higher H chemical potentials. Based on Eq.~\ref{eq3} we can explicitly evaluate the H chemical potential for any given set of temperature and pressure, which allows us to construct phase diagrams showing the thermodynamically most favorable surface phases at the given conditions, depicted as colored regions in Fig.~S5 in the Supporting Information. To simplify the phase diagrams, only the phase boundaries between the H-free clean surface and the first H-covered surface that becomes stable are shown in Fig.~\ref{fig5}(a) for each metal. The regions to the right and left of each solid line correspond to the stable regions of the H-free and the H-covered surface, respectively. Similar to the binding energies, the four solid lines can be split into two groups (i.e., alkali metals and FCC transition metals). Based on this phase diagram, we expect that the Pt and Pd surfaces will be easily contaminated by H adsorbates even at moderate conditions (e.g., $T = 300$\,K and $p = 1$\,bar). The clean surface becomes favorable only at extreme conditions [e.g., $p < 10^{-9}$\,bar at 300\,K for Pt(111)]. The situation is different for the alkali metals, where a comparatively higher resistance to surface contamination with H is observed. For example, at $p = 10^{-1}$\,bar and $T = 300$\,K, the H-free clean surfaces are thermodynamically more favorable than the H-covered surfaces. To connect the theoretical phase diagrams to the experiments, the experimental conditions are shown: colored crosses specify the conditions at which a metal tip is prepared (brown), at which the prepared tips are transferred to the APT equipment (cyan), and at which the APT measurement is performed in a vacuum chamber (pink). The effect of the electric field present during the operation of the APT analysis is also taken into account: the dashed colored lines are the phase boundaries for the Na and Pt surfaces, which shift in the presence of the electric field due to the resulting changes in the dipole moments of the surface phases. Based on the constructed phase diagrams, we rationalize the detection of H atoms in the APT-measured mass spectra in the following section. \section{\label{Sec: Discussions} Discussions} \subsection{Origin of the detected hydrogen} Our APT analyses of Na at 30 and 90\,K show no measurable amount of H, potentially contradicting the common view regarding the origin of the background H from ionization of residual gases. This contrasts with the results for Pt, for which substantial amounts of H and H$_2$ are detected. In this section we discuss possible scenarios for H contamination of metals and reflect on the long-standing debate on the origin of H species detected in APT-measured mass spectra [e.g., Fig.~\ref{fig3}(c)].
It is commonly accepted that residual H$_2$ molecules are still present in a vacuum chamber even at extremely low pressure and temperature. This is in part due to the relatively high content of hydrogen within the stainless steel most vacuum chambers are made of. Residual H$_2$ molecules can then ionize and dissociate due to the intense electric field and/or the laser pulse during APT operation. This leads to the detection of H species without any direct interaction with the specimen, simply associated with typical field ionization as encountered in field-ion microscopy, for instance. The ionization potential of H$_2$ is 15.4\,eV, and it has been used as an imaging gas, in particular for silicon~\cite{Melmed1975,Koelling2013a}. H species present inside the specimen are also detected during APT measurements. Even though the presence of H atoms in a sample can be undesired, contamination may occur during the specimen preparation and transport, or during the measurement itself. The importance of controlling the temperature during specimen preparation was recently pointed out for several alloy systems~\cite{Chang2019, Lilensten2020}, and in particular for materials systems that are known hydride formers~\cite{Mouton2021}. Breen et al. demonstrated the strong ingress of hydrogen arising from specimen preparation by electrochemical polishing~\cite{BREEN2020108}. Here, we prepared specimens either at 90\,K or at 300\,K, but at a fixed pressure of approx. 10$^{-9}$\,bar inside the focused ion beam. The specimens were then transported into the atom probe by using an ultra-high vacuum transfer suitcase, maintained at approx. 300\,K and 10$^{-11}$\,bar. According to our phase diagram [Fig.~\ref{fig5}(a)], neither the Na nor the Pt surface will become covered by H during this process, because the respective H-free clean surfaces are thermodynamically favored. However, Pt samples can become contaminated by H atoms when the specimen is prepared in the vacuum chamber, especially at the conditions of 90\,K and 10$^{-9}$\,bar, while this is unlikely for a Na surface under similar conditions. Furthermore, given the high affinity between H and Pt discussed in the previous section, we cannot neglect the possibility of H diffusion from a H-contaminated surface of a Pt sample into either its sub-surface or bulk region. This mechanism was pointed out to be responsible for the large ingress of H in titanium specimens during specimen preparation~\cite{Chang2019} under similar preparation conditions. Therefore, there is a high likelihood that the Pt specimen has already absorbed H atoms even before the APT measurement commences, which can be excluded in the case of Na. An alternative scenario, in which the surfaces of samples are contaminated during the actual APT measurement, is also conceivable, assuming that the metallic specimen is initially devoid of H. The detected H-related signal can then originate from the binding of H to the specimen's surface during the APT measurement as a consequence of interactions between the H$_2$ gas and the surface metal atoms. First, we note that from a thermodynamic perspective, H-covered Pt surfaces are more stable than the clean surface at the conditions of our APT experiments (i.e., $10^{-14}$\,bar and 30$\backsim$90\,K). This is shown in Fig.~\ref{fig5}(a) and suggests a high possibility of contamination. However, the Na phase boundary [red line in Fig.~\ref{fig5}(a)] intersects the region of the specified APT conditions.
This means that the clean Na surface is stable against H chemisorption even at the relatively higher temperature, e.g., 90\,K, whereas H-covered surfaces are expected to form at the lower temperature, e.g., 30\,K. This is supported qualitatively by the APT measurements reported in Fig.~\ref{fig3}. \begin{figure}[t] \center \includegraphics[width=0.9\columnwidth]{fig5.png} \caption{ (a) Surface phase diagram for metal surfaces in equilibrium with a surrounding H$_2$ gas. Each colored solid line indicates the phase boundary between the clean surface (the region to the right of the line, indicated by color-dashed arrows) and the H-covered surface (the region to the left, as indicated by grey arrows) for each metal. The corresponding phase boundaries for the Na and Pt surfaces, shifted as a consequence of the electric field and the modified dipole moments of the surface phases (see Sec.~\ref{Sec: Discussions} for details), are shown as colored dashed lines. Pink, brown, and cyan crosses show the experimental conditions for the APT measurements, tip preparation, and tip transportation, respectively. The schematic picture (b) illustrates the environment of the APT equipment and zoomed-in surface regions of a tip in contact with H$_2$ gas (c) before and (d) after evaporation of surface metal layers during an APT measurement, and (e) the H$_2$ molecule and H atom concentration profiles corresponding to each geometric region in (d). The blue solid line indicates the initial H concentration. Red and dark orange lines illustrate the H concentration at thermodynamic equilibrium for Na and Pt, respectively. (f) This schematic illustrates the origin of the H residuals in the APT measurement. Black-filled circles and the green shape represent H atoms and the metal specimen, respectively. The arrows conceptually show the corresponding diffusion paths for H diffusing from the bulk (red), H surface-diffusing towards the specimen's apex (blue), and H desorbed from the apex (purple). The relatively slow diffusion of H$_2$ molecules in the UHV is illustrated by dashed red arrows. } \label{fig5} \end{figure} \subsection{Surface contamination during analysis} During an APT measurement, the surface atoms are progressively field evaporated from the specimen's surface, fly through the ultra-high vacuum, and are collected by the particle detector, as illustrated in Fig.~\ref{fig5}(b). Therefore, it is necessary to also consider kinetic scenarios, which can be involved in surface contamination by H. The field evaporation of the specimen's outermost surface layer, as illustrated in Fig.~\ref{fig5}(c), exposes a new surface layer that was previously in the sub-surface region, as depicted in Fig.~\ref{fig5}(d). This means that the space where the original surface layer was found has now become empty. Therefore, there is not only a lack of metal atoms in this region, but it is also void of H$_2$ molecules. The latter (i.e., H$_2$ molecules) have to diffuse from the vacuum through this space towards the surface. The fundamental force driving this diffusion (i.e., of H$_2$ from the vacuum towards the surface) is the gradient of the H concentration (i.e., $c_{\rm H}$), as the system strives to achieve thermodynamic equilibrium. Consequently, the diffusive flux ($J$) of H$_2$ can be modeled by a linear law (Fick's first law) as $J = - D\cdot\nabla c_{\rm H}$, where $D$ is the diffusion coefficient and $\nabla c_{\rm H}$ is the H concentration gradient. This process is schematically illustrated in Fig.~\ref{fig5}(e).
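As a minimal numerical illustration of this flux law, the Python sketch below evaluates $J$ for a step-like depletion profile; the diffusion coefficient and concentrations are order-of-magnitude guesses, not measured values.

\begin{verbatim}
import numpy as np

def fick_flux(d_coeff, c, dx):
    # Fick's first law on a 1D grid: J = -D * grad(c)
    return -d_coeff * np.gradient(c, dx)

# toy profile: H2 depleted within 0.2 um of the freshly exposed surface
x = np.linspace(0.0, 1e-6, 101)    # distance from the apex (m)
c = np.where(x < 2e-7, 0.0, 1e12)  # H2 concentration (molecules/m^3)
j = fick_flux(1e-2, c, x[1] - x[0])
print(f"peak flux magnitude: {np.abs(j).max():.2e} molecules/(m^2 s)")
\end{verbatim}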
The corresponding H concentration is denoted as $c_{\rm H [{\alpha}]}$, where $\alpha$ denotes the source of H, which in the presently discussed case depends on the geometric position, e.g., H$_2$ in vacuum, H$_2$ in the vicinity of the surface, H adsorbed at the surface, or H in the bulk of the material. The initial H concentration is indicated by the blue solid line in Fig.~\ref{fig5}(e), in which H species are present only in the region far away from the surface, again assuming that the metallic specimen does not initially contain any H atoms. The processes related to the formation of H-contaminated surfaces can be decomposed into four steps: \begin{itemize} \item (i) H$_2$ molecules diffuse from the vacuum to the region near the surface due to the concentration gradient between $c^{\rm initial}_{\rm H[H_{2}\,gas]}$ and $c^{\rm initial}_{\rm H[H_{2}@surf]}$; \item (ii) the diffused H$_2$ molecules near the surface dissociate into two H atoms by overcoming an energy barrier; \item (iii) a H-contaminated surface forms if H$_2$ molecules dissociate and the resulting H atoms are chemisorbed on the metal surface; \item (iv) H atoms migrate from the surface to the bulk region if the H surface chemical potential at the given surface concentration is above the bulk chemical potential at the given bulk concentration. \end{itemize} All steps from (i) to (iv) are material-dependent processes, because all concentrations, except for that of the H$_2$ gas in the vacuum far away from the surface (i.e., $c_{\rm H[H_{2}\,gas]}$), are related to the material itself. For example, the concentrations at thermodynamic equilibrium conditions are qualitatively depicted in Fig.~\ref{fig5}(e) for Na with a solid red line and for Pt with a solid dark orange line. Once thermodynamic equilibrium is achieved, $c^{\rm eq}_{\rm H[H@Pt\,surf]}$ would be substantially larger than $c^{\rm eq}_{\rm H[H@Na\,surf]}$, since the stability of H-contaminated surfaces is higher for Pt than for Na. This implies a much more active exchange of H species for Pt. Furthermore, H migration to the bulk region is not expected for Na surfaces but it is for Pt, so that $c_{\rm H[H@Pt\,bulk]} \gg c_{\rm H[H@Na\,bulk]}$. At the conditions corresponding to the phase boundaries, shown as solid colored lines in the phase diagram [Fig.~\ref{fig5}(a)], surfaces in contact with H$_2$ molecules and H-contaminated surfaces are in equilibrium and would coexist. However, as a fresh metal surface becomes exposed following field evaporation of the surface layer(s), $c_{\rm H[H@Na\,surf]}$ will be much smaller than $c_{\rm H[H_{2}@Na\,surf]}$, which precludes an exchange of H species between the surface and the environment. In other words, under the conditions of an APT measurement, denoted by the pink crosses in Fig.~\ref{fig5}(a), the chances of forming a H-contaminated surface will be much higher for Pt surfaces than for Na surfaces. \subsection{Influence of the electric field} Last but not least, since APT measurements are carried out in the presence of large electric fields (e.g., 1$\backsim$4\,V/{\AA}), the impact of such a large electric field on the surface stability has to be considered. The electric field is known to have a substantial influence on the distribution of detected hydrogen~\cite{Tsong1983,Chi-fongAi1984, Mouton2019}. The content of H, presumably originating from the residual gas in the chamber, is expected to increase as the electric field decreases~\cite{Sundell2013}.
In contrast, Andren and Rolander pointed to an influence of the electric field on the hydrogen adsorption behavior and hence suggested a change in the detected H level as a function of the base temperature~\cite{Andren1992}. For relatively low fields, hydrogen adsorbed on the surface is field evaporated alongside one of the host-metal atoms, forming, as reported for Al for instance, AlH, AlH$_2$, or AlH$_3$~\cite{Nishikawa1983a}. The bonding of the metal atoms with gaseous species has been reported to favor field evaporation at lower electric fields~\cite{Muller1965}, and their detection should hence not be expected here. The evaporation field of Na is estimated, using the classical image-hump model for field evaporation, to be in the range of 1.1\,V$\cdot${\AA}$^{-1}$~\cite{Tsong1978a}. No experimental value has ever been reported. There are theoretical estimates of the evaporation field for Na adsorbates on Al and W that are also typically in the range of 0.6$\backsim$0.8\,V$\cdot${\AA}$^{-1}$~\cite{Kahn1976,Neugebauer1993}. Under such low fields, peaks typically appear at 1, 2, and 3\,Da in most analyses of metallic samples. These peaks are not observed here (cf. Fig.~\ref{fig3}). Although well-established approaches exist to explicitly include the electric field in DFT surface calculations~\cite{PhysRevLett.124.176801}, they come with large computational costs. Therefore, as a first approximation, we account for the thermodynamic effect of the electric field only through the free energy contribution $\Delta U$ due to the presence of the field, using the equation $\Delta U^{\alpha} = -\Delta\mu_{\rm dipole}^{\alpha}\mathcal{E}/A$. Here $\Delta \mu_{\rm dipole}^{\alpha}$ is the dipole-moment difference of a surface phase $\alpha$ with respect to the H-free surface, $\mathcal{E}$ is the electric field applied during the APT measurement, in the range 0.5$\backsim$5\,V$\cdot${\AA}$^{-1}$, and $A$ is the surface area of the slab models used. By adding $\Delta U^{\alpha}$ to the Gibbs free energy changes calculated from Eq.~\ref{eq2}, we can evaluate the field-dependent phase-boundary positions for Na and Pt, assuming fields of 2 and 4\,V$\cdot${\AA}$^{-1}$, respectively, shown as dashed colored lines in Fig.~\ref{fig5}(a). For both Na and Pt, the change in the surface dipole (compared to the pristine surface) due to the presence of bound H atoms at the surface shifts $\Delta G$ by up to 10\,meV/{\AA}$^2$ for the highest H coverage. This results in a destabilization of the H-covered surfaces. Therefore, the phase boundaries for the Na and Pt cases, shown as red and orange dashed lines in Fig.~\ref{fig5}(a), are shifted towards the left, i.e., increasing the region in which the respective clean surface is stable. A comparison between the shifted phase boundaries and the actual conditions of the APT measurement consistently implies, from a thermodynamic standpoint, a low probability of H contamination of Na samples, even in the presence of a large electric field, in agreement with the experimental results. To account for the kinetic effect of the electric field, we note that Brandon and Southon~\cite{Brandon1968,Southon1968} independently connected gas kinetic theory with high electric fields, allowing us to approximate the time for the adsorption of field-ion imaging gas molecules (i.e., H$_2$ molecules) on the electrically charged surface of the APT specimen.
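Before turning to that kinetic estimate, the field correction just described can be condensed into a few lines of Python; the dipole-moment change, field strength, and cell area below are hypothetical placeholders rather than values computed in this work.

\begin{verbatim}
def field_shift(delta_g, d_mu_dipole, e_field, area):
    # Delta U = -Delta mu_dipole * E / A, added on top of Eq. (2);
    # d_mu_dipole in e*Angstrom, e_field in V/Angstrom, area in Angstrom^2
    return delta_g - d_mu_dipole * e_field / area

dg0 = -0.020  # zero-field Delta G of a H-covered phase (eV/Angstrom^2)
# a negative dipole change destabilizes the H-covered phase, enlarging
# the stability region of the clean surface as in Fig. 5(a)
dgf = field_shift(dg0, d_mu_dipole=-0.3, e_field=2.0, area=60.0)
print(dg0, dgf)  # -0.020 -> -0.010
\end{verbatim}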
The total adsorption of gas per unit area and time, $\Phi$ ($\frac{\rm H{_2}\,molecules}{\rm m^2\cdot sec}$), can be obtained by combining the classical gas-kinetics arrival rate (see Eq.~\ref{kinetic_eq2}), the electric-field enhancement factor (see Eq.~\ref{kinetic_eq3}), and the probability, $P$, that a gas molecule is ionized on its way to the tip: \begin{equation} \Phi = \Phi_{0}\zeta(1-P) \quad , \label{kinetic_eq1} \end{equation} where $\Phi_{0}$ is the gas-kinetic arrival rate. We set $P$ to zero to maximize the net adsorption rate of H. The flux of H that arrives at the surface of the nanosized Na tip in the absence of an electric field ($\mathcal{E}$) is given by \begin{equation} \Phi_{0} = \frac{p}{\sqrt{2\pi Mk_{\rm B}T}} \quad , \label{kinetic_eq2} \end{equation} where $p$ is the partial pressure of H$_2$, $M$ is the molecular mass of H$_2$ [2 atomic mass units (amu)], and $T$ is the base temperature of the specimen (i.e., 30\,K). Since H is the predominant residual gas in metal vacuum chambers at low pressures, we here take $p$ equal to the analysis-chamber pressure during the APT measurement, i.e., ${3.43\times10^{-14}}$\,bar. When an electric field is applied to the Na tip, the H supply is enhanced by a factor $\zeta$: \begin{equation} \zeta = \frac{\mu^{\rm H_2}_{\rm dipole}\mathcal{E}}{k_{\rm B}T} + \sqrt{\frac{\pi\alpha\mathcal{E}^2}{2k_{\rm B}T}} {\rm erf}\left[\left( \frac{\alpha\mathcal{E}^2}{2k_{\rm B}T}\right)^{1/2}\right] \quad , \label{kinetic_eq3} \end{equation} where $\mu^{\rm H_2}_{\rm dipole}$ is the permanent dipole moment of a H$_2$ gas molecule and $\alpha$ is its polarizability. Assuming that $\mu^{\rm H_2}_{\rm dipole}$ is zero and that the error function evaluates to 1, the expression simplifies to Eq.~\ref{kinetic_eq4}: \begin{equation} \zeta \approx \sqrt{\frac{\pi\alpha\mathcal{E}^{2}}{2k_{\rm B}T}} \quad . \label{kinetic_eq4} \end{equation} Using the experimental polarizability $\alpha_{\rm H_2} = 5.314$ [atomic units (au)]~\cite{OLNEY199759} and assuming an electric field of $\mathcal{E} = 2$\,V/{\AA} for Na yields a total supply of gas of $\Phi_{0} \approx 48$\,$\frac{\rm H{_2}\,molecules}{\rm m^2\cdot sec}$ and $\zeta = 11.5$, which results in $\Phi = 549$\,$\frac{\rm H{_2}\,molecules}{\rm m^2\cdot sec}$ striking the Na tip. Assuming that the apex area of an APT specimen is ${\pi\cdot(100\times100)}$\,nm$^2$, the approximate time to achieve a monolayer coverage of H$_2$ molecules on the Na tip is over 1800\,years. This implies that the hydrogen detected over the course of an experiment is mostly supplied by chemisorbed or physisorbed hydrogen on the specimen's shank that surface-diffuses towards the apex, and has little to do with the direct chemisorption or ionization of hydrogen at the apex itself. The field-free adsorption properties of the surface are hence likely the most critical parameter to consider. \subsection{Summary} Our discussions consistently suggest that the possible interaction between the residual H$_2$ gas in the UHV chamber and an APT specimen is both thermodynamically and kinetically almost negligible, which agrees with previous reports~\cite{doi:10.1021/nn305029b,doi:10.1017/S143192761500032X}.
As illustrated in Fig.~\ref{fig5}(f), this supports the other possible origin of the H$_2$ detection: out-gassing and diffusion of H from the specimen itself [red solid arrows in Fig.~\ref{fig5}(f)], with H introduced during the preparation, and/or from the specimen holders (i.e., the Cu holder and Si coupon)~\cite{BREEN2020108,doi:10.1063/1.1597372} that were exposed to air before going into the UHV system. Subsequent surface diffusion, indicated by blue arrows in Fig.~\ref{fig5}(f), leads to a migration of the H at the surface towards the APT specimen apex, where it is finally desorbed and ionized, as shown by the purple arrow in Fig.~\ref{fig5}(f)~\cite{ANTCZAK200739}. Both processes (i.e., ad-/absorption) strongly depend on the material's properties. \section{\label{Sec: Conclusions} Conclusions} In conclusion, we studied the origin of residual hydrogen in APT experiments by combining APT analyses of Na and Pt metals with DFT calculations regarding the thermodynamic stability of H-covered metal surfaces (Na, K, Pt, and Pd). In our APT measurements, large peaks for H residuals are observed for Pt, as commonly seen in most APT experiments, whereas no or negligible amounts of residual H are measured for Na. The surface phase diagrams of H-exposed metal surfaces constructed based on DFT calculations indicate that H contamination can easily occur for Pt and Pd surfaces, but not for Na and K surfaces, at least within the conditions of our experiments. This combined result provides the insight that residual hydrogen in APT measurements mostly originates from H contamination of the materials during specimen preparation and transport, and not from the background H$_2$ gas alone. The design of novel instrumentation remains very important. However, careful specimen preparation and transport, involving cryogenic workflows and vacuum transfer, seem to be lower-hanging fruit that can lead to substantial improvements in data quality. Possible coatings of specimens and holders with materials on which hydrogen adsorption is not favorable should also be explored in the future. \section{acknowledgments} Funding by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) within the framework of SFB1394, project number 409476157, is gratefully acknowledged. S.-H.K. and B.G. acknowledge financial support from the ERC-CoG-SHINE771602. S.-H.K. and B.G. also acknowledge Uwe Tezins, Christian Broß, and Andreas Sturm for their support of the FIB and APT facilities at MPIE. S.-H.K. and B.G. are grateful to the Max-Planck Society and the BMBF for the funding of the Laplace and the UGSLIT projects, respectively, for both instrumentation and personnel. \section{Data availability statement} All DFT-calculated data used in this work are available in the Pyiron repository, and access can be provided upon request. \section{Supporting Information} Supporting Information is available. \begin{table*}[!htbp] \caption{Computational details for the metal slab calculations. The exchange-correlation functional used, the size of the supercell, the $k$-point mesh, the number of valence electrons for the metal atom, the vacuum thickness, the slab thickness and type [e.g., a symmetric (asymmetric) slab is abbreviated as Symm.
(Asymm.)], and the numbers of relaxed and fixed layers are listed.} \label{s_tab1} \begin{ruledtabular} \begin{tabular}{c cccc} \noalign{\vskip 2mm} System & Na(110) & K(110) & Pd(111) & Pt(111) \\ \noalign{\vskip 1mm} \hline \noalign{\vskip 2mm} Functional & PBE~\cite{Perdew1996,Perdew1997} & PBE~\cite{Perdew1996,Perdew1997} & PBE~\cite{Perdew1996,Perdew1997} & RPBE~\cite{Perdew1996,Hammer1999} \\ Size of supercell & $p(3\times3)$ & $p(3\times3)$ & $p(2\times2)$ & $(3\times2\sqrt{3})$rect \\ $k$-point mesh & $\Gamma$ $4\times4\times1$ & $\Gamma$ $4\times4\times1$ & $\Gamma$ $4\times4\times1$ & $\Gamma$ $8\times6\times1$ \\ Valence & 7 & 7 & 16 & 10 \\ Vacuum thickness (${\rm \AA}$) & 18 & 18 & 18 & 12 \\ Slab thickness (atomic layer, AL) & Asymm., 8 & Asymm., 8 & Symm., 13 & Asymm., 4 \\ Relaxed layers (AL) & 5 & 5 & 6 & 2 \\ Fixed layers (AL) & 3 & 3 & 7 & 2 \\ \noalign{\vskip 2mm} \end{tabular} \end{ruledtabular} \end{table*} \FloatBarrier \begin{figure}[!htbp] \center \includegraphics[width=0.9\columnwidth]{S_melting_Na.png} \caption{Uncontrollable melting of the Na sample at room temperature. (a-c) Sectioning process. (d-f) Annular milling process.} \label{S_melting_Na} \end{figure} \FloatBarrier \begin{figure}[!htbp] \center \includegraphics[width=0.7\columnwidth]{S_Eb_Hsub.png} \caption{Calculated binding energies of a H interstitial placed in a tetrahedral site in the 1st (blue), the 2nd (orange), the 3rd (green), the 4th (red), and the 5th (purple) subsurface layer, plotted as a function of H coverage. The formation energy of a H interstitial in a tetrahedral site in a Na $p({4\times4\times4})$ bulk supercell is indicated by a horizontal red dashed line.} \label{S_Eb_Hsub} \end{figure} \FloatBarrier \begin{figure}[!htbp] \center \includegraphics[width=0.9\columnwidth]{S_cdd_3d.png} \caption{Electron density differences ($\Delta \rho = \rho_{\rm H-surf} - \rho_{\rm surf} - \rho_{\rm H}$) showing the redistribution of electron density upon adsorption of H on the metal surfaces [(a)$\backsim$(b) 0.11\,ML H-Na(110) and (c)$\backsim$(d) 0.08\,ML H-Pt(111), respectively]. The top (bottom) panels show top (side) views of the 3D electron density differences. Yellow (blue) areas correspond to electron accumulation (depletion). The scale bar is 3\,${\rm \AA}$.} \label{S_cdd_3d} \end{figure} \FloatBarrier \begin{figure}[!htbp] \center \includegraphics[width=0.8\columnwidth]{S_dG.png} \caption{The Gibbs free energy differences of surface phases with respect to the H-free clean surface, plotted as a function of the H chemical potential for (a) Na(110), (b) K(110), (c) Pd(111), and (d) Pt(111). Each colored line indicates a surface phase (i.e., a different H coverage). The purple area indicates the stability region of H$_2$ molecules.} \label{S_dG} \end{figure} \FloatBarrier \begin{figure}[!htbp] \center \includegraphics[width=0.5\columnwidth]{S_phase_diagram_inv_metals.png} \caption{Surface phase diagrams for the metal surfaces [(a) Na(110), (b) K(110), (c) Pd(111), and (d) Pt(111)] in equilibrium with a surrounding H$_2$ gas, as functions of temperature and pressure. Each colored region indicates the most favorable surface phase. Pink, brown, and cyan crosses show the experimental conditions used in this work for the APT measurements, tip preparation, and tip transportation, respectively.} \label{S_phase_diagram_inv_metals} \end{figure} \FloatBarrier \newpage
\section{Introduction} Astronomy is entering a new era of big data due to the construction of very large-scale facilities such as the Large Synoptic Survey Telescope (LSST), an 8.4\,m telescope with a 3.2 Gigapixel camera, which will begin operations in northern Chile in 2022 \cite{huijse2014computational}. The LSST is a robotic telescope that will scan the entire southern-hemisphere sky every 3 days, collecting information on 50 billion objects for 10 years \cite{ivezic2008lsst}. Time-domain astronomy studies stellar objects that change in time or position, e.g., supernovae (SNe), the explosive deaths of stars. The High Cadence Transient Survey (HiTS) \cite{forster2016high} aimed at detecting SNe in their early stages in order to study the astrophysics associated with these phenomena. HiTS has a custom-made pipeline to process the images captured by the telescope and detect transients. Basically, the pipeline subtracts reference images from new images, detects sources, and classifies them. The farther away from Earth we look, the higher the chance of finding SN events, because more galaxies are observed. However, deeper objects are usually fainter, with a low signal-to-noise ratio. For this reason, among others, it is relevant to significantly reduce the false negative rate (FNR) and the false positive rate (FPR) at the output of this pipeline. In our previous work, we introduced a convolutional neural network (CNN) for classifying sources detected by the HiTS pipeline as true transients (`SN candidates') or bogus (`artifacts') \cite{cabrera2016supernovae}. In 2017 the model was enhanced by introducing partial rotational invariance, as well as by improving the architecture and training algorithms, to yield the so-called Deep-HiTS model (DH) \cite{cabrera2017deep}. In this paper, we enhance Deep-HiTS by applying a new method for obtaining rotational invariance that exploits cyclic symmetry in CNNs \cite{dieleman2016exploiting}. In addition, we use a visualization approach, layer-wise relevance propagation (LRP) \cite{samek2017evaluating}, in order to find the relevant pixels in each image that help to discriminate between SN candidates and artifacts. We assess both qualitatively and quantitatively the effect of the rotational invariance methods using LRP and compare the original Deep-HiTS model with its enhanced version. In addition, we introduce ensemble classifiers to improve the performance of Deep-HiTS. \section{Related Work} \subsection{High Cadence Transient Survey and Deep-HiTS} The High Cadence Transient Survey \cite{forster2016high} aims at detecting sky transients with timescales ranging from hours to days; in particular, it searches for early-stage SNe. The data were collected with the Dark Energy Camera (DECam) \cite{flaugher2012status}, mounted at the prime focus of the Victor M. Blanco 4\,m telescope on Cerro Tololo near La Serena, Chile. The HiTS detection pipeline makes use of four images: \emph{template}, \emph{science}, \emph{difference}, and \emph{signal-to-noise ratio} (SNR) \emph{difference} (the \emph{difference} normalized by the estimated image noise), see Fig. \ref{fig:CAP}. The \emph{difference} image is obtained by subtracting the \emph{template} (reference image) from the \emph{science} image (current image), in order to detect anything that changes in time or position.
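Conceptually, the detection step reduces to a thresholded, noise-normalized image subtraction. A simplified numpy sketch is given below; it assumes that the alignment and PSF matching described next have already been applied, and all names are illustrative rather than part of the actual HiTS code.

\begin{verbatim}
import numpy as np

def detect_candidates(science, template, noise, snr_thr=5.0, stamp=21):
    # toy version of the HiTS detection step: subtract a pre-aligned,
    # PSF-matched template, normalize by the local noise estimate, and
    # cut 21x21 stamps around pixels above the SNR threshold
    diff = science - template
    snr_diff = diff / noise
    half = stamp // 2
    stamps = []
    for y, x in zip(*np.where(snr_diff > snr_thr)):
        if (half <= y < science.shape[0] - half
                and half <= x < science.shape[1] - half):
            sl = np.s_[y - half:y + half + 1, x - half:x + half + 1]
            # stack the four channels used by the classifier
            stamps.append(np.stack([template[sl], science[sl],
                                    diff[sl], snr_diff[sl]], axis=-1))
    return np.array(stamps)
\end{verbatim}

In the actual pipeline, connected pixels above the threshold would be grouped into a single event before stamp extraction; this is omitted here for brevity.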
To generate the \emph{difference} images, the \emph{template} and \emph{science} images must be aligned, which is achieved by applying the SExtractor object detector \cite{bertin1996sextractor} and then fitting a second-order transformation between the objects detected in both images. To take into account changing atmospheric conditions, which produce variations in brightness and blurring of the images at measurement time, the point-spread function (PSF) of the image with better conditions is matched to that of the worse one via a convolution kernel. After this PSF correction, the images are subtracted to obtain the \emph{difference}, which is normalized by its local noise to get the \emph{SNR difference} image. Transient candidates are those events with a local SNR greater than 5, and an image stamp of 21$\times$21 pixels is centered around each event. At the output of the HiTS pipeline, a classifier discriminates between true transients and bogus events. In a previous work, we proposed a CNN for discriminating between SN candidates and artifacts, the so-called Deep-HiTS model (DH) \cite{cabrera2017deep}. To be partially invariant to rotations, DH augments the inputs by adding 90$^{\circ}$, 180$^{\circ}$, and 270$^{\circ}$ rotations. Table \ref{table:architecture} shows the Deep-HiTS architecture. The inputs are 4 images of 21$\times$21 pixels. The rotation augmentation operation then increases the batch size by a factor of 4. The first two convolutional layers have 32 filters of size 4$\times$4 and 3$\times$3, respectively. They are followed by a 2$\times$2 max-pooling layer. Next are 3 convolutional layers of 64 3$\times$3 filters, followed by a 2$\times$2 max-pooling layer. Up to this point, each of the four rotations is processed separately. Just before the first fully connected (dense) layer, the feature maps for each rotation are flattened and concatenated in the feature dimension. As a result, the first quarter of the features corresponds to the feature maps with 0$^{\circ}$ rotation, the second quarter to 90$^{\circ}$ rotations, the third quarter to 180$^{\circ}$ rotations, and the last quarter to 270$^{\circ}$ rotations. This operation transforms a minibatch of size 200 with feature maps of 6$\times$6$\times$64 into a batch of 50 original samples with a feature vector of size 9216 (=2304$\times$4) per sample. This feature vector passes through two dense layers of 64 neurons each, ending with a dense softmax layer that generates the output probabilities for the one-hot encoding of the two classes. Leaky ReLUs are used as activation functions. A dropout probability of $ p = 0.5 $ is used in the dense layers, together with a mini-batch size of 50 and the parameter initialization suggested by He et al. \cite{he2015delving}. Cross-entropy optimization with stochastic gradient descent is used, with a learning rate of 0.04 that is halved every 100,000 iterations. In this work, the Deep-HiTS model was implemented in TensorFlow \cite{abadi2016tensorflow}. An initial process of 100,000 training iterations is performed. After the first 100,000 iterations, an early-stopping criterion is tested on the validation set every 10,000 iterations. If the error on the validation set drops by more than 1\% within 10,000 iterations, then the total number of iterations is extended by another 100,000 iterations from the current step. Otherwise, the training is stopped.
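The rotation handling just described can be sketched in a few lines of TensorFlow; the convolutional stack is replaced by a plain flatten for brevity, and the shapes follow Table \ref{table:architecture}.

\begin{verbatim}
import tensorflow as tf

def rotation_augmentation(x):
    # stack the four 90-degree rotations along the batch axis,
    # [N, 21, 21, 4] -> [4N, 21, 21, 4]
    return tf.concat([tf.image.rot90(x, k=k) for k in range(4)], axis=0)

def rotation_concatenation(features):
    # regroup the flattened features of the four rotations per sample,
    # [4N, F] -> [N, 4F]; the first quarter of features comes from the
    # unrotated copy, the second from the 90-degree copy, and so on
    return tf.concat(tf.split(features, 4, axis=0), axis=1)

x = tf.random.normal([50, 21, 21, 4])    # a mini-batch of 50 stamps
x_aug = rotation_augmentation(x)         # [200, 21, 21, 4]
feats = tf.reshape(x_aug, [200, -1])     # stand-in for the conv stack
merged = rotation_concatenation(feats)   # [50, 4 * 21 * 21 * 4]
print(x_aug.shape, merged.shape)
\end{verbatim}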
\begin{table}[tb] \centering \small \caption{Deep-HiTS architecture.} \label{table:architecture} \begin{tabular}{|c|c|c|} \hline \textbf{Layer} & \textbf{Layer Parameters} & \textbf{Output Size} \\ \hline Input & - & 21$\times$21$\times$4 \\ \hline Zero padding & - & 27$\times$27$\times$4 \\ \hline Rotation augmentation & - & 27$\times$27$\times$4 \\ \hline Conv. & 4$\times$4, 32 & 24$\times$24$\times$32 \\ \hline Conv. & 3$\times$3, 32 & 24$\times$24$\times$32 \\ \hline Max-pool & 2$\times$2, stride 2 & 12$\times$12$\times$32 \\ \hline Conv. & 3$\times$3, 64 & 12$\times$12$\times$64 \\ \hline Conv. & 3$\times$3, 64 & 12$\times$12$\times$64 \\ \hline Conv. & 3$\times$3, 64 & 12$\times$12$\times$64 \\ \hline Max-pool & 2$\times$2, stride 2 & 6$\times$6$\times$64 \\ \hline Flatten & - & 2304 \\ \hline \begin{tabular}[c]{@{}c@{}}Rotation\\ concatenation\end{tabular} & - & \begin{tabular}[c]{@{}c@{}}9216\\ (4$\times$2304)\end{tabular} \\ \hline Dense & 9216$\times$64 & 64 \\ \hline Dense & 64$\times$64 & 64 \\ \hline Output softmax & 64$\times$2 & 2 \\ \hline \end{tabular} \end{table} \subsection{Rotational Invariance} Conventional CNN architectures include convolutional layers to achieve translational invariance: a feature shifted to a different position at the input will have a representation at the output similar to that of the original feature. In addition, pooling operations aim at obtaining translational invariance at the local level. Many types of data exhibit rotational invariance properties, e.g., star and galaxy images. Some authors have attempted to directly encode rotational invariance in the architecture of a CNN \cite{cohen2016group}, \cite{dieleman2015rotation}, \cite{dieleman2016exploiting}. In particular, Dieleman et al. \cite{dieleman2016exploiting} exploit symmetries by generating cyclic transformations of the data, defined as counter-clockwise rotations of type $k\cdot90^{\circ},\,k\in\{0,1,2,3\}$. The first operation is to add a cyclic slicing layer at the input. This is done by stacking 4 cyclically rotated copies of each input sample in a batch, in such a way that if the original batch has size $N$, after the cyclic slicing layer the batch size will increase to $4N$. The first $N$ batch samples correspond to the original images, the following $N$ batch samples correspond to rotations by 90$^{\circ}$, and then 180$^{\circ}$ and 270$^{\circ}$. Mathematically, this operation is defined by $S(x)=[x, rx, r^2x, r^3x]^T$, where $r$ corresponds to a rotation by 90$^{\circ}$ in the counter-clockwise direction and $x$ is the input batch to the layer. Column vectors are used to indicate that the resulting feature maps are stacked in the batch dimension. With this operation, four images are generated, which are processed independently by the rest of the model layers. The cyclic slicing layer is used by Dieleman et al. in \cite{dieleman2015rotation} to obtain rotation-invariant CNNs for galaxy morphology prediction, and at the first layer of Deep-HiTS \cite{cabrera2017deep} to obtain partial rotational invariance for astronomical transients. A second operation is the cyclic pooling layer, which combines the activations from the four rotated copies using a permutation-invariant function. The size of the mini-batch is reduced by a factor of 4. Formally, cyclic pooling is defined by an operation over the input $\textbf{x} = [x_0, x_1, x_2, x_3]^T$ as $P(\textbf{x}) = p (x_0, r^{-1}x_1, r^{-2}x_2, r^{-3}x_3)$, where $p$ corresponds to the pooling operation, e.g.
average pooling in this work, applied after realigning (un-rotating) the feature maps of each minibatch. When applying cyclic pooling after a fully-connected layer, it becomes unnecessary to realign the features, since they have lost their spatial structure. Although a cyclic pooling layer can be introduced in any part of the model after the cyclic slicing layer, in practice it is used before the output layer to obtain a rotation-invariant network. In \cite{dieleman2016exploiting} the authors introduced two additional operations: cyclic rolling and cyclic stacking. Both change the number of feature maps, but the latter also affects the batch size. In our experiments we did not find effective results using these operators, so they are not used in this work. \subsection{Visualizing and Understanding CNN Decisions} Recently, several methods have been developed for understanding and visualizing what a CNN has learned in a classification task. In \cite{simonyan2013deep} a sensitivity analysis based on computing gradients through backpropagation is proposed, yielding a saliency map. This method does not give a direct explanation of the network's score, but rather indicates which elements of an input would have to be modified, and in which direction, to make it belong more or less to the class decided by the CNN. In \cite{zeiler2014visualizing} a deconvolution strategy is proposed, which allows generating visualizations of a CNN at any intermediate layer. This method is exclusively limited to the convolutional layers of a network, where the main operations, such as reverse filtering (un-filtering) and un-pooling, are similar to those used when propagating the gradient with back-propagation. A classification model can be seen as a function that maps a series of inputs $\textbf{x}$ to an output score $f(\textbf{x})$, which represents the confidence value of a certain class membership. To interpret the decision of the network encoded in its parameters, operations that propagate $f(\textbf{x})$ backwards to the input space can be used. This approach generates a relevance value $R_i$ for each feature of $\textbf{x}$. Relevances quantify how much the respective features contribute to the value of the prediction score $f(\textbf{x})$. Positive values of $R_i$ are interpreted as features that contribute to the decision of the network, while negative values are interpreted as evidence against the prediction, potentially decreasing the value of $f(\textbf{x})$. Layer-wise relevance propagation (LRP) \cite{bach2015pixel} backpropagates the output of the network (not the gradients), layer by layer, towards the input space. As a result, we obtain the elements of the input that contribute in a positive or negative way to the classification score $f(\textbf{x})$, which will have respectively positive or negative relevance values. LRP is based on the principle of the \emph{conservation of relevances}, i.e., that the sum of the relevances $R_j^{(l+1)}$ in a layer $l+1$ must be equal to the sum of the relevances $R_i^{(l)}$ in the previous layer $l$. Formally, the conservation rule is expressed as follows: \begin{equation} \sum_{i}{R_i^{(l)}}=\sum_{j}{R_j^{(l+1)}}. \label{eq:conservation_rule} \end{equation} Many mathematical operations can be created to backpropagate $f(\textbf{x})$ so that its intermediate layers satisfy Eq.
The simplest one is to propagate relevance proportionally to the activations coming from the previous neurons, in the same way that, according to Ohm's and Kirchhoff's laws, the electrical current at a circuit node is divided among parallel branches in inverse proportion to their resistances \cite{montavon2017methods}. In this way we obtain the following equation: \begin{equation} R_{i\leftarrow j}^{(l,l+1)}=\frac{z_{ij}}{z_j}R_j^{(l+1)}. \label{eq:prop_rule_1} \end{equation} The relevance $R_{i\leftarrow j}^{(l,l+1)}$ propagated from the neuron $j$ in the $l+1$ layer to the neuron $i$ in the $l$ layer is equivalent to the ratio between the weighted activation $ z_{ij} = a_iw_{ij} $ of the neuron $i$ and the sum of all forward-pass activations coming from the $l$ layer to the neuron $j$, plus a bias: $ z_j = \sum_{i}{z_{ij}} + b_j $. Although the addition of the bias breaks the conservation principle of Eq. (\ref{eq:conservation_rule}), it remains an acceptable approximation. Numerical instabilities arise when rule (\ref{eq:prop_rule_1}) is used: if $z_j\rightarrow 0$ then $R_{i\leftarrow j}\rightarrow \infty$. To avoid this situation a stabilizer hyperparameter $\epsilon \geq 0$ is added, establishing the \emph{epsilon} rule as follows: \begin{equation} R_{i\leftarrow j}^{(l,l+1)}=\frac{z_{ij}}{z_j+\epsilon \, sign(z_j)}R_j^{(l+1)}. \label{eq:prop_rule_e} \end{equation} Another propagation rule, which avoids the relevance loss introduced by the stabilizer, is the $\alpha\beta$ rule, which separates the positive activations, weights and biases $z_{ij}^+$ from the negative ones $z_{ij}^-$, as follows: \begin{equation} R_{i\leftarrow j}^{(l,l+1)}=\left(\alpha\,\frac{z_{ij}^+}{z_j^+}-\beta\,\frac{z_{ij}^-}{z_j^-}\right)R_j^{(l+1)}. \label{eq:prop_rule_ab} \end{equation} In Eq. (\ref{eq:prop_rule_ab}), the condition $ \alpha-\beta = 1 $ must be fulfilled in such a way that the conservation rule is satisfied layer by layer. When naming this rule, the values used for the parameters $\alpha$ and $\beta$ define the model; for example, for $\alpha=2$ and $\beta=1$ the method is called LRP-$\alpha_2\beta_1$. The current implementation of LRP considers layers with convolutional operations, max-pooling, avg-pooling, drop-out, multiple activations and fully-connected layers.
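As an illustration, the following is a minimal NumPy sketch of the epsilon rule of Eq. (\ref{eq:prop_rule_e}) for a single fully-connected layer (a sketch only; the actual LRP implementation also handles convolutional and pooling layers):
\begin{verbatim}
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    # Backpropagate relevances through one dense layer with the
    # epsilon rule.  a: input activations (D_in,), W: weights
    # (D_in, D_out), b: biases (D_out,), R_out: output relevances
    # (D_out,).  Returns the relevances of the inputs (D_in,).
    z_ij = a[:, None] * W              # weighted activations z_ij
    z_j = z_ij.sum(axis=0) + b         # total input to each neuron j
    denom = z_j + eps * np.sign(z_j)   # stabilized denominator
    # distribute each R_j proportionally to z_ij, then sum the
    # contributions that each input neuron i receives
    return (z_ij / denom * R_out).sum(axis=1)
\end{verbatim}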
\section{Enhanced Rotational Invariant CNN for Supernovae Detection} The aim of this work is to enhance the CNN models obtained in our previous works for SNe detection \cite{cabrera2016supernovae}, \cite{cabrera2017deep}. For this purpose, Deep-HiTS is taken as a baseline. There are three major contributions. The first one is to enhance the rotational invariant capability of Deep-HiTS by adding a cyclic average pooling layer. The second is to test ensemble classifiers. The third is to use the LRP method to analyze and visualize the CNN decisions, in particular to assess the rotational invariance property. The details are described in the Experiments section. The data used to train Deep-HiTS and the new model is explained in what follows. \subsection{Data} Since SNe are extremely rare events, negative samples (artifacts) are obtained by running the HiTS pipeline on the 2013 survey, where 40 sky fields were observed in the $u$ band, approximately every 2 hours for 4 consecutive nights. The negative samples are caused mostly by background fluctuations, interference with other objects, poor astrometry, and defective CCD pixels. A total of 802,087 negative samples were generated\footnote{The data-set may contain some transients not present in the reference image, but we conservatively estimate them as 0.2\% of the total data.}. As SNe are rare events, in order to get a balanced dataset, positive candidates were simulated by picking stars from actual observations, applying a specific SNR distribution and then inserting them back into the \emph{science} image at a different location with respect to the original source. As explained above, each sample consists of four 21$\times$21 images, which are stacked. Out of the 1,604,174 data samples available, a fixed amount of 1,220,000 was selected for training, 100,000 for validation and 100,000 for testing. This dataset, as well as the Deep-HiTS code, is publicly available at https://github.com/guille-c/Deep-HiTS. We use accuracy, precision, recall, and F1-score as performance measures, and we plot detection error tradeoff (DET) curves, depicting the false negative rate (FNR) versus the false positive rate (FPR), to evaluate the quality of the models at different operation points. \section{Experiments} \subsection{Enhancing Deep-HiTS} We tested cyclic symmetric operations in order to improve the rotational invariance property of Deep-HiTS. The cyclic slicing is maintained, since it is exactly the same rotation augmentation operation used by the original Deep-HiTS model. In a variant of Deep-HiTS, we implemented the cyclic average pooling (CAP) operation just before the output layer. This means that, instead of reordering features prior to the dense layers as Deep-HiTS does, the features associated with each rotation continue to be processed independently of each other. The CAP operation is applied before the output layer of the network, so that for each sample the features coming from each rotation are averaged; thus the output score of the network will always be the same, regardless of whether there is a cyclic rotation in the input. This model is proposed based on the prior knowledge that the detection of SNe is independent of angular rotations of the samples. The proposed model with cyclic average pooling will be referred to as CAP from now on, and it can be seen in Fig. \ref{fig:CAP}. The CAP architecture is identical to Deep-HiTS until the last convolutional layer. We also changed to ReLU activations instead of leaky ReLUs, for simplicity in the models. The first two rows of Table \ref{table:measures} show that there is no significant difference between the model with leaky ReLUs versus ReLUs. Therefore ReLU activations were used for the rest of the architectures tried in this paper. \begin{table*}[t] \centering \normalsize \caption{Performance of the models over the test set. CAPE shows better results in all metrics with respect to DH. Welch's t-test p-values over the DH and CAP results show statistical significance, having values below 10$^{-2}$ (except for recall). Applying the same test over CAP and CAPE shows no statistical significance, which means there is no major difference between the two.} \label{table:measures} \begin{tabular}{ccccc} \hline Model & Accuracy & Precision & Recall & F1-score \\ \hline \begin{tabular}[c]{@{}c@{}}Deep-HiTS (DH)\end{tabular} & 99.45$\pm$0.02 & 99.37$\pm$0.04 & 99.55$\pm$0.06 & 99.45$\pm$0.02 \\ Deep-HiTS + ReLU & 99.47$\pm$0.02 & 99.44$\pm$0.04 & 99.52$\pm$0.06 & 99.47$\pm$0.02 \\ \hline \begin{tabular}[c]{@{}c@{}}Cyclic Avg. Pool (CAP)\end{tabular} & 99.52$\pm$0.01 & 99.45$\pm$0.01 & 99.61$\pm$0.02 & 99.52$\pm$0.01 \\
\textbf{\begin{tabular}[c]{@{}c@{}}Cyclic Avg. Pool Ensemble (CAPE)\end{tabular}} & \textbf{99.53$\pm$0.01} & \textbf{99.45$\pm$0.02} & \textbf{99.63$\pm$0.03} & \textbf{99.53$\pm$0.01} \\ \hline \hline \begin{tabular}[c]{@{}c@{}}Welch's t-test p-value\\ DH v/s CAP\end{tabular} & 2$\times$10$^{-5}$ & 4.2$\times$10$^{-3}$ & 6.6$\times$10$^{-2}$ & 2.2$\times$10$^{-5}$ \\ \hline \begin{tabular}[c]{@{}c@{}}Welch's t-test p-value\\ CAP v/s CAPE\end{tabular} & 1.8$\times$10$^{-1}$ & 8.9$\times$10$^{-1}$ & 2.4$\times$10$^{-1}$ & 1.8$\times$10$^{-1}$ \\ \hline \end{tabular} \end{table*} \begin{figure*}[h!] \centerline{\includegraphics[width=1\textwidth]{arch}} \caption{Schematic architecture of CAP that represents the effect of the cyclic slicing and cyclic pooling layers. It shows a four-channel input that passes through the cyclic slicing layer, where rotated copies of it are stacked along the batch dimension. Afterwards, each rotation is processed by convolutional and dense layers independently, until reaching the cyclic pooling layer, where all rotation features (color coded) are fused together to be processed by the output layer.} \label{fig:CAP} \end{figure*} Table \ref{table:measures} shows the means and standard deviations obtained in 6 trials with different initializations for the following measures: accuracy, precision, recall, and F1-score for each model. An improvement in all metrics is observed when introducing the cyclic average pooling layer, with respect to Deep-HiTS. According to Welch's hypothesis test, all CAP measures (except recall) show statistical significance, with a probability of less than 10$^{-2}$ that these results were obtained by chance. A CAP ensemble (CAPE) of 3 individual models was generated, with the objective of obtaining better performance. The majority rule was applied to combine the classifiers. Table \ref{table:measures} shows that CAPE obtains results that are indistinguishable from CAP, i.e. there are no statistically significant differences. The performances of Deep-HiTS and CAPE can be compared throughout different operation points by calculating their DET curves, which plot the FNR versus the FPR (see Fig. \ref{fig:DET}). Better models present lower FNR and FPR, with a DET curve near the lower left corner. The DET curve corresponding to CAPE is below the Deep-HiTS curve for a large range of user detection thresholds. To better appreciate the difference, a zoom around the operation point FPR $\sim$10$^{-2}$ is shown in Fig. \ref{fig:DETzoom}. This operation point is used in Deep-HiTS to make comparisons. When FPR $\sim$10$^{-2}$, the proposed model reaches an FNR of 1.38$\times$10$^{-3}$, while Deep-HiTS gets an FNR of 2.28$\times$10$^{-3}$. \begin{figure*}[t] \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=1\linewidth]{DETsZ} \caption{} \label{fig:DETnormal} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[angle=-90, width=1\linewidth]{DETsingleZoom} \caption{} \label{fig:DETzoom} \end{subfigure} \caption{Detection error tradeoff (DET) curves of DH and CAPE. The closer a curve is to the bottom left corner, the better the model it represents. (a) Whole-range DET curves, with a black box to be zoomed in (b). (b) Zoom of the DET curves in (a), which shows a $\sim$40\% improvement in the FNR of CAPE with respect to DH, around FPR 10$^{-2}$.} \label{fig:DET} \end{figure*}
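A DET curve can be computed by sweeping the detection threshold over the classifier scores. The following is a minimal sketch (assuming the scores are probabilities of the `SN candidate' class; this is our illustration, not the evaluation code used in this work):
\begin{verbatim}
import numpy as np

def det_curve(scores, labels, n_thresholds=1000):
    # scores: predicted probability of class 'SN candidate'
    # labels: 1 for SN candidates, 0 for artifacts
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    pos, neg = labels == 1, labels == 0
    fpr = np.array([(scores[neg] >= t).mean() for t in thresholds])
    fnr = np.array([(scores[pos] < t).mean() for t in thresholds])
    return fpr, fnr   # one (FPR, FNR) pair per threshold
\end{verbatim}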
\begin{figure*}[h!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.78\linewidth]{DHhm} \caption{} \label{fig:DHbogus} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.78\linewidth]{PMhm} \caption{} \label{fig:CAPreal} \end{subfigure} \caption{LRP-$\alpha_2\beta_1$ heatmaps for DH and CAP, when propagating the output score of the predicted class for each model. The sample used is an `SN candidate' that is misclassified by DH and correctly classified by CAP. LRP relevance heatmaps are shown for each rotated input, followed by their average per channel when the rotations are realigned. (a) DH heatmaps. (b) CAP heatmaps.} \label{fig:LRP} \end{figure*} \begin{figure*}[h!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.78\linewidth]{DHhm87} \caption{} \label{fig:DHreal} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.78\linewidth]{PMhm87} \caption{} \label{fig:CAPbog} \end{subfigure} \caption{LRP-$\alpha_2\beta_1$ heatmaps for DH and CAP, when propagating the output score of the predicted class for each model. The sample used is an `artifact' that is misclassified by DH and correctly classified by CAP. LRP relevance heatmaps are shown for each rotated input, followed by their average per channel when the rotations are realigned. (a) DH heatmaps. (b) CAP heatmaps.} \label{fig:LRPimpBog} \end{figure*} \subsection{LRP Analysis and Visualization} We use the LRP method to visualize the effect of the rotational invariance provided by the CAP model, and its advantages over the Deep-HiTS model. The rotational invariance property is reflected when projecting the relevances onto the rotations within each channel. The LRP method needed to be adapted to each model. For Deep-HiTS, the reverse operation of the forward step of the feature reordering layer is simply carried out, to change from the reordered features to the feature maps propagated by the convolutional layers. In the case of CAP, the current implementation of LRP is adapted to reorder the features so as to perform a 2$\times$2 average-pooling, where each element of the filter is a feature associated with a different rotation. In this way, the propagation step of relevances through the CAP layer becomes the same as for a normal average-pooling layer. First, we used the LRP visualization method to analyze 197 test samples where CAP gave correct predictions, while Deep-HiTS made wrong decisions. Fig. \ref{fig:LRP} shows the case of a sample labeled `SN candidate', and Fig. \ref{fig:LRPimpBog} shows the case of a sample labeled `artifact'. Both figures show the 4 source images (channels) on the top row. The next four rows display the heatmaps corresponding to cyclic rotations of $k\cdot90^{\circ},\, k\in\{0,1,2,3\}$. The bottom row depicts the average of the unrotated heatmaps per channel. These heatmaps represent LRP relevances when propagating the network's prediction scores with the $\alpha_2\beta_1$ rule. For display purposes the heatmaps were normalized between 0 and 1 for each sample, so that they are comparable to each other. The heatmaps of Fig. \ref{fig:DHbogus} show that the relevance is concentrated in the upper left corner of the unrotated \emph{difference} image. Our interpretation is that the Deep-HiTS decision of class `artifact' is supported by the light source observed in that region of the \emph{difference} image.
In contrast, the heatmaps of Fig. \ref{fig:CAPreal} show that the relevance is concentrated in the center of the \emph{difference} and \emph{SNR difference} images, providing evidence of the presence of an SN. The light source that confuses the Deep-HiTS model appears in blue in the CAP heatmaps, i.e., as negative relevances indicating that the presence of such a light source decreases the prediction score of the SN. A property of the CAP model is that the heatmaps corresponding to cyclic rotations are more evenly distributed. Compare for example the last columns (SNR diff) in Figs. \ref{fig:DHbogus} and \ref{fig:CAPreal}, or the last columns in Figs. \ref{fig:DHreal} and \ref{fig:CAPbog}. A more uniform distribution of the heatmaps across cyclic rotations is an indication that the model is more rotational invariant. Likewise, Fig. \ref{fig:LRPimpBog} corresponds to an `artifact' sample. The blue color in the \emph{science}, \emph{difference} and \emph{SNR difference} CAP heatmaps is more evenly distributed across rotations, indicating that the bright source in the middle of the images reduces the prediction score because it looks similar to an SN candidate. On the other hand, Deep-HiTS classifies the same sample as `SN candidate', based on the evidence presented at the center of the 270$^{\circ}$ rotated \emph{difference} and \emph{SNR difference} images. The qualitative interpretation of the heatmaps presents some ambiguity. The red color stands for positive relevances in favor of the CNN's prediction, and the blue color stands for negative evidence against the network's decision. However, each heatmap color can also be interpreted in terms of the presence or absence of the respective input feature. For example, the blue color located over the center in Fig. \ref{fig:LRPimpBog}b is an indicator that the brightness diminishes the output confidence, whereas blue color corresponding to image regions with darker pixels indicates that the model is getting lower prediction scores due to an absence of brightness. To validate quantitatively the conjecture that the CAP model is more rotational invariant than Deep-HiTS, we computed a measure called the \emph{Average Standard Deviation of Cyclic Rotations} ($\bar{\sigma}$) using 5,000 test samples. This measure is computed per channel, so there are four different values per sample, corresponding to each of the 4 channels. To compute $\bar{\sigma}$, we first choose a channel $c$ and its $i$-th rotation $c_i$, and then calculate the pixel-wise variance of its LRP-$\alpha_2\beta_1$ heatmap $h^{c_i}$. This is done by computing: \begin{equation} Var(h^{c_i}) = \frac{\sum_{j=1}^{N}(h_j^{c_{i}}-\mu_{j}^c)^2}{N}, \label{eq:hm_var} \end{equation} where $h_j^{c_{i}}$ is the $j$-th pixel, and $N$ is the total number of pixels in a heatmap, in this case $N=441$. The variance is computed with respect to the average heatmap per channel $\mu_{j}^c$. Examples of average heatmaps can be found in the bottom rows of Figs. \ref{fig:LRP} and \ref{fig:LRPimpBog}. Finally, $\bar{\sigma}$ is calculated by computing the square root of $Var(h^{c_i})$ for each rotation heatmap, and then averaging over the four rotations, as follows: \begin{equation} \bar{\sigma} = \frac{\sum_{i=1}^{N_r}\sqrt{Var(h^{c_i})}}{N_r}, \label{eq:hm_sigma} \end{equation} where $N_r=4$ is the number of rotations at the input of the CNN. As the SNR difference image is the channel where transients are most significantly observed, we plot histograms of $\bar{\sigma}$ corresponding to the \emph{SNR difference} channel, for both the Deep-HiTS and CAP models.
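For reference, Eqs. (\ref{eq:hm_var}) and (\ref{eq:hm_sigma}) can be computed with a few lines of NumPy (a sketch, assuming the four realigned heatmaps of one channel are already available as an array):
\begin{verbatim}
import numpy as np

def avg_std_cyclic_rotations(heatmaps):
    # heatmaps: array (4, H, W) with the realigned LRP heatmaps
    # of one channel, one per cyclic rotation
    mu = heatmaps.mean(axis=0)                      # average heatmap
    var = ((heatmaps - mu) ** 2).mean(axis=(1, 2))  # Eq. (hm_var)
    return np.sqrt(var).mean()                      # Eq. (hm_sigma)
\end{verbatim}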
Fig. \ref{fig:hists} shows the histograms separated by class: artifacts and SN candidates. For the class `artifact' there are 2,499 samples and for the class `SN candidate' 2,501. To statistically compare the histograms generated for Deep-HiTS and CAP, a \emph{Non-Central t-Distribution} is fitted to each histogram using the open source scientific tools library for Python (\emph{SciPy}, \cite{jones2014scipy}). The curve computed with \emph{SciPy} is normalized so that its integral is 1; thus the curve must be re-scaled to fit the original histogram. The scale factor is obtained by normalizing the histogram in the same way as the fitted distribution, and the normalized curve is multiplied by this scale factor in order to plot it over the histogram. \begin{figure} \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=1\linewidth]{Artifacts} \caption{} \label{fig:histArt} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=1\linewidth]{SN_candidates} \caption{} \label{fig:histSN} \end{subfigure} \caption{Average Standard Deviation of Cyclic Rotations ($\bar{\sigma}$) histograms for the `artifact' and `SN candidate' classes for the DH and CAP models. A \emph{Non-Central t-Distribution} is fitted to each histogram. (a) $\bar{\sigma}$ histograms for the `artifact' class. (b) $\bar{\sigma}$ histograms for the `SN candidate' class.} \label{fig:hists} \end{figure} Table \ref{table:plot Metrics} shows the mean, variance, skewness and kurtosis obtained from the distributions fitted to the `artifact' and `SN candidate' classes. It can be observed that for both classes the CAP model has a lower mean and a larger kurtosis and skewness than the Deep-HiTS model. This effect is clearly observed in Fig. \ref{fig:histSN}, where the CAP histogram shows a leptokurtic distribution, as well as a lower mean value of $\bar{\sigma}$ with respect to that of Deep-HiTS. This means that CAP relevances are distributed more evenly throughout the cyclic rotations, while Deep-HiTS heatmaps tend to focus their relevances on specific rotations. This is supporting evidence that the CAP model is more rotational invariant than the Deep-HiTS model. \begin{table}[h] \centering \normalsize \caption{Moments of the Non-Central t-Distribution fitted to the $\bar{\sigma}$ histograms of the `artifact' and `SN candidate' classes for DH and CAP.} \label{table:plot Metrics} \begin{tabular}{cc|c|c|c|c|} \cline{3-6} & & \begin{tabular}[c]{@{}c@{}}$\bar{\sigma}$\\ mean\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\bar{\sigma}$\\ var\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\bar{\sigma}$\\ skew\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\bar{\sigma}$\\ kurt\end{tabular} \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Artifacts}} & DH & 0.0729 & 0.0014 & 0.38 & -1.34 \\ \cline{2-6} \multicolumn{1}{|c|}{} & CAP & 0.0517 & 0.001 & 1.08 & -0.28 \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}SN\\ candidates\end{tabular}}} & DH & 0.0655 & 0.0007 & 1.09 & -0.28 \\ \cline{2-6} \multicolumn{1}{|c|}{} & CAP & 0.0519 & 0.0003 & 1.62 & 1.23 \\ \hline \end{tabular} \end{table} \section{Conclusions and Future Work} In this work, we enhanced the rotational invariant capability of the Deep-HiTS model by adding a cyclic average pooling layer. The results are consistent with the hypothesis that astronomical objects do not depend on the angle at which the image is observed, given the same conditions of observation.
An ensemble of CAP models obtained the best results so far with the HiTS dataset, reaching an average accuracy of 99.53\%. The improvement over Deep-HiTS is significant both statistically and in practice. For example, for a standard operation point with FPR $\sim$10$^{-2}$, the proposed model achieves an FNR of 1.38$\times$10$^{-3}$, which entails a $\sim$40\% reduction of missed transients with respect to Deep-HiTS. From the astronomer's viewpoint, it is important not to miss positive samples of rare SNe events. We have used the LRP method to visualize and analyze the heatmaps showing the most relevant pixels for the discrimination task at hand. We defined a measure to assess quantitatively the rotational invariance capability of the different models. The results show that the proposed model is more rotational invariant than the original Deep-HiTS model. This is a novel application of the LRP method and the first time that it has been applied to astronomical data. LRP is a positive step towards understanding and visualizing what a CNN has learned. However, the tool may be improved to visualize intermediate layers, adding gradient information, as well as improving the interpretation of positive and negative relevances. Obtaining the best ensemble classifier can also be investigated by changing the size and the combination rule of the ensemble. \section{Acknowledgments} Pablo Estévez, Pablo Huijse and Guillermo Cabrera-Vives acknowledge support from FONDECYT through grants 1171678, 1170305 and 3160747, respectively. Francisco F\"orster acknowledges support from CMM Basal Project PFB-03. The authors thank the support of CONICYT through project DPI20140090. Ignacio Reyes acknowledges financial support from CONICYT-PCHA through its M.Sc. scholarship 2016 number 22162464. The authors acknowledge support from the Chilean Ministry of Economy, Development, and Tourism's Millennium Science Initiative through grant IC12009, awarded to the Millennium Institute of Astrophysics, MAS. \bibliographystyle{ieeetr}
\section{Introduction} \label{inroduction} With the advent of the Square Kilometre Array (SKA) radio telescope along with its precursor facilities, we expect the radio sky to be surveyed at high speed and to unprecedented sensitivity. While this may enable paradigm shifts in the studies of radio sources, it comes with very high data volumes. For example, the typical image size from the MeerKAT telescope is estimated to be 11.13 TB\footnote{MeerKAT SDP group, S. Ratcliffe, B. Merry, Bennett, T.}, and individual MeerKAT surveys expect to deal with a large number of objects, e.g. the MeerKLASS survey expects to find upwards of 200,000 radio sources in the HI emission line\footnote{Józsa, G.I.G., SKA South Africa, private communication}. This introduces challenges in all the steps of data reduction, from RFI mitigation to calibration and imaging. The science-level data volume is expected to be similarly formidable. For example, extrapolating from the Square Kilometre Array Design Studies (SKADS) \citep{skads_2010} gives a source density of $6.2 \times 10^4$ sources per square degree for a survey reaching 1 $\mu$Jy at 1.4 GHz \citep{padovani_rev}, and the Evolutionary Map of the Universe survey (EMU) with the Australian Square Kilometre Array Pathfinder (ASKAP) \citep{norris_emu_2011} is expected to find about 70 million radio sources while covering two-thirds of the sky. Handling this amount of data is not possible through manual studies: automation of data processing is therefore essential. In this study, we consider the case of radio galaxy classification in the image domain. Traditionally, source detection and classification are trivially done for unresolved and slightly resolved radio sources through various source-finding software packages (these radio sources may in fact be ``components'' rather than true sources; for example, they might be the lobes of a double radio galaxy). Somewhat more recently, there have been several attempts to classify/identify radio sources through automated techniques (crowd-sourcing is an alternative, e.g. Radio Galaxy Zoo \citep{RGZ_2015}), such as pattern recognition and decision trees \citep{proctor11}, source matching and pattern recognition \citep{van_velzen_2015}, and self-organizing maps \citep{self_organizing_maps_polsterer}. The latter is an example of a machine learning technique; such techniques have come into increasing use in recent years (especially in pulsar and transient detection/identification, see \citep{morello2014,Eatough_2010,bates_2012,wagstaff_2016}). The typical process of source detection and classification hinges on using source-finding software to generate source component catalogs. These components need to be combined (or identified) as a single source, where necessary (source de-blending would be the opposite issue). This is especially important for extended sources, which are more likely to get divided into multiple components. These can be AGN-powered or star-forming galaxies. In the past, studies have tended to classify these sources by visual examination. This quickly grows impracticable with increasing survey sizes. Here, we consider the application of deep machine learning techniques to classify extended extragalactic radio sources, more specifically, AGN-powered radio galaxies.
Machine learning methods have been applied to a variety of astronomical problems, such as star-galaxy classification \citep{odewahn1992}, redshift estimation \citep{benitez2000bayesian}, classification of optical transients \citep{mahabal2011discovery} and unsupervised source segmentation \citep{hocking2015teaching}, among others. These methods have been robust and reliable, and have performed with high accuracy. For example, for the classification of stars and galaxies the best available model has over 99\% accuracy \citep{kim2017star}. Estimation of redshifts with machine learning has been done with accuracies over 92\% \citep{cavuoti2017metaphor,cavuoti2017cooperative}. Machine learning methods have been incorporated into real-time transient classification systems, where they perform with accuracies of more than 90\% \citep{mahabal2008towards,mahabal2011discovery}. Classic machine learning algorithms such as support vector machines (SVM), K-nearest neighbors (KNN) and decision trees generally learn on `features' extracted from the observational data \citep{kotsiantis2007supervised}. Features represent unique characteristics of the raw data and are domain specific. Feature extraction is carefully done so that the chosen features represent specific physical properties of the system \citep{guyon2006introduction}. The efficiency of the learning algorithm mainly depends on the quality of the features used \citep{blum1997selection}. Such machine learning algorithms are generally called shallow learning methods \citep{chen1995machine}. In principle, shallow learning methods \textit{learn} from the features rather than from the raw data, which may be images or time series values. A good understanding of the data and the objective under investigation is required to properly extract the features and fine-tune the machine learning algorithm. The extracted features may also fail to encapsulate the distinct properties of the object. Deep learning is a branch of machine learning in which the algorithms learn directly from the data instead of from features \citep{bengio2009learning}. Deep learning is advantageous in situations where engineered features do not completely capture the physics of the raw data and the machine learning algorithm is not able to learn with minimal loss \citep{arel2010deep,lecun2015deep}. Recent developments in computing technology, mainly with graphics processing units (GPU), have accelerated the development of deep neural networks (DNN) for different applications. The seminal works by \citet{hinton2006fast} and \citet{bengio2007scaling} made it possible to train DNNs for complex classification and regression problems with very high accuracy. It is interesting to note that DNNs have been beating all other shallow learning algorithms by huge margins \citep{lecun2015deep}, especially in applications such as object recognition \citep{krizhevsky2012imagenet}, image captioning \citep{vinyals2015show}, speech recognition \citep{graves2013speech}, natural language processing \citep{collobert2008unified} and many more. These considerations make DNNs a very useful tool for the classification of extragalactic radio sources performed in this study. There are a variety of ways in which radio galaxies can be classified. The classification can be made on a purely morphological basis, or can take other parameters into account, e.g. spectral index, host galaxy brightness at optical/infrared wavelengths, or host galaxy spectra/type.
Restricting ourselves to classification schemes based solely on morphology, we find schemes such as the Fanaroff-Riley classification (FR henceforth) \citep{FR74}, wide-angle tailed and narrow-angle tailed radio galaxies, etc. In this study, we have chosen to restrict our investigations to classifications made only on the basis of the radio morphology. The advantage of this choice is that the source samples used are not restricted by the availability of ancillary data. In turn, the classification algorithm is valid for data which have no or limited ancillary data available. This is a major consideration for deep radio surveys as well as surveys which are outside the coverage of available ancillary data. The first classification scheme which we consider here is the Fanaroff-Riley (FR) classification. The FR scheme divides extended radio galaxies into two classes, designated FRI and FRII, membership of which depends on the ratio $R$ of the distance between the brightest points in the source to the total size of the source. Radio galaxies for which $R < 0.5$ are classified as FRI and those for which $R \geq 0.5$ are classified as FRII (a toy sketch of this criterion is given below). Typical features associated with FRI-type radio galaxies include diffuse, plume-like jet(s) and cores which are brighter than the jets/lobes. FRII-type radio galaxies, on the other hand, show bright `hotspots', typically at the ends of the lobes, and cores which are less bright than these. The FR classification scheme, which starts on a morphological basis, also corresponds to a division in radio power ($P_{1.4 \rm GHz} = 10^{25}$ W/Hz) and possibly host galaxy optical luminosity \citep{OL96}. For a detailed discussion, see \citet{saripalli12}. The FRI/II sources form the bulk of AGN-powered radio galaxies. These are important sources of the feedback processes in cosmic structure formation \citep{croton2006}. Several arguments have been advanced to explain the morphological differences, including intrinsic differences in the AGNs powering these sources, the environments of the sources on large or galactic scales, or the mode of accretion. However, these factors have not been able to successfully explain the FR dichotomy \citep{Gendre2013}. Apart from these sources, there are the so-called FR 0 sources \citep{Sadler2014,Baldi2016} as well as sources with `hybrid' morphology \citep{Gopal-Krishna2000} which require further examination. An issue in the latter studies is the relatively small fraction of FRI sources found in current all-sky surveys: due to the relatively high detection and completeness thresholds of these surveys, high-redshift and/or low-luminosity, low-surface-brightness source populations are not probed well. Upcoming all-sky surveys with the SKA will probe FRI populations to high redshifts \citep{kapinska2015} and should be able to answer these questions \citep{kharb2016}. The other category of sources considered in this work are bent-tail sources. Bent-tailed radio galaxies include wide-angle tailed (WAT), head-tail (HT) and narrow-angle tailed (NAT) radio galaxies. As their name suggests, these radio galaxies have jets (`tails') which are bent at an angle from the host optical galaxy, the nature of the angle between the jets determining whether the radio galaxy is a WAT or a NAT. In some of these galaxies the jets are swept back to such an extent that they appear as a head (the core) and a tail. These are the HT radio galaxies.
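The FR criterion mentioned above is simple enough to express in a few lines of code. The following toy sketch (our illustration, not part of any classification pipeline discussed here) assumes the separation of the brightest points and the total source extent have already been measured in the same units:
\begin{verbatim}
def fr_class(brightest_point_separation, total_size):
    # R: ratio of the separation between the brightest points
    # to the total extent of the source (same units, e.g. arcsec)
    r = brightest_point_separation / total_size
    return "FRI" if r < 0.5 else "FRII"

print(fr_class(20.0, 60.0))  # R ~ 0.33 -> FRI (edge-darkened)
print(fr_class(50.0, 60.0))  # R ~ 0.83 -> FRII (edge-brightened)
\end{verbatim}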
The peculiar radio morphology of the bent-tailed sources is generally attributed to their environment, typically a galaxy cluster or a group \citep{burns98}. As such, these sources can be used as tracers of clusters of galaxies \citep{rgz16,mao_atlas}, especially at high redshifts where the information from the optical or X-ray bands may be unavailable or sparse. The plan of the paper is as follows. In the next section we describe the source sample chosen for training and classification. Section 3 gives a concise background of convolutional neural networks. Section 4 contains a description of the specific neural network model we have chosen, the pre-processing needed for the sample source images and the training process. Section 5 explains the classification model used to determine the final classification of the sources. Section 6 presents the results and discussion. In Section 7, we briefly summarize the study and present conclusions. \section{Sample Selection} \label{sample_selection} In this section we describe the sample formation for this study. We have formed separate samples for FRI, FRII and bent-tailed radio galaxies respectively. The factors considered while selecting the samples were large sample sizes, well-resolved images and the free availability of images. With these constraints, we have decided to restrict ourselves to sources from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) \citep{FIRST1995} radio survey. As described below, since there are no sufficiently large source samples of each category, we have combined several different samples of sources and, further, created artificial sources by processing the sources from these samples (see Section~\ref{preprocessing_sample_images} for details). We initially selected the FRI-II sample from a subset of the Combined NVSS and FIRST Galaxies sample \citep{config1,config2} (CoNFIG henceforth). This sample of radio sources was compiled specifically to address the need for, and lack of, samples of FRI-II sources in the literature. The CoNFIG sample was compiled from an overlapping region of the NRAO VLA Sky Survey (NVSS) \citep{NVSS1998} and FIRST surveys. The CoNFIG sample is divided into four sub-samples of varying flux density limits in NVSS, named CoNFIG-1 to 4, with $S_{1.4 \rm GHz}\geq 1.3$, $0.8$, $0.2$ and $0.05$ Jy respectively (CoNFIG-2-4 are spatially subsets of CoNFIG-1). It should be noted that even the faintest sources in the sample are bright relative to the bulk of the sources expected to be detected in upcoming surveys. In total, the source catalogue from CoNFIG contains 859 sources. This CoNFIG sample was classified on a morphological basis into two categories, FRII and FRI radio galaxies (as well as compact sources and sources of uncertain morphology, which we do not include in this study). As the NVSS images for most of these sources are unresolved with the NVSS beam FWHM of $45''$, the structural information is obtained from the FIRST images, which have a beam FWHM of $5''$. The criteria for the classification were the presence of `hotspots' at the edge of the radio lobes as well as the alignment of the lobes (if the lobes showed hotspots and were aligned, the source was classified as FRII; collimated jets and hotspots close to the core were taken as signs of FRI radio galaxies; note that this includes bent-tailed radio galaxies). In the present study we make use of only the sources classified as FRI/II from these.
The FRI/II radio galaxies have an associated flag which can be understood as the degree of confidence in the classification of the source; the flag can be either `confirmed' or `possible'. The final classification of the sample provides 71 FRIs (50 confirmed) and 406 FRIIs (390 confirmed). As an initial sample we have chosen the 50 confirmed FRIs and 390 confirmed FRIIs. The sparsity of the FRI-type radio galaxies is due to the relatively shallow flux density limits of the CoNFIG survey. For example, at $z=0.15$, the median redshift of FRI radio galaxies in CoNFIG-4 (which is the deepest and spatially the smallest CoNFIG region), the limiting flux density for the other three regions corresponds to a radio power above the nominal radio power divide between the FRI/FRII classes. The bulk of the FRIs comes from low redshifts, while the reverse is true for the FRIIs. To supplement the smaller number of FRI radio galaxies and address the imbalance in the training set (see Section \ref{preprocessing_sample_images} for more details), we decided to include the recent FRICAT catalog of FRI radio galaxies \citep{FRICAT}. The FRICAT catalog is a subsample of the \citet{bestandheckman12} sample, obtained by imposing an upper redshift cut of $z = 0.15$, which gave an initial sample of 3357 sources. A further constraint was applied requiring the radio emission to extend at least 30 kpc from the centre of the host galaxy as seen in the FIRST images (corresponding to $11.4''$ for the most distant objects in the FRICAT catalog, thus giving several resolution elements for the smallest sources in the sample). Further, only sources displaying FRI morphology were retained (sources with one-sided and two-sided jets, as well as narrow-angle tailed objects, were included). This classification was done visually by all three authors independently, and a source was included in the catalog if at least two of the authors agreed on the classification. This makes the classification more robust; it is also similar to the procedure we have adopted independently (see Section \ref{classification_model}). Including the FRICAT catalog gives another 219 FRIs (we have excluded the sample of small FRI galaxies included in the FRICAT catalog in the present study). It should be noted that the majority of the FRI source sample for this study is from the low-redshift universe, while the majority of the FRII radio galaxies correspond to relatively high redshifts. This also means that, for a given physical extent, FRIs would show more structural detail. For bent radio galaxies, we have made use of the catalog from \citet{proctor11}, where the FIRST radio source database has been classified into morphological categories using a combination of pattern-recognition tools and visual inspection (the latter for sources with more than four components and thus expected complex morphology). For details of the classification method, see \citet{proctor03} and \citet{proctor06}. In brief, sources in the FIRST catalog are separated into groups, with low-count groups (those with fewer than 3 members) being classified using decision tree pattern recognition techniques into various categories (WAT, NAT, W-shaped sources etc.), and higher-count groups classified using visual inspection. We make use of only the latter category to form a sample of bent-tailed sources. These sources have been visually examined and classified into a variety of types.
From these, we have chosen only the confirmed WAT and NAT radio galaxies, excluding those sources where the WAT or NAT identification is uncertain (marked by `?' next to the classification in the table). This gave us 299 bent-tailed radio galaxies. \begin{table}[!htbp] \centering \begin{tabular}{lllll} \hline \multicolumn{1}{l}{} & \multicolumn{1}{l}{Initial Sample Size} & \multicolumn{1}{l}{Image-based Cut} & \multicolumn{1}{l}{Morphology-based Cut} & \multicolumn{1}{l}{Final Sample Size} \\ \hline FRI Sources & 269 & 77 & 14 & 178 \\ \hline FRII Sources & 390 & 92 & 14 & 284 \\ \hline Bent-tailed Sources & 299 & 11 & 34 & 254 \\ \hline \end{tabular} \caption{Table summarizing the sample selection process; the image-based cut refers to sources excluded due to the presence of artefacts, lack of structural information or very large source size, while the morphology-based cut refers to sources discarded due to confusion regarding the morphology.} \label{table_source_selection} \end{table} From these initial samples, we have excluded all the source images in which there are strong artefacts. We have also excluded those images which contain multiple sources, as well as sources too large to fit in the cutout and sources too small to have sufficient structural information. This reduces the sample size to 47 FRIs from the CoNFIG sample and 145 FRIs from the FRICAT sample (giving 192 FRIs in total), 298 FRIIs and 288 bent-tailed radio galaxies. Since all of these are samples based on visual inspection, there is a possibility of confusion in the assigned class of a source due to different studies estimating different morphologies for the same source. To resolve this issue, we have excluded all \textit{cross-matched} sources from the FRI and FRII samples which have been marked as confirmed WAT/NAT, W-shaped or ring (and ring-lobe) morphologies in either \citet{config2} or \citet{proctor11}. We have removed the bent-tailed radio galaxies from the FRICAT by visual inspection. After removing these sources, we are left with $178$ FRIs, $284$ FRIIs and $254$ bent-tailed radio galaxies. This process is summarized in Table~\ref{table_source_selection}. In the next section, we describe the convolutional neural network which will be trained for classification using this sample of sources. \section{Convolutional Neural Networks} \label{convolutional_neural_network} Artificial neural networks (ANN), inspired by biological neurons, try to approximate nonlinear functions of a set of inputs through combinations of simple functions \citep{cybenko1989approximation}. ANNs generally consist of a network of interconnected neurons, each of which may have many inputs and a single output, like a biological neuron. With a proper learning rule and activation functions, such interconnected neurons in a specific architecture can be used for classification and regression applications \citep{jain1996artificial}. The output $\mathit{y}$ of a single neuron can be mathematically represented as \begin{equation}\label{neuron} y = \sum_{j=1}^{d} w_{j}x_{j} + w_{0} \end{equation} where the $\mathit{x_{j}}$ are the different inputs to the neuron, the $\mathit{w_{j}}$ are the weights of the corresponding inputs and $\mathit{w_{0}}$ is the bias term. The sum $\sum_j w_j x_j$ represents a dot product between the weight and input vectors. The output $y$ is then usually passed through an activation function. Similar to the action potential in a biological neuron, which decides the rate of neuron firing, the activation function in an artificial neuron restricts the neuron output to normalizable values.
\begin{equation} \label{neuron-new} \hat{y} = f \left(y \right) \end{equation} $\mathit{f}$ in equation \ref{neuron-new} is the activation function. Activation functions are of different types, namely threshold functions, piece-wise linear functions and sigmoid functions \citep{duda2012pattern}. Similarly, a large number of neurons can be interconnected in multiple layers, each with a distinct activation function \citep{duda2012pattern}. Equation \ref{neuron} can then be extended to \begin{equation}\label{hidden} Net_{k} = \sum_{j} ^{n_H} y_{j} w_{kj} + w_{k0} \end{equation} In equation \ref{hidden}, $\mathit{k}$ indexes the units in the output layer and $\mathit{n_H}$ is the number of hidden units. Combinations of such hidden layers can be used to learn non-linear functions with backpropagation \citep{hecht1989theory}. During the learning process, the inputs, multiplied by their associated weights and biases, propagate from the input layer to the output layer through the different hidden layers of neurons. This is commonly referred to as the forward pass or forward propagation. At the output, the error between the calculated output and the expected output is estimated, and this error is sent back from the output to the input layer to adjust the weights of the neurons. This is called the backward pass or backpropagation. In a convolutional neural network, the dot product in equation \ref{hidden} is replaced by a convolution operator. Hence, $\mathit{w_{j}}$ will be a vector instead of a single value, as in the case of a normal neural network. $\mathit{w_{j}}$ is often called a kernel or filter. This allows convolutional neural networks to operate directly on raw data such as images or time series, as opposed to the feature vectors used in normal neural networks \citep{lecun1995convolutional}. LeCun first showed the successful application of convolutional neural networks (CNN) to digit recognition \citep{lecun1995convolutional}. CNNs have been widely used for image classification \citep{lawrence1997face}, speech signal processing \citep{hinton2012deep} and text classification \citep{collobert2008unified}. CNNs are also referred to as Time Delay Neural Networks (TDNN) because they are generally insensitive to translations of a pattern \citep{duda2012pattern}. This property is achieved by a method called weight sharing, which constrains backpropagation to generate the same weight values for similar regions in the input space. The input space refers to the data space which is input to the network, for example images or time series data. Weight sharing is an important property of CNNs which allows the generation and extraction of translation-independent features from the raw data. It can be explained with the following example. Consider a cutout image of an FRI-type galaxy. The galaxy remains FRI whether it is at the center of the image cutout or at any of the corners, provided it is clearly visible. The same is the case when the spatial size of the galaxy within the image changes. There are specific properties that make the galaxy FRI type irrespective of its position, size, flipping or mirroring, and tilt. The CNN learns features that are shift, translation and rotation invariant through weight sharing. This simply means that the set of weight values which represents the FRI galaxy features is the same irrespective of translational and rotational variations across different samples of the same type.
Thus the weights for a specific property of a class are shared among different samples. For example, the set of weights which extracts features for the two hotspots of an FRII galaxy is the same for any sample of FRII-type galaxy. CNNs also have the same feedforward operations as a conventional neural network, enabling the application of similar learning principles. One of the main advantages of convolutional neural networks is that the input to the network is the raw data, or images in this case, rather than feature values designed by astronomers \citep{krizhevsky2012imagenet}. This enables the network to learn and generate a hierarchy of features with minimal information loss \citep{oquab2014learning}. Each layer of convolutions learns different features. For example, the first few layers learn simple features such as edges and corners. Successive layers combine these elementary features into more complex features to generalize the input data. This succession of feature learning generates a hierarchy of features \citep{masci2011stacked}. Figure \ref{cnn} shows a single layer of convolution in a CNN. \begin{figure}[!ht] \centering \includegraphics[scale=1]{featuremaps.eps} \caption{Illustration of a single convolutional layer with multiple output feature maps. With a single input it is possible to learn different features with different filters/kernels, thus creating a depth of feature maps. The gray squares represent different filters learned in a single layer. The small squares on the input image with pointers to the feature maps simply show the convolution kernel sliding across the image to generate values in the feature maps.} \label{cnn} \end{figure} CNNs are generally characterized by a third dimension in the network, often called the depth. In the case of image data, different kernels/filters can be learned at once in a given layer, as shown in Figure \ref{cnn}. This enables the learning of different features of the input data. Each learned kernel adds to the depth dimension of a layer. The general notion in CNNs is that the depth increases in the forward direction. The complexity of the features learned in each layer increases in the forward direction, and finally they are combined into a fully connected layer. The final layer usually comprises a cross-entropy function \citep{hagenauer1996iterative,de2005tutorial} for calculating the loss and a scoring layer at the end. The cross-entropy function is a decoding scheme used in information theory which is based on the probability distribution of the sample classes. The final feature layer in a neural network needs to be decoded, or converted, to give the correct class of the training/test sample. The cross-entropy does this job by looking at the distribution of the feature values. In this study we will be using a binary cross-entropy function, since the models will individually be doing binary classifications. Deep neural networks are neural networks which have many hidden layers, generally more than two \citep{hinton2012deep}. A convolutional neural network which has multiple layers of convolutions is termed a Deep Convolutional Neural Network (DCNN) \citep{krizhevsky2012imagenet}. DCNNs have been widely used for image recognition and speech signal processing applications, and have been performing with exceptional accuracies \citep{krizhevsky2012imagenet,hinton2012deep,lecun2015deep}. In this study we have made use of DCNNs for radio galaxy classification.
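To make the scoring step concrete, the following is a minimal NumPy sketch of the softmax scoring and binary cross-entropy loss described above (an illustration only, not the framework code used in this study):
\begin{verbatim}
import numpy as np

def softmax(z):
    # convert the two raw output scores into class probabilities;
    # subtracting the maximum improves numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

def binary_cross_entropy(p, label):
    # cross-entropy loss for a two-class problem;
    # p: softmax probabilities (2,), label: true class index (0 or 1)
    return -np.log(p[label] + 1e-12)   # epsilon avoids log(0)

scores = np.array([1.3, -0.4])     # toy outputs of the final layer
p = softmax(scores)                # e.g. [0.85, 0.15]
loss = binary_cross_entropy(p, 0)  # small loss: correct class favored
\end{verbatim}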
Another interesting property of DCNNs is transfer learning \citep{yosinski2014transferable}. Transfer learning makes it possible to train a network for a new application with few training examples. In areas like astronomy, it is often difficult to have clean, hand-labeled datasets for different applications. It is possible to exploit the transfer learning property of DCNNs to train a pre-existing model for a different classification problem. In addition, it allows improving an existing model without having to retrain it from scratch, as opposed to other, shallow, machine learning methods. One of the main objectives of this study is also to provide a DCNN model that can be used for future transfer learning applications. DCNNs have been used for different applications in optical astronomy such as star-galaxy classification \citep{kim2017star} and redshift estimation \citep{hoyle2016measuring}. In recent work by \citet{dieleman2015rotation} a rotationally invariant convolutional neural network was used for optical galaxy classification, which gave near-human accuracy. These results provide motivation for the application of such techniques to radio astronomy as well. \section{Network Model} \label{network_model} Neural network model design is in general considered to be a hyperparameter optimization problem. This simply means that there is no strict guideline for the design of a neural network; there is no rule to decide the number of hidden layers or the number of neurons for a model. The model design is usually done with respect to the complexity of the data under investigation. In the case of convolutional neural networks, model complexity is generally found to increase with the complexity of the objects to be classified. Even though simplified models can deliver good accuracies, their prospects for transfer learning \citep{oquab2014learning} are limited. This is because simple models generally have fewer layers of convolutions and activations. For this study we initially explored different simple models with up to 5 layers comprising 3-4 convolutional layers. During training these models performed poorly, with accuracies below 60\%, which is only slightly better than a random guess. One of the objectives of this study is to provide a model complex and standard enough that it can be used, through transfer learning, for studying more complex source morphologies with fewer training samples. Therefore we chose to use a standard model which has been successfully used in different transfer learning settings \citep{oquab2014learning}. We have used a slightly modified version of the Alexnet convolutional neural network \citep{krizhevsky2012imagenet}, see Figure \ref{alexnet}. This model has been successfully used for different image classification problems, and it gave promising results (accuracies greater than 80\%) in the initial tests we did. The advantage of this model is that it can be easily adapted to new classification problems and can also handle background noise in images \citep{joshi2012scalable,sukhbaatar2014training}. The original network is designed to work on color images; in our case we have modified it to work on single-channel images. We have also made corrections to handle the image size and the number of classes.
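For illustration, the layer stack of Table \ref{table1} could be written as follows in a modern framework (a sketch assuming TensorFlow/Keras, which is not necessarily the framework used in this study; the normalization layers are omitted for brevity):
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def modified_alexnet(input_shape=(150, 150, 1), n_classes=2):
    model = models.Sequential([
        layers.InputLayer(input_shape=input_shape),  # gray-scale input
        layers.Conv2D(96, 11, activation="relu"),    # Conv1
        layers.MaxPooling2D(3, strides=2),           # Pool1
        layers.Conv2D(256, 5, activation="relu"),    # Conv2
        layers.MaxPooling2D(3, strides=2),           # Pool2
        layers.Conv2D(384, 3, activation="relu"),    # Conv3
        layers.Conv2D(384, 3, activation="relu"),    # Conv4
        layers.Conv2D(256, 3, activation="relu"),    # Conv5
        layers.MaxPooling2D(3, strides=2),           # Pool5
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),       # FC1
        layers.Dropout(0.5),                         # drop weak connections
        layers.Dense(4096, activation="relu"),       # FC2
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax")  # FC3 + scoring
    ])
    model.compile(optimizer="sgd",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}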
\begin{figure}[!htb] \includegraphics[width=\textwidth]{Alexnet.eps} \caption{Convolutional neural network architecture used in this study. The model takes in a single-channel gray-scale image of size 150 $\times$ 150 and outputs the classification scores for two classes. The network has a total of 12 layers, with 5 convolutional layers, 3 pooling layers, 3 fully connected layers and a scoring layer. The scoring layer produces the probability scores for the two classes.} \label{alexnet} \end{figure} The model consists of 5 convolutional layers with three sets of maxpooling layers. The maxpooling layers are basically sub-sampling layers: they perform down-sampling of an input layer in a non-linear fashion so as to reduce the computational complexity \citep{boureau2010theoretical} during forward propagation. Each convolution is followed by a Rectified Linear Unit (ReLU) \citep{nair2010rectified}, which is a kind of activation layer. This is then followed by a normalization layer (NORM). The final pooled output (Pool5) is then passed on to a series of three fully connected layers, which also have ReLU activations. Figure \ref{layerexp} shows what happens in the main component layers of the network, namely the convolutional layer and the pooling layer. \begin{figure}[!htb] \centering \includegraphics[scale=.5]{CNN-exp.eps} \caption{Generalized illustration of the convolution layer and pooling layer of the model. Convolution and pooling layers are the two main components of the DCNN used in this study. The figure shows the output of two such layers in the model. The leftmost column shows a preprocessed input image which is fed into the network. The second column shows just six filter outputs of the first convolutional layer. It can be seen from the highlighted pixels that each filter/feature map has learned different features of the input image. Not all the filters have learned proper features, as can be seen from the low-valued pixels in the output. The rightmost column shows the outputs of the pooling layer of the corresponding convolutional layers. It can be seen that the pooling layer has down-sampled the output of the first convolutional layer. Similar operations happen in the rest of the network.} \label{layerexp} \end{figure} Within the fully connected layers, neurons having weak connections are dropped out during training. Those neurons/nodes whose weights have a very small value do not interact with other nodes. Such node connections are insensitive to weight updates and are termed weak connections. These nodes can be discarded from the network since they do not influence the forward pass. This procedure is called \textit{dropout} and is a mechanism to avoid over-fitting \citep{srivastava2014dropout}. 50\% dropout is carried out in this network, which essentially removes 50\% of the weak neuron connections. The fully connected layer FC3 has a depth of two, in contrast to the other fully connected layers. During training the final layer calculates the cross-entropy loss \citep{gold1996softmax}, and during validation it outputs a softmax probability score for the two classes. Thus the network has 12 different layers, including the different convolutional layers, the pooling and fully connected layers, and the softmax layer. The functional description of each layer, the kernel sizes and the learned parameters are given in Table \ref{table1}.
\begin{table}[!ht] \centering \caption{Table of Layer parameters and functions} \label{table1} \begin{tabular}{ccccc} \hline \textbf{Layer name} & \textbf{Function} & \textbf{Depth} & \textbf{Kernel size} & \textbf{Parameters} \\ \hline \hline Conv1 & Convolution & 96 & 11x11 & \multirow{2}{*}{11712} \\ \cline{1-4} Pool1 & Max Pool & 96 & 3x3 & \\ \hline Conv2 & Convolution & 256 & 5x5 & \multirow{2}{*}{307456} \\ \cline{1-4} Pool2 & Max Pool & 256 & 3x3 & \\ \hline Conv3 & Convolution & 384 & 3x3 & 885120 \\ \hline Conv4 & Convolution & 384 & 3x3 & 663936 \\ \hline Conv5 & Convolution & 256 & 3x3 & \multirow{2}{*}{442624} \\ \cline{1-4} Pool5 & Max Pool & 256 & 3x3 & \\ \hline FC1 & Fully Connected Layer & 4096 & \multirow{3}{*}{} & 16781312 \\ \cline{1-3} \cline{5-5} FC2 & Fully Connected Layer & 4096 & & 16781312 \\ \cline{1-3} \cline{5-5} FC3 & Fully Connected Layer & 4096 x 2 & & 8194 \\ \hline Softmax & Softmax Layer & 2 & \multicolumn{2}{c}{} \\ \hline \hline \multicolumn{4}{c}{\textbf{Total Number of parameters learned}} & 35881666 \\ \hline \end{tabular} \end{table} In Table \ref{table1} we can see that, as we move through the convolutional layers in the forward direction, the total number of parameters the network has to learn increases dramatically. This is one drawback of deep neural networks: one needs a large amount of memory to hold these parameters during training. At present this challenge has been overcome with the advent of GPUs, which can do fast computation of error gradients in a neural network while also having enough memory to hold a large number of parameters during the training phase. During the testing phase, where there is only forward computation and no backward computation, the memory requirement is reduced. Memory overflow can only happen when the batch size of the input is too large or the image size is too big compared to the total available memory. \subsection{Pre-processing Sample Images} \label{preprocessing_sample_images} Certain image preprocessing steps are desirable before an image is fed into a convolutional neural network or any other machine learning algorithm. This is usually done to maintain the homogeneity of the sample space. This procedure is of specific importance for convolutional neural networks because the neurons behave like visual receptors, similar to human visual receptors. The basic idea is that if a human is able to \textit{see} an object, the network should also be able to \textit{see} it. The different stages are shown in Figure \ref{preprocess}. First, the sigma-clipped statistics\footnote{Using the Astropy functionality for sigma clipping, \texttt{http://docs.astropy.org/en/stable/api/astropy.stats.sigma\_clipped\_stats.html\#astropy.stats.sigma\_clipped\_stats}} of each image are estimated in order to calculate the background noise and flux levels. With sigma-clipped statistics, pixels beyond a certain sigma level from the median are discarded when the background is estimated. In this study all values below the 3$\sigma$ level of the background were cut off by suppressing those values to zero, so as to highlight the contribution of the source and remove any unwanted background noise. The value for sigma-clipping was chosen by training and testing the model at different sigma values. After different iterations and tests we found that the value of 3-sigma was better than 2-sigma or 5-sigma; in all cases other than 3-sigma the model had an accuracy of less than 60\%.
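For reference, this clipping step can be written in a few lines of Python with the Astropy routine cited above. The snippet below is a minimal sketch only: the helper name \texttt{clip\_background} and the exact thresholding convention (nulling pixels below the clipped median plus $n\sigma$) are our own illustrative assumptions, not the exact code used in this work.
\begin{verbatim}
import numpy as np
from astropy.stats import sigma_clipped_stats

def clip_background(image, nsigma=3.0):
    # Background statistics from iterative 3-sigma clipping.
    mean, median, std = sigma_clipped_stats(image, sigma=3.0)
    clipped = image.copy()
    # Null every pixel below nsigma times the noise above the background.
    clipped[clipped < median + nsigma * std] = 0.0
    return clipped
\end{verbatim}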
\begin{figure}[!htb] \includegraphics[scale =.65]{preprocess.eps} \caption{The different stages of image pre-processing before training / testing the network. (a) The image of the object, which is of size 300x300 pixels. (b) The pixels below the noise threshold are suppressed with sigma-clipped statistics. (c) The image is cropped from the center to size 150x150 pixels, and similarly for the rotated and flipped versions. The smaller boxes show different cut-outs at different rotations.} \label{preprocess} \end{figure} One of the important requirements of neural networks is a large number of training samples. With less than 500 training samples in total, it is impossible for the network to learn the features and generalize the different classes. To overcome this issue, training samples can be over-sampled while preserving the labels by rotating and flipping each sample \citep{krizhevsky2012imagenet}. Generally speaking, the nominal number of training examples needed to train machine learning algorithms is of the order of 10000; for deep neural networks this number is much larger. In this particular case, where we have only a few training samples, the label-preserving oversampling helps create a large training set to train the network optimally. Here each sample, which is either a rotated or a flipped version of a specific image, is treated as a unique training sample. This procedure also helps the network learn rotation-invariant features of the samples in the convolutional layers. Another issue that needs to be addressed along these lines is the class imbalance in the synthetic training set. Machine learning algorithms require a fairly equal number of training samples for each class for the model to have a good balance between bias and over-fitting \citep{duda2012pattern}. The model will tend to over-fit if there is a large number of training samples for a specific class compared to the others. In this study we have balanced the number of training samples by suitably choosing the number of angles for the rotations and their flipping. This procedure helps to obtain more evenly distributed samples in the parameter space learned by the convolutional layers. The different pre-processing steps, shown in Figure \ref{preprocess}, can be summarized as follows. The downloaded images were 300x300 pixels in size. The images were rotated by small angles in steps of either 1$^{\circ}$, 2$^{\circ}$ or 3$^{\circ}$, the step being chosen such that all the different classes had a roughly equal number of bootstrapped training samples, so as to minimize any over-fitting issues in the model. Afterwards a 150x150 patch centered on the source was cut out from the main image. Flipped and rotated versions of the samples were also generated (a short sketch of this procedure is given below). \subsection{Training} \label{training} To evaluate machine learning models, the whole dataset is generally split into two parts. The first part of the data is used to train the machine learning model. The second part is used to validate the performance of the trained model and is known as validation or test data. It is general practice to take the larger portion of the data for training and the remainder for validation. None of the samples in the validation set are seen by the model during training. In this study the complete dataset for the 3 classes was split with a 70-30 ratio, where 70\% of the original data were taken for training and 30\% for validation.
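Before quoting the resulting sample numbers, the label-preserving oversampling of Sect.~\ref{preprocessing_sample_images} can be sketched as follows. This is a minimal illustration: the helper name \texttt{augment} and the fixed pairing of a horizontal flip with every rotation are our own simplifications, since in practice the number of angles and flips was tuned per class to balance the training set.
\begin{verbatim}
import numpy as np
from scipy.ndimage import rotate

def augment(image, angle_step=1, size=150):
    samples = []
    for angle in range(0, 360, angle_step):
        # Rotate about the image center without changing the array shape.
        rot = rotate(image, angle, reshape=False)
        for img in (rot, np.fliplr(rot)):
            # Cut out a size x size patch centered on the source.
            cy, cx = np.array(img.shape) // 2
            h = size // 2
            samples.append(img[cy - h:cy + h, cx - h:cx + h])
    return samples

# A 300x300 cutout rotated in 1-degree steps, with flips,
# yields 720 label-preserving samples.
patches = augment(np.zeros((300, 300)), angle_step=1)
\end{verbatim}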
With this split, the actual number of training samples was 125 FRIs, 227 FRIIs and 177 Bent-tailed sources. The number of validation samples was 53, 57 and 77 for FRI, FRII and Bent-tailed radio galaxies respectively. The data oversampling and augmentation were done for the training set. Thus, the number of samples was 45000 for FRIs, 40680 for FRIIs and 31860 for Bent-tailed `sources'. Afterwards the training data was again split randomly for training and testing with an 80-20 ratio. Therefore the number of samples that went into training from this second split was 36000 FRIs, 32688 FRIIs and 25488 Bent-tailed `sources'. The training and test samples, being over-sampled versions of the same training images, overlap since they were generated from the same base samples; the validation samples, however, which were separated in the original 70-30 split, were never seen by the network during training. The network was implemented with the deep learning package called Caffe \citep{jia2014caffe}, which is widely used in computer vision applications. We used an NVIDIA forked version of Caffe to support training on multiple graphical processing unit (GPU) cards. Images in Portable Network Graphics (PNG) format were converted to a Lightning Memory-Mapped Database (LMDB) for fast data access during training. The training was done on a machine with an Intel(R) Xeon(R) CPU, 260 GB memory and four TITAN-Black GPUs with 12GB RAM each. The kernels of each layer were initialized with random Gaussian values. We used a stochastic gradient descent algorithm \citep{duda2012pattern} with a batch size of 100 for training. The batch size determines the number of samples that are used for a single forward pass before the backpropagation error is calculated by the stochastic gradient descent algorithm. The best learning-rate schedule was a step function with a base learning rate of 0.01. The training was done for 30 epochs and a validation of the network was done during every epoch to keep track of the learning performance. The learning curves for the three binary classification models are shown in Figure \ref{learningcurve}. \begin{figure*}[ht!] \centering \gridline{\fig{fr1vsfr2.eps}{0.35\textwidth}{(a)} \fig{fr1vsbent.eps}{0.35\textwidth}{(b)} \fig{fr2vsbent.eps}{0.35\textwidth}{(c)} } \caption{Learning curves showing the training loss and test accuracy for the three different binary classification models. It can be seen that the test accuracy and training loss saturate after 10 epochs for all three models. (a) shows the training and testing accuracy for the FRI vs FRII classification. Similarly, (b) and (c) show the learning curves for the FRI vs bent-tailed and FRII vs bent-tailed classifications, respectively.} \label{learningcurve} \end{figure*} The learning curves give a measure of the performance of the machine learning model on the training and testing data \citep{perlich2011learning}. The training loss, which is a negative log-likelihood, is calculated from the cross entropy error \citep{hinton2006reducing} and is given as \begin{equation} \label{loss-eqn} L(w) = - \frac{1}{N} \sum_{n=1}^{N} \left[y_{n} \log \hat{y}_{n} + (1 - y_{n}) \log (1 -\hat{y}_{n}) \right] \end{equation} In equation \ref{loss-eqn}, $N$ is the number of training samples, $\mathit{w}$ is the weight vector, $y_{n}$ is the true label of the $n^{th}$ sample and $\hat{y}_{n}$ is the corresponding output of a forward pass. The stochastic gradient algorithm minimizes the error $L(w)$ by properly adjusting the values of the weight vector $\mathit{w}$.
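As a concrete illustration of Eq.~(\ref{loss-eqn}), the loss can be evaluated with a few lines of numpy. This sketch only mirrors what Caffe computes internally during training; the small constant \texttt{eps}, which guards against taking $\log 0$, is our own numerical safeguard.
\begin{verbatim}
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 to avoid log(0).
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1.0 - y_true) * np.log(1.0 - y_pred))

# Three samples: true labels and predicted probabilities.
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.7])
print(cross_entropy_loss(y_true, y_pred))  # ~0.23
\end{verbatim}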
Thus the training loss gives us an idea of how well the model is learning over each iteration or epoch. The test accuracy is determined when, for each epoch, the model parameters are fixed and no learning takes place, while the model is tested against the test data. The test accuracy computed for each epoch is given as \begin{equation} \mathrm{Accuracy} = \frac{1}{N} \sum_{n=1}^{N} \delta \left\lbrace \hat{l}_{n} = l_{n} \right\rbrace , \qquad \delta \left\lbrace \mathrm{condition} \right\rbrace = \begin{cases} 1 & \text{if condition holds} \\ 0 & \text{otherwise} \end{cases} \end{equation} where $N$ is the number of test samples, $\hat{l}_{n}$ is the predicted class label for the $n^{th}$ sample and $l_{n}$ is the true label. With four GPUs, the training for each test case took around 1.2 hours. In all cases we observed that the accuracy tended to saturate at high values after 10 epochs while the training loss fell steeply to very low values. This is because each model is a simple binary classification problem and the network tends to learn quickly, without needing many training epochs. \subsection{Filter Visualization} \label{filter_visualization} Filter visualization of the network model helps to understand what happens as the learning progresses with each epoch. Different filters learn different properties/features of the object. Looking at the filter visualizations and the learning curves in Figure \ref{learningcurve}, we can see how the confidence of the network improved with each epoch. During the first few epochs the network had confusion between the object and the background, and with further learning it gained confidence in distinguishing the two. This is shown in Figure \ref{filter}. \begin{figure}[!ht] \centering \includegraphics[scale=.5]{filter_change.eps} \caption{Visualization of the output of two random filters from the first convolutional layer of the network. During the initial epochs the weights had small values (middle row); as the learning progressed, the network learned to distinguish between the background and the object. It can be seen from the visualization at Epoch 30 (bottom row) that the weights had solidified, with larger values separating the object from the background. The filter values here are scaled from their actual values for visualization.} \label{filter} \end{figure} Figure \ref{filter} shows the output from two random filters in the first convolutional layer at the first epoch and the last epoch. It is evident that the network has learned to distinguish between the object and the background. Filter 12 has learned the sharp/high-frequency features of the radio galaxy and Filter 93 more of the smooth features. In both cases the network has learned to recognize the radio galaxy in the final epoch. Initially only a part of the radio galaxy is recognized, while later on the major parts of the galaxy are recognized. The filter outputs in the initial layers are fairly easy to explain, but as one goes into further layers in the forward direction, the filter visualizations become more difficult to explain \citep{zeiler2014visualizing}. \section{Classification Model} \label{classification_model} With three classes of objects in the study, we trained three different models for binary classification, namely FRI vs FRII, FRI vs Bent-tailed radio galaxies and FRII vs Bent-tailed radio galaxies. Since the actual sample sizes of the three classes of objects for training are highly imbalanced, there will be a general bias in the models towards classes having comparable and large sample numbers.
Initially we trained a single DCNN to classify the three objects together, so that the single model would predict whether a given sample was an FRI, an FRII or a Bent-tailed radio galaxy. Even though the bootstrapping procedure generated synthetic training samples to overcome the issues of few training examples and class imbalance, the single model we trained to classify the three classes performed inefficiently during training and validation. During training the model showed a large training loss, which is a clear indication of poor learning and over-fitting. It was found during training that the training error increased during each epoch and the corresponding test accuracy in each epoch went below 60\%. This confirmed that the model was not performing well. The first attempt to minimize this large training loss with a single model was to change the loss function used for training. The Info-Gain loss function is designed for, and suited to, tackling class imbalance in convolutional neural networks \citep{jia2014caffe}. We experimented with the info-gain loss function using different parameters and retrained the network. It was found that, with all the different parameters we tried to optimize with this loss function, the network did not learn the task optimally and both the training and validation results were poor. We then broke the three-class classification problem into three binary classifications, which performed better in terms of individual classifications. To avoid further tuning of the model complexity, we have made use of a fusion classifier combining the three binary models; this is basically a majority voting classifier \citep{dietterich2000ensemble}. This is illustrated in Figure \ref{fusion}. \begin{figure}[!htb] \centering \includegraphics[scale=.4]{Fusion.eps} \caption{Fusion model with a majority voting ensemble classifier which combines the predictions from the three binary classifier models to make the final prediction. The figure shows the individual predictions from the three binary classifiers being fed into a fusion classifier which gives the final classification. This model is ideal in situations where the individual models have a slight bias, and is also beneficial for identifying odd inputs.} \label{fusion} \end{figure} The fusion model takes the individual predictions of the binary classifiers and their corresponding probabilities to make the final prediction. In general, if for a given sample two classifiers predict the same class with high probability, then the final class will be that class. But if the three binary predictions are different and have mixed or low probabilities, such samples will be rejected and classified as ``\textit{strange}" objects, and their probability value will be set to zero. This allows users to find objects of potentially interesting or confusing morphology. \pagebreak \section{Results \& Discussion} \label{results_and_disctussion} The performance of the fusion model is evaluated on the basis of the classification precision, recall and $F_{\beta}$ score in percentage. The precision gives a measure of correctly classified samples and is given as \begin{displaymath} \mathrm{Precision} = \frac{TP}{TP + FP} \end{displaymath} where $TP$, the true positives, is the number of correctly classified test samples and $FP$, the false positives, is the number of incorrectly classified test samples.
The recall, which is also called the sensitivity of the classifier, is given as \begin{displaymath} \mathrm{Recall} = \frac{TP}{TP+FN} \end{displaymath} where $FN$ is the number of false negatives in the prediction. The recall value can be used to check if the model is over-fitting. For a good model, the precision and recall should both be high. The $F_{\beta}$ score is a measure that combines the values of precision and recall, and is expressed as \begin{displaymath} F_{\beta} = (1+\beta^{2}) \cdot \frac{\mathrm{Precision} \times \mathrm{Recall}}{\beta^{2} \cdot \mathrm{Precision} + \mathrm{Recall}} \end{displaymath} In our test cases we make use of the $F_{1}$ score, where $\beta = 1$. For a good classification the $F_{1}$ score is close to 100\%. The trained fusion model was used to classify the 30\% validation samples from the FIRST dataset. Table \ref{results} shows the classification results for the FIRST samples in the validation set. The ``support" column shows the number of test samples in each class. Figure \ref{sampleres} shows some of the predictions made by the classification model. The average scores for precision, recall and F1 are calculated as weighted averages from the receiver operating characteristic (ROC) \citep{bradley1997use} of the predictions. From Table \ref{results} we can see that for all validation samples the models show good precision, recall and F1 scores. The average precision is 88\% and the average recall is 86\%, with an F1 score of 86\%. The results of the fusion classifier can be understood as follows. To be assigned a class, a source needs to be identified as belonging to that class in both of the individual classifications in which that particular class features. The bent-tailed radio galaxy classification shows a very high precision at 95\%, meaning that most of the sources labeled as bent-tailed have been identified correctly. The recall for the bent-tailed class is poorer at 79\%, which implies that the algorithm was not able to identify all bent-tailed radio galaxies in the validation sample. The FRI radio galaxy classification shows both high precision and recall; this implies that the network model is able to identify FRI radio galaxies without much confusion. The FRII radio galaxy classifications have excellent recall at 91\%, but poorer precision at 75\% compared to the other two classes. Since the FRI classifications have both high recall and precision, the precision for FRII classification can be directly linked with the recall of bent-tailed sources. This implies that some sources which are being identified as FRII are actually bent-tailed radio galaxies. Figure \ref{bent-misid-frii} shows these sources for our validation sample, many of them showing two or more bright spots. It may be that the algorithm is confused by the bright spots, and that the diffuse emission in these sources did not get the same `weight', leading to the misclassification as FRII radio galaxies. Overall, the results are comparable to manual classification, while being many times faster. This technique, when applied in an iterative manner, would likely reduce the misidentification rate (as seen in the FRII-bent classification) as the sample size available for training increases. The effect of training sample size is shown in Table~\ref{relativeresults}. To study this, we chose a training sample which was $25\%$ of the total training sample and created a classification model with it. The validation sample remained the same across the three classification models.
The next training sample was obtained by increasing this sample by a factor of two and generating a classification model with the new training sample. The results show that the average precision, recall and F1 scores all improve with increasing training sample size. \begin{figure} \gridline{\fig{J0041521+002837_new.eps}{0.2\textwidth}{(a)} \fig{J0714060+510000_new.eps}{0.2\textwidth}{(b)} \fig{J0818013+495610_new.eps}{0.2\textwidth} {(c)} \fig{J0914206+582253_new.eps}{0.2\textwidth}{(d)}} \gridline{\fig{J0916395+052552_new.eps}{0.2\textwidth}{(e)} \fig{J0950070+434400_new.eps}{0.2\textwidth}{(f)} \fig{J1025019+040141_new.eps}{0.2\textwidth}{(g)} \fig{J1055010+520156_new.eps}{0.2\textwidth}{(h)}} \gridline{\fig{J1155134-003137_new.eps}{0.2\textwidth}{(i)} \fig{J1217400+033958_new.eps}{0.2\textwidth}{(j)} \fig{J1317383+191016_new.eps}{0.2\textwidth}{(k)} \fig{J1421410-074334_new.eps}{0.2\textwidth}{(l)}} \gridline{\fig{J1510562+054441_new.eps}{0.2\textwidth}{(m)} \fig{J1616377+422656_new.eps}{0.2\textwidth}{(n)} \fig{J2239024-093235_new.eps}{0.2\textwidth}{(o)} } \caption{Bent-tailed radio galaxies misidentified as FR-II type radio galaxies. The prediction results for the validation set showed low precision with high recall for FR-II type radio galaxies, and high precision with low recall for Bent-tailed radio galaxies. The figure illustrates the effect of this result, with many Bent-tailed radio galaxies misclassified as FR-II radio galaxies.} \label{bent-misid-frii} \end{figure} \begin{table}[!htb] \centering \begin{tabular}{ccccccc} \hline \multirow{2}{*}{\textbf{Class}} & \multicolumn{2}{c}{\textbf{Training Samples}} & \multirow{2}{*}{\textbf{Precision (\%)}} & \multirow{2}{*}{\textbf{Recall (\%)}} & \multirow{2}{*}{\textbf{F1-Score (\%)}} & \multirow{2}{*}{\textbf{Support}} \\ \cline{2-3} & \textbf{Actual} & \textbf{Augmented} & & & & \\ \hline \hline Bent-tailed & 177 & 25488 & 95 & 79 & 87 & 77 \\ \hline FR I & 125 & 36000 & 91 & 91 & 91 & 53 \\ \hline FR II & 227 & 32688 & 75 & 91 & 83 & 57 \\ \hline Average & \multicolumn{2}{l}{} & 88 & 86 & 86 & 187 \\ \hline \end{tabular} \caption{The table shows the class of the source, the size of the training samples for each class, and the Precision, Recall and F1-score of the classification for the validation sample, as well as the support.} \label{results} \end{table} \begin{table}[!htbp] \centering \begin{tabular}{cccc} \hline \multicolumn{1}{c}{\textbf{Relative Sample Size}} & \multicolumn{1}{c}{\textbf{Avg Precision (\%)}} & \multicolumn{1}{c}{\textbf{Avg Recall (\%)}} & \multicolumn{1}{c}{\textbf{Avg F1 Score (\%)}} \\ \hline \hline 25 \% & 54 & 50 & 51 \\ \hline 50 \% & 66 & 65 & 65 \\ \hline \end{tabular} \caption{The table shows the results of variation in the training sample size. The first column shows the training sample size relative to the complete sample described in Table~\ref{results}, and the other three columns give the respective weighted averages of precision, recall and F1 score.} \label{relativeresults} \end{table} \begin{figure*}[!htb] \centering \includegraphics[scale=1]{class-table.eps} \caption{Sample predictions made by the classifier model. The first column shows the name of the object and its coordinates, the middle column the image cut-out, and the right column the true class and the predicted class.} \label{sampleres} \end{figure*} Table \ref{valtable} shows some of the predictions, with their probabilities, for the validation samples, together with their true classes and coordinates.
\begin{deluxetable}{cccccc} \tablecaption{Table of predictions for validation samples \label{valtable}} \tablehead{ \colhead{Source} & \colhead{RA} & \colhead{DEC} & \colhead{True Class} & \colhead{Prediction} & \colhead{Probability} \\ \colhead{} & \colhead{h:m:s} & \colhead{d:m:s} & \colhead{} & \colhead{} & \colhead{(\%)}\\ } \startdata 1426+0093 & 14 26 49.84 & +00 55 59.9 & FRII & FRII & 99.99995 \\ 3C 194 & 08 10 03.67 & +42 28 04.0 & FRII & FRII & 99.99995 \\ 3C 208 & 08 53 08.83 & +13 52 55.3 & FRII & FRII & 99.99735 \\ 3C 228 & 09 50 10.77 & +14 19 57.3 & FRII & FRII & 99.9999 \\ 3C 240 & 10 17 49.77 & +27 32 07.7 & FRII & FRII & 99.99785 \\ 3C 243 & 10 26 31.96 & +06 27 32.7 & FRII & FRII & 100.0 \\ 3C 244.1 & 10 33 33.87 & +58 14 37.9 & FRII & FRII & 99.9998 \\ 3C 251 & 11 08 37.60 & +38 58 42.1 & FRII & FRII & 99.9981 \\ 3C 268.2 & 12 00 59.77 & +31 33 57.9 & FRII & FRII & 99.996 \\ 3C 268.4 & 12 09 13.52 & +43 39 18.7 & FRII & FRII & 99.99425 \\ 3C 277.2 & 12 53 32.70 & +15 42 27.3 & FRII & FRII & 99.98255 \\ 3C 294 & 14 06 44.10 & +34 11 26.2 & FRII & FRII & 99.99975 \\ 3C 322 & 15 35 01.27 & +55 36 49.8 & FRII & FRII & 99.99985 \\ 3C 323.1 & 15 47 44.23 & +20 52 41.0 & FRII & FRII & 99.9888 \\ 3C 336 & 16 24 39.42 & +23 45 17.5 & FRII & FRII & 99.99995 \\ 3C 342 & 16 36 37.38 & +26 48 06.6 & FRII & FRII & 99.9957 \\ 4C -00.55 & 14 23 26.70 & -00 49 56.5 & FRII & FRII & 99.99925 \\ 4C 01.39 & 13 57 01.51 & +01 04 39.7 & FRII & FRII & 99.9999 \\ 4C 03.21 & 11 11 22.71 & +03 09 10.4 & FRII & FRII & 86.14455 \\ 4C 05.53 & 11 48 47.51 & +04 55 27.7 & FRII & FRII & 99.99995 \\ J151056.2+054441 & 15 10 55.851 & +05 44 39.29 & BT & FRII & 99.95345 \\ J151744.96+310015.8 & 15 17 44.96 & +31 00 15.8 & FRI & FRI & 99.9999 \\ J152439.9+620225 & 15 24 42.006 & +62 02 50.93 & BT & BT & 99.9971 \\ J152522.33+314037.1 & 15 25 22.33 & +31 40 37.1 & FRI & FRI & 99.99995 \\ J153522.1+342247 & 15 35 22.994 & +34 23 02.98 & BT & BT & 99.9821 \\ J153616.2+142045 & 15 36 16.805 & +14 20 41.16 & BT & BT & 99.6882 \\ J153932.09+013710.5 & 15 39 32.09 & +01 37 10.5 & FRI & FRI & 100.0 \\ J154549.4-024954 & 15 45 48.671 & -02 49 59.76 & BT & BT & 99.99845 \\ J155222.36+223311.9 & 15 52 22.36 & +22 33 11.9 & FRI & FRI & 99.9942 \\ J155721.38+544015.9 & 15 57 21.38 & +54 40 15.9 & FRI & FRI & 99.9996 \\ J160318.6+192414 & 16 03 18.856 & +19 24 18.13 & BT & BT & 99.97035 \\ \enddata \end{deluxetable} One observation we made during the study was that the convolutional neural network is very sensitive to the preprocessing applied to the images. During the training of the network, we performed sigma-clipping of the images before feeding them to the network. The same procedure has to be applied for predictions with the network. Figure \ref{preeffects} shows validation sample J163401.9+062637 before and after preprocessing. \begin{figure*}[ht!] \centering \gridline{\fig{nosigma.eps}{0.25\textwidth}{(a)} \fig{sigma.eps}{0.25\textwidth}{(b)} } \caption{Sample (J163401.9+062637) from the validation set (a) without any preprocessing and (b) after sigma clipping. The sigma-clipped image on the right has far fewer artefacts and much less background noise than the raw image on the left. This shows the effect of preprocessing the images before they are fed into the classifier.
The sample without the sigma clipping was incorrectly classified during the validation process.} \label{preeffects} \end{figure*} In the example shown in Figure \ref{preeffects}, the sample image was incorrectly classified without sigma clipping and was correctly classified with high confidence after the preprocessing. In this case the actual class label was bent-tailed radio galaxy and the prediction without preprocessing was FRII. Depending on the resolution and noise statistics of an image, sigma-clipping can have slight effects on the final image, which can also affect the predictions. \section{Wider Application of the Deep Learning Model} Machine learning algorithms trained on data from a specific survey have to be retrained to be used on data from other surveys. Shallow machine learning algorithms need to be retrained from scratch for this purpose, which is not realistic in the case of radio astronomy, mainly due to the limited availability of labeled training data. Deep learning methods also suffer from the need to be retrained; however, deep neural networks, especially DCNNs like the one in this work, need not be trained from scratch. The idea of transfer learning discussed in Section \ref{convolutional_neural_network} makes it possible for an already trained network model to be retrained with fewer examples from a different survey. The main idea of transfer learning in the context of radio galaxy morphological classification can be explained as follows. The initial layers of the neural network will have learned basic features like edges and bright spots of the input data. The complicated features are always learned in the last few layers. So in the case of classifying radio galaxies, the initial layers of the network learn the basic shape-related features of the different radio galaxies. From Figure \ref{filter} it is evident that the initial layers of the network have learned the basic features of the radio galaxies. For the three different morphologies discussed in this paper, the basic features will be relatively the same irrespective of the survey. The last few layers, which learn more complex features, would depend on the resolution and other factors which differ between surveys. Therefore it is possible to retrain the network to work on images from other surveys by retraining only the last few layers and freezing the initial layers. There are different variations and methodologies of transfer learning, designed and optimized for various applications. For different methodologies and applications, the number of training samples required for retraining a network will be different. With some of the methodologies discussed in Section \ref{convolutional_neural_network}, the number of training samples needed for retraining with a new dataset is small compared to the original number of samples used to train the model. The number of training samples required is of the order of thousands for typical image processing applications; however, this has not been tested for astronomical applications. All applications of transfer learning found in the literature are done with standard imaging datasets specifically designed for computer vision applications with high signal-to-noise ratios. Since the signal-to-noise ratio of astronomical images is not comparable to those imaging applications, the numbers associated with the training samples may differ slightly for astronomical images.
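To illustrate this recipe concretely, the sketch below freezes the convolutional layers of an AlexNet-style network and replaces only the final fully connected layer with a new two-class layer. We use the PyTorch library here purely for compactness of illustration; in our Caffe setup the equivalent is achieved by setting the learning-rate multipliers of the frozen layers to zero in the network definition. The layer index, learning rate and optimizer settings are illustrative assumptions, not values tested on radio data.
\begin{verbatim}
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

# A pretrained AlexNet-style network stands in for the trained model.
model = models.alexnet(pretrained=True)

# Freeze the convolutional layers, which hold the generic shape features.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last fully connected layer for the new two-class problem.
model.classifier[6] = nn.Linear(4096, 2)

# Only the still-trainable parameters are handed to the optimizer, so
# retraining touches the last few layers only, with a small learning rate.
optimizer = optim.SGD([p for p in model.parameters() if p.requires_grad],
                      lr=0.001, momentum=0.9)
\end{verbatim}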
A demonstration of the use of transfer learning is beyond the scope of this study. This is because, even though the idea of transfer learning is simple, the implementation needs a thorough and systematic analysis, as there is no preexisting study that discusses optimizing transfer learning for radio image data. Transfer learning and fine tuning the network for other surveys depend on many factors such as the number of samples in the new dataset, the selection of layers that need to be retrained, the size of the layers and the learning rate. The network may be prone to over-fitting depending on the size and content of the new dataset. There is no clear guideline, optimized specifically for radio images, on which of the initial layers should be frozen. The assumption for an already trained network is that it has learned the classification problem with high accuracy (above 90\%); therefore retraining will be done with smaller learning rates. But if the accuracy is below a certain threshold, this rule will not hold true. A detailed study on optimizing the implementation of transfer learning for radio images is ongoing and will be published in another paper. Even though these challenges exist, the model that we present here can be used by astronomers not only for classification purposes but also for other applications with data from other surveys. With upcoming telescopes, this will enable easy integration of the automated classification system into their science processing pipelines. \section{Conclusions} \label{conclusions} To summarize, in this study we demonstrate the utility of machine learning techniques in handling large datasets by using deep neural networks to classify images of extended radio galaxies. We use archival data from the FIRST radio survey to train as well as test a convolutional neural network. Initial samples of $\sim 150-200$ sources were used for each class, augmented by rotated versions of these images to train the network. We test the resulting model on a separate validation sample. The results show that the derived model displays good performance across the source categories which we have examined. We find that the precision is highest for the bent-tailed radio galaxies, at $95\%$, whereas it is $91\%$ and $75\%$ respectively for the FRI and FRII classes. The recall is highest for the FRI/II classes at $91\%$ and is $79\%$ for bent-tailed radio galaxies. These results show that neural networks can reliably identify different classes of radio galaxies, are comparable to manual classification while being much faster, and are thus a good technique for source classification and identification when dealing with large image-based datasets. At present, deep learning techniques are performing with unprecedented accuracies for different classification problems. Bringing these techniques to radio astronomy is critical for handling the data from upcoming radio facilities such as the SKA and its precursors. Early methods involving pattern recognition and shallow machine learning are mainly dependent on hand-crafted features, which may not completely capture the properties of the radio galaxies. Our DCNN-based method completely removes the layer of hand-crafted feature extraction and builds an end-to-end machine-based model. This method fully embraces the principle of \textit{learning from data} and is a novel approach in radio astronomy. Another consideration is that of processing time. The time required to classify a single image with this model is less than 0.17 seconds.
Even though the classification is very quick, the inference time for the convolutional neural network can be further improved with faster GPUs and by changing the batch size of the input. Some of the issues we have identified, which pertain to radio astronomy data as well as to the specific methods employed, are as follows. One of the main requirements, and disadvantages, of deep learning models is the large sample size required for training. The level of precision obtained with the present model is mainly dependent on the size of the training samples. Hence, large training samples are essential for the use of `supervised' machine learning methods. Here we have tried to solve the issue by `bootstrapping' the available images to generate a semi-synthetic dataset. However, this may result in a smaller feature space for the neural network to learn from, and results for datasets not originating from the same observations may suffer. Yet another issue is that the techniques used show a heavy dependence on pre-processing. With images from different surveys, the pre-processing will affect the inference of the classifier. Developing machine learning techniques to make inferences in non-stationary environments is still an open problem. Another issue that affects the quality of the trained model is the number of representative samples that each class originally had. We originally had fewer FRI radio galaxy samples compared to FRII and Bent-tailed radio galaxies. Even though the `bootstrapping' generated enough samples to train the network, the numbers of representative samples for each class were different. Therefore the features learned during the training will be confined, for each class, to a limited region of the feature space, making the model less general and in turn reducing the overall accuracy. We tried to push the accuracy limits of the model by generating the synthetic samples and modifying the loss functions, but the success was limited. Since the model allows for transfer learning, this issue can be managed by retraining the model with new samples from future catalogs. We aim to make the code and model publicly available to the community. The Caffe model and associated code for classification will be available in the public domain at https://github.com/ArunAniyan/RadioGalaxyClassification. An online web service which permits a radio image to be uploaded for classification is also under construction. This will also help improve the model, with feedback from the users and by retraining with more samples, enabling astronomers to use the service for research purposes with better accuracy. \section{Acknowledgments} \label{acknowledgements} We thank the Square Kilometer Array South African Project (SKA SA), the SKA SA postgraduate bursary program and the South African Research Chair Initiative (SARChI) program for funding the research project. This research has been conducted using resources provided by the Science and Technology Facilities Council (STFC) through the Newton Fund and SKA Africa. We thank the anonymous referee for the comments and suggestions which have improved the manuscript considerably. We would also like to thank Prof. Oleg Smirnov, Etienne Bonnassieux, Dr. Nadeem Oozeer and Dr. Jasper Horrell for their valuable suggestions and comments. We also thank Dr. Lindsay Magnus for his inputs on the MeerKAT data rates. The authors would also like to thank Dr. Roger Deane for detailed feedback which was instrumental to this work.
\section{Introduction} The spatial resolution achieved with adaptive optics systems, implemented recently on major solar telescopes (Rimmele 2000, Scharmer et al. 2000, Soltau et al. 2002), now approaches the values needed to resolve the intrinsic length scales expected for magnetic, as well as nonmagnetic, structures on the solar surface. The spectacular convergence between numerical simulations (Carlsson et al. 2004, Keller et al. 2004, Steiner 2005) and recent observations of the small scale magnetic field (Lites et al. 2004) provides confidence, both in our theoretical understanding of these structures and in the power of the best current observations to test the theory convincingly. The more puzzling phenomenology of sunspots still awaits a similar breakthrough. The discovery that penumbral filaments have dark cores flanked by lateral brightenings (Scharmer et al. 2002), however, strongly suggests that the fundamental scales of penumbral filaments are now being resolved as well. These recent observations have also highlighted the evolutionary connection of penumbral filaments with umbral dots and light bridges, as well as their morphological similarities. This hints at the possibility of a common underlying structure. The center-to-limb variation of structure in sunspot images provides some geometrical information on vertical structure (e.g. Lites et al. 2004), but most information about the third dimension is encoded in spectral line profiles and their polarization properties. Inversions of these data into a vertical structure model, based on techniques developed by e.g. \ Skumanich and Lites (1987), Ruiz Cobo and del Toro Iniesta (1992) and Frutiger et al. (1999), are very far from unique and must be regularized by assumptions about the vertical structure of the thermodynamics and magnetic field configuration in the atmosphere. Alternatively, forward modeling by radiative transfer in detailed structure models is used (e.g. \ Solanki \& Montavon 1993, Martinez Pillet 2000, M{\"u}ller et al. 2002). On the theoretical side, 3D MHD simulations including full radiative transfer (Stein \& Nordlund 1998, V\"ogler et al. 2003) are able to model increasingly large volumes of the solar atmosphere. Recent progress in the understanding of umbral dots with such simulations (Sch\"ussler \& V\"ogler 2006) raises hopes that convergence between theory and observation of sunspot structure is a realistic prospect for the near future. Much of the current thinking about penumbral structure and Evershed flows is based on 1D MHD simulations of thin magnetic flux tubes assumed to move in a background of different magnetic properties (Schlichenmaier et al. 1998a, b). These simulations have led to the view that the magnetic field in the observable layers of the penumbra is intrinsically far removed from its lowest energy state, the potential field. Interpretations of polarized spectra obtained at low spatial resolution, made from a similar point of view, are the so-called embedded flux tube or `uncombed' penumbra models (Solanki \& Montavon 1993). Spruit and Scharmer (2006, hereafter SS06) have questioned these interpretations of penumbra fine structure. In the layers above the photosphere, the magnetic field is already so dominant that deviations from a potential field configuration are unlikely on the observed time scales of penumbral structure.
The Alfv\'en crossing time on which a magnetic structure changes, unless it is close to a potential or at least a force-free field, would only be of the order of seconds for the embedded tubes of width $\sim 100$ km assumed in the models. It seems unlikely that such embeddings will be found in realistic 3D MHD simulations, such as may become possible in the near future. In SS06 an alternative scenario is proposed for understanding penumbra fine structure, its magnetic field configuration and its energy balance. It assumes the existence of field-free, radially aligned gaps below the visible surface, intruding into a nearly potential field above. In this model, the dark penumbral cores outline the centers of the field-free gaps, analogous to the dark lanes running along light bridges in spots (Berger \& Berdyugina 2003, Lites et al. 2004), and on a smaller scale the `canals' seen in strong field regions outside spots (Scharmer et al. 2002). Remarkably, the dark cores of light bridges have already been reproduced in full 3-D radiative MHD simulations (Nordlund 2005, Heinemann 2006). They are due to the enhanced opacity associated with the higher gas pressure in the field-free gaps, combined with an overall drop of temperature with height. Outlining the `cusps' of these gaps, they appear elevated above, and dark relative to, their surroundings. This is consistent also with the appearance of the dark cores of penumbral filaments. Another important motivation for embedded tube models has been the small length scale on which the inclinations of field lines vary. Vertical length scales of the order of a hundred km above the photosphere, as inferred from observations (Sanchez Almeida \& Lites 1992, Solanki et al. 1993), led to the idea that the penumbral atmosphere could not be the smooth structure expected from a potential field. The assumption of non-potential structure in the form of small-scale inclusions within the height of formation of a spectral line may have seemed straightforward. In SS06 we have shown that this seemingly obvious conclusion is nevertheless erroneous. Not only are small scale variations consistent {\em with}, they are actually an inevitable property {\em of} potential fields. By the nature of the Laplace equation, an inhomogeneity of wavelength $\lambda$ imposed at the boundary of the domain decays into it with an (e-folding) length scale $\lambda/2\pi$. For horizontal structure with $\lambda\sim 1"$ in the penumbra, this predicts a vertical length scale of $\sim$100 km. In this way the interpretation of filaments as gaps, which solves the penumbral heat flux problem, and the observation of small scale field inclination variations mutually support each other. They do so within the elegance of a potential field model, which is expected, on theoretical grounds, to hold approximately in the atmosphere. The shape of a gap (i.e., its width as a function of depth) is determined by the condition of pressure balance between the surrounding magnetized fluid and the nonmagnetic stratification in the gap. This is the same mathematical problem as finding the shape of the sunspot flux bundle from the balance of pressures at its outer boundary (e.g., Jahn and Schmidt 1994). The analytic potential field model used in SS06 took this force balance into account only in the coarsest sense. More quantitative detail is needed when comparing the model with observations.
For example, the top of the gap must have a `cuspy' shape in order for the explanation of dark cores (see above) to hold, and the observed ranges of field line inclinations in the inner and outer penumbra must be reproduced. In this paper we address this with numerical solutions for the field configuration, assuming a pressure stratification in the gap approximating that of the normal convection zone, and a similar but reduced pressure distribution inside the magnetic field. These assumptions lead to models that are distinctly different for the inner and outer penumbra and that allow observed properties of dark-cored filaments to be explained. \section{Embedded flux tubes models} In this section we present some critical thoughts on recent embedded flux tube models. The uncombed penumbra model (Solanki \& Montavon 1993), often also referred to as the embedded flux tube model, was developed in order to explain the strong vertical gradients in the inclination of the magnetic field inferred from Stokes~V spectra (e.g., Sanchez Almeida \& Lites 1992), while avoiding strong curvature forces (Solanki et al. 1993). As explained above, potential fields avoid these problems automatically, removing much of this motivation. Nevertheless, the line of reasoning was straightforward and made contact with existing `magnetoconvection' views of the penumbra, in which vertical displacements of approximately horizontal field lines played a major role. Solanki and Montavon (1993) proposed that these penumbral flux tubes could be modeled as flux tubes with circular cross sections, internal field lines aligned with the flux tube and external field lines wrapping around the flux tube. As pointed out in SS06, such round flux tubes cannot be in magnetostatic equilibrium. The surrounding magnetic field of such a flux tube must vanish at its top and bottom. At the sides of the flux tube, the surrounding magnetic field strength will be increased by the presence of the flux tube. This produces forces that will compress the flux tube horizontally and make it expand upwards at its top and downwards at its bottom. In SS06 we estimated that this will flatten the flux tube in tens of seconds. This is due to the low density and correspondingly high Alfv\'en speed in the line-forming layers. Unsuccessful efforts to construct reasonable magnetostatic flux tube models (Borrero 2004) highlight the fundamental difficulties of such embeddings in the penumbral atmosphere, and furthermore demonstrate that these problems are in no way reduced for {\em thin} flux tubes. The suggestion that magnetostatic equilibrium may be achievable with partly flattened flux tubes (Borrero et al. 2006b) remains a speculation. Our objection to round flux tubes therefore applies also to the moving tube model of Schlichenmaier, implemented in a thin flux tube approximation. At a sufficiently large depth below the surface, where the gas density is high enough, the flattening time scale (the Alfv\'en crossing time) may be large, but the mismatch of time scales will become a problem long before a tube reaches the penumbral photosphere. Long, stable flux tubes, maintaining their identity and extending all the way from the inner to the outer penumbra as in Schlichenmaier's moving tube model, provide a possible explanation for the Evershed flow, but their assumed long-lived identity also constitutes a hindrance to explaining penumbral heating. With the degrees of freedom in existing embedded tube models, they can explain observed polarized spectra (e.g.
Bellot Rubio et al. 2004, Borrero et al. 2004, 2005, 2006a,b, Martinez Pillet 2000). While these inversions represent a significant advance in exploiting spectropolarimetric information, confidence that they substantiate flux tube models is misplaced, since the inversion of line profiles is fundamentally non-unique. While it can rule out classes of models, it cannot be used as positive evidence for a model that fits the data. At a basic level, the very nature of radiative transfer, with broad contribution functions at each wavelength, also makes it very difficult to distinguish discontinuities from smoother variations on the basis of observed (polarized) spectra. These flux tube models, while meant to represent a structure embedded in a surrounding medium, do this in a physically inconsistent way. The radiative transfer model is a single ray intersecting a constant-property flux tube, while the background atmosphere is assumed to be similarly homogeneous. The displacement of field lines needed to accommodate the structure is ignored. An embedding would displace the surrounding magnetic field lines (unless a violation of div ${\bf B}=0$ is assumed), causing inhomogeneities of order unity around it in field strength, field line directions, or both. The agreement with observations obtained with such models is thus of unquantifiable significance. While not necessary for interpreting the observed line profiles, attempts are sometimes made to fit the results into a concept of nearly horizontal tubes extending from the inner penumbra to its outer edge (e.g. Bellot Rubio et al. 2004). The magnetic field of such flux tubes would have to be almost exactly parallel to the $\tau=1$ surface of the penumbra, else they would quickly run out of the line-forming region (cf. Schlichenmaier \& Schmidt 2000, Bellot Rubio et al. 2004). This is in disagreement with measured field line inclinations (e.g. Bellot Rubio et al. 2004, Borrero et al. 2005, Langhans et al. 2006). Attempts to make these measurements agree with the notion of long horizontal tubes are unconvincing. We emphasize that two-component inversions clearly provide indisputable evidence for the existence of large inclination and field strength gradients in the penumbra, and thereby crucial information not attainable by other means. However, such inversions do not allow firm conclusions about the underlying structure responsible for these gradients. \section{Periodic potential field with field-free gaps} Following SS06, we develop our 2D potential field model in Cartesian coordinates. The $z$-coordinate is the vertical direction, $y$ the horizontal direction parallel to the filament (also called here the {\em radial coordinate}), and $x$ the horizontal direction perpendicular to the filament (also called the {\em azimuthal} coordinate). The structure is assumed independent of the $y$-coordinate. This is justified by the fact that penumbral filaments are long compared to their widths. Whereas a 3D model would obviously be more satisfactory, the present relatively simple model is adequate for demonstrating major differences between `gappy' magnetic fields in the inner and outer penumbra and for comparing these models with observations. We assume that there are no field lines entering or leaving the gap, i.e., the discontinuity follows field lines, and that the magnetic field is a potential field outside the gap and identically zero inside the gap.
Because of the periodic field and symmetry assumed (shown in Fig.~1), we can expand $B_x$ as a sine series and $B_z$ as a cosine series \begin{equation} \label{eq:bx0} B_x = \sum\limits_{n=1}^{\infty} f_n(z) \sin (k_n x) \end{equation} and \begin{equation} \label{eq:bz0} B_z = \frac{g_0}{2} + \sum\limits_{n=1}^{\infty} g_n(z) \cos (k_n x) , \end{equation} where the term $g_0/2$ constitutes a height-independent vertical field component equal to the average vertical flux density, \begin{equation} k_n = n \pi /L \end{equation} and $L=S/2$ equals half the separation between two filaments. Since the discontinuity is aligned with a field line, the magnetic field is divergence-free ($\mathbf{\nabla \cdot B} = 0 $) everywhere, which implies that \begin{equation} \label{eq:fngn} f_n = - \frac{1}{k_n} \frac{{\rm d}g_n}{{\rm d}z} . \end{equation} The assumption that the magnetic field is zero inside the gap and a potential field outside the gap implies that the height dependent sine and cosine coefficients can be obtained for $n=0,1,2,\ldots$ as \begin{equation} \label{eq:fn1} f_n(z) = \frac{2}{L} \int\limits_{0}^{L} B_x \sin (k_n x)~{\rm d}x =\frac{2}{L} \int\limits_{x_{\rm g}}^{L} \frac{\partial{\phi}}{\partial{x}} \sin (k_n x)~{\rm d}x \end{equation} and \begin{equation} \label{eq:gn1} g_n(z) = \frac{2}{L} \int\limits_{0}^{L} B_z \cos (k_n x)~{\rm d}x =\frac{2}{L} \int\limits_{x_{\rm g}}^{L} \frac{\partial{\phi}}{\partial{z}} \cos (k_n x)~{\rm d}x , \end{equation} where $x_{\rm g}=x_{\rm g}(z)$ outlines the current sheet constituting the interface between the field-free and magnetic atmospheres. Our goal is to derive a second relation between $f_n$ and $g_n$. Integrating the first equation by parts and using that $\sin(k_n L) = \sin (n \pi) = 0$, we obtain \begin{equation} \label{eq:fn2} f_n = -\frac{2}{L} \phi (x_{\rm g},z) \sin(k_n x_{\rm g}) - \frac{2 k_n}{L} \int\limits_{x_{\rm g}}^{L} \phi \cos (k_n x)~{\rm d}x . \end{equation} To evaluate Eq. (\ref{eq:gn1}), we use that \begin{eqnarray} \frac{\rm d}{{\rm d}z} \left( \int\limits_{x_{\rm g}}^{L} \phi \cos (k_n x)~{\rm d}x \right) \nonumber\\ = \int\limits_{x_{\rm g}}^{L} \frac{\partial{\phi}}{\partial{z}} \cos (k_n x)~{\rm d}x - \frac{{\rm d}x_{\rm g}}{{\rm d}z} \phi (x_{\rm g},z) \cos(k_n x_{\rm g}) , \end{eqnarray} giving \begin{equation} g_n = \frac{2}{L} \frac{\rm d}{{\rm d}z} \left(\int\limits_{x_{\rm g}}^{L} \phi \cos (k_n x)~{\rm d}x \right) + \frac{2}{L} \frac{{\rm d}x_{\rm g}}{{\rm d}z} \phi (x_{\rm g},z) \cos(k_n x_{\rm g}) \end{equation} Comparing to Eq. (\ref{eq:fn2}), we obtain after simplifications \begin{equation} g_n = - \frac{1}{k_n} \frac{{\rm d}f_n}{{\rm d}z} - \frac{2}{L k_n} \frac{{\rm d} \phi(x_{\rm g},z)}{{\rm d}z} \sin (k_n x_{\rm g}) . \end{equation} We introduce the variable $D_{\rm g}$ \begin{equation} D_{\rm g} = \frac{{\rm d} \phi(x_{\rm g},z)}{{\rm d}z} , \end{equation} which can be evaluated as \begin{equation} \label{eq:btz1} D_{\rm g} = \frac{{\rm d}x_{\rm g}}{{\rm d}z} \frac{\partial{\phi}}{\partial{x}} + \frac{\partial{\phi}}{\partial{z}} = \frac{{\rm d}x_{\rm g}}{{\rm d}z} B_{{\rm g}x}(x_{\rm g},z) + B_{{\rm g}z}(x_{\rm g},z) .
\end{equation} Using that the field is tangent to the interface \begin{equation} \frac{B_{{\rm g}x}}{B_{{\rm g}z}} = \frac{{\rm d}x_{\rm g}}{{\rm d}z} \end{equation} to eliminate $B_{{\rm g}x}$, we obtain \begin{equation} D_{\rm g} = B_{{\rm g}z} \left[1 + \left(\frac{{\rm d}x_{\rm g}}{{\rm d}z}\right)^2 \right] , \end{equation} where $B_{{\rm g}z}=B_z(x_{\rm g},z)$ is the vertical magnetic field component at the boundary. Finally, using Eq. (\ref{eq:fngn}) we obtain \begin{equation} \label{eq:gn2} g_n = \frac{1}{k_n^2}\frac{{\rm d}^2g_n}{{\rm d}z^2} - \left[1 + \left(\frac{{\rm d}x_{\rm g}}{{\rm d}z}\right)^2 \right] \frac{2}{L k_n} \sin(k_n x_{\rm g}) B_{{\rm g}z} . \end{equation} This equation can be written as \begin{equation} \label{eq:gn3} g_n = \frac{1}{k_n^2}\frac{{\rm d}^2g_n}{{\rm d}z^2} + \left[ 1 + \left(\frac{{\rm d}x_{\rm g}}{{\rm d}z}\right)^2 \right] s_n B_{{\rm g}z} , \end{equation} where \begin{equation} s_n = - \frac{2}{L k_n} \sin(k_n x_{\rm g}) = \frac{2}{L} \int\limits_{x_{\rm g}}^{L} \cos (k_n x)~{\rm d}x . \end{equation} Comparing to Eq. (\ref{eq:gn1}) shows that the $s_n$ can be identified with the cosine coefficients of a vertical magnetic field of unit amplitude that has no horizontal gradients within $x_{\rm g}<x<2L-x_{\rm g}$ and that is zero outside this interval. We also note that the last term vanishes when $x_{\rm g}(z)=0$. This implies that the magnetic field is potential for all $x$, which is the case above the height where the field-free gap closes. For such heights, the equations for all $g_n$ are uncoupled \begin{equation} g_n = \frac{1}{k_n^2}\frac{{\rm d}^2g_n}{{\rm d}z^2} \end{equation} and the solutions are (eliminating solutions that grow exponentially with height): \begin{equation} \label{eq:gn5} g_n(z) = g_n(z_0) \exp(-k_n (z-z_0)) , \end{equation} where $z_0$ is the height above which the gap is closed for all $x$. At heights where the field-free gap is open, Eq. (\ref{eq:gn2}) shows that the equations for all $g_n$ are {\it coupled} through the $B_{{\rm g}z}$ term. This term can be expressed as a weighted sum of all $g_n$ terms, see below. A direct solution of Eq. (\ref{eq:gn2}) would therefore correspond to a relatively large matrix equation with $M N$ unknowns, where $M$ is the number of depth points and $N$ the number of cosine coefficients used to expand $B_z$ in the $x$-direction. \section{Numerical solution} We introduce a depth grid $(z_1,z_2,\ldots,z_M)$, where $z_1$ corresponds to the lower boundary and $z_M$ to the upper boundary, identified with the {\it first} depth point for which the gap is closed. $\Delta z$ is the grid spacing, and we represent derivatives at depth point $m$ numerically as \begin{equation} \label{eq:d2gn} \frac{{\rm d}^2g_n(z_m)}{{\rm d}z^2} = \frac{1}{\Delta z^2} (g_n(z_{m-1}) - 2g_n(z_m) +g_n(z_{m+1})) . \end{equation} With the boundary condition discussed below and assuming that the shape of the gap is given (this will be determined by force balance across the discontinuity, see Sect. 5), Eq. (\ref{eq:gn3}) can be written as a matrix equation for each $g_n$, \begin{equation} \label{eq:an} \mathbf{A_n \cdot g_n = S_n \cdot B_{{\rm g}z}} , \end{equation} where boldface quantities are either vectors or matrices and $\mathbf{S_n}$ is a diagonal matrix. This can be inverted to express $g_n$ in terms of $B_{{\rm g}z}$ \begin{equation} \label{eq:gn4} \mathbf{g_n = A_n^{-1} \cdot S_n \cdot B_{{\rm g}z}} .
$B_{{\rm g}z}$ can be expressed in terms of $g_n$ as \begin{equation} \label{eq:btz2} B_{{\rm g}z} = g_0 + 2 \sum\limits_{n=1}^{N} g_n(z) \cos (k_n x_{\rm g}(z)) , \end{equation} where $g_0$ is assumed given. Note that at heights where the gap is {\it open}, this equation contains a multiplicative factor of two compared to what is expected from Eq. (\ref{eq:bz0}), because that equation gives a {\it boundary} value that is the average of the values at both sides of the discontinuity. This equation can be represented as a matrix operation \begin{equation} \label{eq:btz3} \mathbf{B_{{\rm g}z} = 2 \sum\limits_{n=1}^{N} C_n \cdot g_n + d + g_0} , \end{equation} where $\mathbf{d}$ represents the lower boundary condition. Combining this equation with Eq. (\ref{eq:gn4}), we obtain a matrix equation for $B_{{\rm g}z}$, \begin{equation} \label{eq:btz4} \mathbf{B_{{\rm g}z} - 2 \sum\limits_{n=1}^{N} ( C_n \cdot A_n^{-1} \cdot S_n) \cdot B_{{\rm g}z} = d + g_0} . \end{equation} This shows that we can build up a matrix equation for $B_{{\rm g}z}$, i.e. involving only the vertical component of the magnetic field {\it along the discontinuity}. Having thus calculated $B_{{\rm g}z}$, the solution for each $g_n$ can be obtained by solving Eq. (\ref{eq:gn4}) for each $n$ separately. We note that the matrix equation for $B_{{\rm g}z}$ allows $x_{\rm g}$ to be chosen arbitrarily; it need not lie at discrete grid points. When combined with the requirement of force balance across the discontinuity (Sect. 5), the equation thus defines a smooth solution $x_{\rm g}(z)$. The lower boundary condition for $g_n$ is easily expressed in terms of a given vertical magnetic field $B_z$ at the lower boundary. A reasonable assumption is that the vertical magnetic field is constant, $B_z = B_0$ for $x_{\rm g}(z_1) < x < 2 L - x_{\rm g}(z_1) $, implying that $g_n = B_0 s_n$ and thereby also fixing the constant value of $g_0$ at the lower boundary. The effects of this approximation can be reduced by increasing the depth of the lower boundary. \section{Magnetostatic potential field model} Because of the potential field assumed, both the field-free and magnetic components of the atmosphere are in hydrostatic equilibrium, but in general with different gas pressures at any given height. Equilibrium across the gap dictates that the sum of gas pressure and magnetic pressure must be continuous across the gap. With a given gas pressure variation with height in the two components, the variation of the field strength along the discontinuity is also given. To satisfy that constraint with an assumed given magnetic field at the lower boundary, the {\it shape} of the discontinuity, i.e. the variation of $x_{\rm g}$ with $z$, must adjust itself to produce the field strength needed to comply with force balance across the discontinuity. This is a free boundary problem, first applied to sunspot models by Schmidt and Wegmann (1983) and later by Jahn and Schmidt (1994). To solve this problem in the context of the present model, we first write \begin{equation} \label{eq:pf} P_{\rm f} = P_{\rm m} + B_{\rm g}^2/(2\mu_0) , \end{equation} where $P_{\rm f}$ and $P_{\rm m}$ are the gas pressures in the field-free and magnetic components respectively and $B_{\rm g}$ is the magnetic field strength along the discontinuity. In addition to its $(x,z)$-components, i.e., in the plane perpendicular to the filament, the magnetic field has a component $B_y$ parallel to it. 
This component is assumed homogeneous, except in the gap, where it vanishes. To specify the problem, the gas pressures $P_{\rm m}$ and $P_{\rm f}$ have to be given as functions of depth $z$. Since the gap communicates directly with the convection zone surrounding the spot, its pressure can be approximated from a model for the mean pressure stratification in the convection zone. The gas pressure in the magnetic field is more uncertain. An important measure for $P_{\rm m}(z)$ is the {\em Wilson depression}, the depth below the normal solar surface of the optical depth unity surface, which, however, is known with some accuracy only for the umbra. The choice of $P_{\rm m}(z)$ also influences the height $z_0$ where the gap closes. To complete the model definition, we have assumed that $z_0=0$ everywhere in the penumbra, that is, the top of the gaps is at the level of the normal solar photosphere. This agrees with the appearance of the bright filaments, in particular their dependence on viewing angle and disk position, but must be considered an assumption subject to future improvements. With this assumption, the Wilson depression $\delta z_{\rm W}$ of the {\it magnetic} component becomes a part of the solution of the problem. In the inner penumbra, we shall find a value of about $300$ km, a plausible value in view of the observed value in the umbra, $\delta z_{\rm Wumbra}\approx 400$~km. A similar value ($300$~km) was also found for the height of a dark cored light bridge by Lites et al. (2004), based on purely geometrical arguments. The magnetic field strength along the boundary is calculated as \begin{equation} \label{eq:bg} B_{\rm g}^2 = B_y^2+B_{{\rm g}x}^2+B_{{\rm g}z}^2=B_y^2+B_{{\rm g}z}^2 \left[1 + \left(\frac{{\rm d}x_{\rm g}}{{\rm d}z}\right)^2\right] . \end{equation} All quantities, here and in the following, refer to conditions along the discontinuity. Combining Eqs. (\ref{eq:pf}) and (\ref{eq:bg}), we can write \begin{equation} E(z)=0 , \end{equation} where \begin{equation} E(z)=B_{{\rm g}z} \left[ 1 + \left(\frac{{\rm d}x_{\rm g}}{{\rm d}z}\right)^2 \right]^{1/2}-(2 \mu_0(P_{\rm f}-P_{\rm m})-B_y^2)^{1/2} . \end{equation} To find the shape of the gap, we have chosen to minimize the integral $\Lambda$ of $E^2$ along the discontinuity with respect to $x_{\rm g}(z)$, \begin{equation} \Lambda = \int\limits_{0}^{s_{\rm max}} E(s)^2 {\rm d}s = \int\limits_{z_{\rm min}}^{z_{\rm max}} E(z)^2 \left[1 + \left(\frac{{\rm d}x_{\rm g}}{{\rm d}z}\right)^2\right]^{1/2} {\rm d}z , \end{equation} where $s$ is a coordinate along the discontinuity and ${\rm d}s=({\rm d}x^2+{\rm d}z^2)^{1/2}$. To achieve this, $x_{\rm g}(z)$ was defined at a small number of nodes and interpolated between the nodes by cubic splines, following Schmidt and Wegmann (1983) and Jahn and Schmidt (1994). $E(z)$ was linearized with respect to small perturbations in $x_{\rm g}$ at the nodes and the linearized equation solved with least squares methods. The solutions converged in $5$--$10$ iterations to errors in $E(z)$ of less than about $1$--$3$~mT. As an approximation to the field-free gas pressure, $P_{\rm f}(z)$, we have taken a polytropic stratification, i.e., a scale height that varies linearly with $z$, such that $H=H_{f1}=390$ km at $z=-500$ km and $H=H_{f2}=160$ km at $z=0$. Over the relevant depth range this is a fair match to a mean solar model. 
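For concreteness, this stratification can be written in closed form: with ${\rm d}P/{\rm d}z=-P/H(z)$ and $H$ linear in $z$, one has $P_{\rm f}(z)=P_{\rm f}(0)\,[H(z)/H(0)]^{-1/b}$ with $b={\rm d}H/{\rm d}z$. The short sketch below (our own illustration; the normalization $P_0$ is arbitrary) evaluates this profile:
\begin{verbatim}
import numpy as np

# Field-free stratification assumed in the text: the scale height
# varies linearly from H = 390 km at z = -500 km to H = 160 km at z = 0.
H1, H2, z1 = 390.0, 160.0, -500.0        # km
b = (H2 - H1) / (0.0 - z1)               # dH/dz = -0.46
P0 = 1.0                                 # pressure at z = 0 (normalized)

def H(z):                                # local scale height (km)
    return H2 + b * z

def P_f(z):                              # hydrostatic: dP/dz = -P/H(z)
    return P0 * (H(z) / H2) ** (-1.0 / b)

for z in np.linspace(-500.0, 0.0, 6):
    print(f"z = {z:6.0f} km  H = {H(z):5.1f} km  P/P0 = {P_f(z):5.2f}")
\end{verbatim}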
For the magnetic atmosphere we assumed that the pressure variation with $z$ was identical to that of the field-free atmosphere, but scaled by a constant $C$. The implied temperature variation with height is thus identical for the two components. The assumption that the gap closes at $z=0$, together with the assumed known pressure variations $P_{\rm f}$ and $P_{\rm m}$, means that the scale factor $C$ is determined by the field strength at the height where the gap closes. Here, the azimuthal field $B_x$ must vanish for symmetry reasons, and force balance across the discontinuity requires that at height $z=0$, \begin{equation} P_{\rm m}(0) = P_{\rm f}(0) - (B_y^2 + B_{{\rm g}z}(0)^2)/(2 \mu_0) . \end{equation} The vertical field component, $B_{{\rm g}z}(0)$, depends on the average vertical field, $\bar B_z$, and the shape of the gap. For a gap with a flat top, $B_{{\rm g}z}(0)$ is close to zero, whereas for a pronounced cusp shape, $B_{{\rm g}z}(0)$ is closer to $\bar B_z$. The assumption that the gap closes at $z=0$ therefore implies a relation between the gas pressure in the magnetic component, the radial field $B_y$, the average vertical field $\bar B_z$ above the surface, and the {\it shape} of the gap. The model used for the gas pressure in the field-free component is such that the magnetic atmosphere is completely evacuated at the top of the gap (at $z=0$) when the field strength is $170$~mT at that height. For stronger magnetic fields, the gap must close {\it below} the height $z=0$. This means that the top of the gap will be associated with a Wilson depression relative to the quiet sun, but does not imply that it will be invisible, since the gas above it has reduced gas pressure, and thereby low opacity. \subsection{Model parameters and properties} Table I lists the parameters of the four magnetic field models discussed in the following. These parameters are related to $\bar B_z$ and $B_y$ through \begin{equation} \bar B_z = \bar B \cos(\bar \gamma) \end{equation} and \begin{equation} B_y = \bar B \sin(\bar \gamma) , \end{equation} where $\bar B$ is the average field strength and $\bar \gamma$ is the average inclination. We have chosen separations $S$ between the filaments that are in the range $500$--$1000$~km, in rough agreement with those found for dark-cored filaments (Langhans 2006). The average magnetic fields and inclinations used are similar to those found by Borrero et al. (2005), discussed in Sect. 6.2. \begin{table}[tbh] \centering \begin{tabular}{llll} \hline \hline \vspace{1mm} Case & $\bar B$ (T) & $S$ (km) & $\bar \gamma$ ($^\circ$) \\ \hline I & 0.10 & 1000 & 75 \\ II & 0.14 & 1000 & 60 \\ III & 0.18 & 1000 & 45 \\ IV & 0.18 & \phantom{0}500 & 45 \\ \hline\hline \end{tabular} \caption{Model parameters. $\bar B$ is the average field strength, $S$ the separation between the filaments and $\bar \gamma$ the average inclination of the magnetic field.} \label{tab:cases} \end{table} Figures 1--3 show the results of these calculations. Figure 1 shows the shape of the discontinuity and the field lines for the calculations made. Also shown, as a dashed horizontal line, is the height at which the gas pressure in the magnetic component equals the gas pressure in the field-free component at $z=0$. In the absence of radiative transfer calculations, this is used as a proxy for the continuum forming layer, referred to in the following as the penumbral photosphere, and therefore also as an indication of the Wilson depression. Figures 2 and 3 show the inclination angle and field strength variations along this photosphere. {\it Case I} corresponds roughly to conditions in the outer penumbra. 
The average magnetic field chosen is strongly inclined (average inclination $75^\circ$ with respect to the vertical) and weak (average field strength $100$~mT); the separation between two gaps, $S$, was set at $1000$~km. Figure 1 shows the shape of the gap and the field lines calculated. We note that the discontinuity is flat-topped over more than $400$ km above the center of the field-free gap. Figure 3 shows the variation of the field strength as a function of $x$ at the photosphere (full) and at $100$~km (short dashes) and $200$~km (long dashes) above the photosphere. The field strength above the center of the gap is identical to that of the radial ($B_y$) component, showing that $B_z$ vanishes above the gap. Figure 2 shows the variation of the magnetic field inclination, calculated as $\tan^{-1} (B_y/B_z)$, with $x$ at $z=0$ (full) and at $z=100$ km (dashed). The inclination varies from $90^\circ$ above the gap to $54^\circ$ midway between two gaps. Thus the magnetic field is associated with large inclination variations but small variations in field strength. The Wilson depression in the magnetic component is about $60$ km. \begin{figure}[htbp] \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg1a.eps} \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg1b.eps} \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg1c.eps} \centering \includegraphics[bb=55 68 740 404, clip, width=1.00\hsize]{6019fg1d.eps} \caption{\small Gap shape (thick lines) and field lines for Case I (top), representing the outer penumbra, Case II representing the mid penumbra and Cases III--IV the inner penumbra. The dashed horizontal line indicates the height where the gas pressure in the magnetic component is equal to that of the field-free component at $z=0$. } \label{cores1} \end{figure} \begin{figure}[htbp] \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg2a.eps} \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg2b.eps} \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg2c.eps} \centering \includegraphics[bb=55 68 740 404, clip, width=1.00\hsize]{6019fg2d.eps} \caption{\small The variation of the magnetic field inclination (from the vertical) with horizontal coordinate $x$ at the penumbral photosphere (full) and $z=100$ km (short dashes) and $z=200$ km (long dashes) above the penumbral photosphere. Case I (top) represents the outer penumbra, Case II the mid penumbra and Cases III--IV the inner penumbra. } \label{cores2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg3a.eps} \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg3b.eps} \centering \includegraphics[bb=55 112 740 404, clip, width=1.00\hsize]{6019fg3c.eps} \centering \includegraphics[bb=55 68 740 404, clip, width=1.00\hsize]{6019fg3d.eps} \caption{\small The variation of field strength with horizontal coordinate $x$ at the penumbral photosphere (full) and $z=100$ km (short dashes) and $z=200$ km (long dashes) above the photosphere. Case I (top) represents the outer penumbra, Case II the mid penumbra and Cases III--IV the inner penumbra. } \label{cores3} \end{figure} {\it Case II} corresponds roughly to conditions in the mid penumbra. The magnetic field is less inclined, $\bar \gamma=60^\circ$, and stronger, $\bar B = 140$~mT. 
Figure 1 shows that the shape of the discontinuity is intermediate between a flat top and a {\it cusp}. However, for Case II the shape of the discontinuity is sufficiently flat that $B_z$ nearly vanishes at the top ($B_{{\rm g}z} \approx 2$~mT), leading to a magnetic field that is nearly horizontal immediately above the center of the gap. Due to the stronger radial magnetic field component compared to Case I, the Wilson depression is increased to $130$ km. {\it Case III} corresponds roughly to conditions in the inner penumbra. The field is even stronger ($180$~mT) and more vertical, $\bar \gamma=45^\circ$, than for Case II. The cusp is now sufficiently pronounced that field lines can easily follow the discontinuity and $B_z$ is therefore non-vanishing, $B_z \approx 44$~mT at the top of the gap. As shown in Fig. 2, the inclination of the magnetic field above the gap is close to $71^\circ$ just above the center of the gap and close to $51^\circ$ at $z=100$ km. The Wilson depression for Case III is larger than for Case II, about $200$ km, due to the increased magnetic pressure above the center of the gap. {\it Case IV} is identical to Case III except that the separation between two nearby gaps has been reduced to $500$ km. The shape of the discontinuity and the magnetic field topology are similar to those of Case III but compressed by a factor of two in the $x$-direction. This leads to a stronger vertical field component and therefore higher field strength at the top of the gap ($B_z \approx 80$~mT), which reduces the gas pressure in the magnetic component and increases the Wilson depression to $310$ km. The gradual transformation from a flat-topped boundary (Case I) into a pronounced cusp-shaped top (Cases III and IV) can be explained by conservation of magnetic flux. The weak {\it vertical} magnetic field of Case I can, while constrained by magnetostatic equilibrium across the discontinuity, be squeezed into a much narrower channel between two field-free gaps than for Cases III and IV. This allows the top of the gap to extend over a larger horizontal distance and a flat top to form for Case I, but not for Cases III--IV. No particular significance should be attached to the fact that the limiting value of the inclination is $90^\circ$ for the calculations shown. This is a direct consequence of our assumption that the field-free gaps are not associated with any Wilson depression. Depending on the (local) radial gradient of that assumed Wilson depression, the limiting inclination will be smaller or larger than $90^\circ$, thereby also allowing field lines that dip down. \subsection{Cusps} As Fig. 1 shows, the gap-tops have a pronounced spike in the inner-penumbra cases: the vertical magnetic field line at the top of the gap `splits in two'. This configuration occurs whenever a field-free plasma penetrates into a magnetic field, and is known in the controlled-fusion literature as a `cusp'. Cuspy configurations like the `stellarator' and the `picket fence' play a role as alternatives to the Tokamak configuration, because of their inherent MHD stability (e.g., Rose \& Clark 1961, Artsimovich 1964, Haines 1977). Consider first the simple case $B_y=0$ and, without loss of generality, ignore the gas pressure inside the magnetic region. Let the $x$-coordinate be such that $x=0$ at the cusp point. By symmetry, $B_x=0$ at $x=0$. The gas pressure in the gap is balanced by the magnetic pressure at its boundary, $P_{\rm g}=B^2/(2\mu_0)$. 
This includes, in particular, the cusp point, where $P_{\rm g}=B_z^2/(2\mu_0)$. On the axis of the gap and crossing the boundary from the inside to the outside of the gap, $B_z$ thus jumps from zero to a finite value at the cusp point. Measured along the field lines, however, all components of $\bf B$ are smooth, continuous functions. Still assuming $B_y=0$, the approximate location $z_{\rm c}$ of the cusp point can be found by balancing the gap pressure $P_{\rm g}(z_{\rm c})$ against the pressure $\bar B_z^2/(2\mu_0)$ of the {\em average} vertical field strength $\bar B_z$, which, unlike the precise value of $B_z$ at the cusp, is known in advance. (This approximation becomes exact in the limit of vanishing gap width.) Since the gas pressure does not vanish at any height in the gap, the gap is always terminated by a cusp, as long as $B_y=0$. The situation is more interesting if $B_y\ne 0$. Define a critical height $z_0$, such that $P_{\rm g}(z_0)=B_y^2/(2\mu_0)$. The shape of the gap top now depends on the location of $z_0$ relative to $z_{\rm c}$. If $z_0$ is above $z_{\rm c}$ (or $B_y\la\bar B$), the gap has a cusp. In the opposite case, the gap pressure at the gap top can be balanced by $B_y^2/(2\mu_0)$. As a consequence, the cusp gradually becomes less pronounced with increasing strength of $B_y$, relative to $\bar B$, and at some value disappears completely. Above this value, $B_z$ vanishes at the top of the gap, and the gap pressure is balanced entirely by $B_y^2/(2\mu_0)$. 
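As a concrete numerical illustration of this balance (a sketch only: the gap pressure profile and the field value below are invented placeholders, not output of the model), the cusp height follows from a one-dimensional root-finding problem:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

mu0 = 4.0e-7 * np.pi                   # vacuum permeability (V s / A m)
Bz_avg = 0.13                          # T; assumed average vertical field

def P_gap(z):                          # toy gap pressure profile (Pa),
    return 1.3e4 * (1.0 - z / 200.0)   # z in km; stands in for the
                                       # convection-zone stratification

# Cusp point: gap pressure balanced by the mean vertical field pressure.
z_c = brentq(lambda z: P_gap(z) - Bz_avg**2 / (2.0 * mu0), -500.0, 200.0)
print(f"cusp height z_c = {z_c:.0f} km")
\end{verbatim}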
\section{Comparison with observations} Several important characteristics of our simple potential field models are consistent with images and magnetograms recorded with the Swedish 1-m Solar Telescope (SST) as well as results of two-component inversions by Borrero et al. (2005). Table II summarizes parameters calculated from the models and discussed in the following. \begin{table}[tbh] \centering \begin{tabular}{lllllll} \hline \hline Case & $\delta z_{\rm W} $ & $\theta_{cr} $ & $B_{\rm f} $ & $B_{\rm m} $ & $\gamma_{\rm f} $ & $ \gamma_{\rm m} $ \\ & (km) & ($^\circ$) & (mT) & (mT) & ($^\circ$) & ($^\circ$) \\ \hline \hline I & 60 & 77 & 100 & 120 & 90 & 54 \\ II & 130 & 57 & 120 & 180 & 89 & 43 \\ III & 200 & 35 & 130 & 230 & 71 & 34 \\ IV & 310 & 21 & 150 & 280 & 58 & 28 \\ \hline\hline \end{tabular} \caption{Model properties calculated. $\delta z_{\rm W}$ is the Wilson depression of the magnetic component, $\theta_{cr}$ the average inclination (from the vertical) of the boundary between the field-free and magnetic components, $B_{\rm f}$ is the field strength above the top of the field-free gap, $B_{\rm m}$ the field strength in the middle of the magnetic component at the penumbral photosphere and $\gamma_{\rm f}$ and $\gamma_{\rm m}$ are the corresponding inclinations of the magnetic field.} \label{tab:props} \end{table} \subsection{Images and magnetograms} We first note that the models imply large differences in the {\it appearance} of the field-free and magnetic components at different radial distances in a sunspot. In the outer penumbra, the Wilson depression is small and the field-free gaps occupy a large fractional area close to the surface. For conditions corresponding to the inner penumbra, the Wilson depression is large, on the order of $200$--$300$~km, and the field-free gaps occupy a smaller fractional area. {\it The models therefore correspond to field-free gaps that appear as elevated and quite distinct structures in the inner penumbra.} This is consistent with the interpretation (Spruit \& Scharmer 2006) that dark-cored filaments (Scharmer et al. 2002) should be identified with field-free gaps and that the dark cores are located at the center of such gaps. The models are also consistent with the observation that dark cores are easily identifiable in the inner, but not outer, penumbra. The steep ``walls'' of the field-free gaps, inclined by about $35^\circ$ for Case III and $21^\circ$ for Case IV, imply that the limb side ``walls'' of the field-free gaps cannot be seen at $\pm 90^\circ$ away from the disk center direction at heliocentric distances larger than approximately $35^\circ$ ($\mu = \cos \theta \approx 0.82$). This is consistent with the observation that dark cores are seen with lateral brightenings on {\it both} sides of the dark core at all azimuth angles only for sunspots that are close to disk center (S\"utterlin et al. 2004, Langhans et al. 2005). The calculated magnetic fields for Cases I--IV have properties that are distinctly different for the inner and outer penumbra and that can be compared to magnetograms. For Case I, corresponding to the outer penumbra, inclination variations are $36^\circ$, but these strong inclination variations are associated with only small fluctuations in the field strength, on the order of $20$~mT. Over a large fraction of the area, the magnetic field is nearly horizontal. For Cases III and IV, corresponding to the inner penumbra, the magnetic field inclination is in the range of $58$--$71^\circ$ above the field-free gap and 28--34$^\circ$ above the magnetic component close to the penumbral photosphere. Because of the cusp-shaped magnetic field in the inner penumbra (Cases III--IV), the inclination of the magnetic field above the gap deviates by about $20^\circ$ from the horizontal plane, in contrast to Case I, where a large fraction of the visible surface is associated with a nearly horizontal magnetic field. For Cases III--IV, the difference in field strength close to the penumbral photosphere above the gap and the center of the magnetic component is large, on the order of $0.10$--$0.13$~T, or a factor $5$--$6$ larger than calculated for the outer penumbra. This agrees with magnetograms of sunspots obtained in the wings of the neutral iron line at $630.2$~nm that show strongly reduced magnetic signal in dark cores as compared to the lateral brightenings in the inner penumbra but much smaller variations in field strength in the outer penumbra (Langhans et al. 2005, 2006). The large variations in field strength we find for the inner penumbra are only partly due to horizontal variations in field strength at a fixed height. {\it A major contribution to these variations in field strength is due to a combination of the Wilson depression and field lines converging with depth}. A possibly significant discrepancy between the magnetograms and our magnetostatic penumbra models is that the inclination changes inferred from the magnetograms are small in the inner penumbra, on the order of $10$--$15^\circ$, whereas the models predict nearly two times larger fluctuations. 
We note that for Case III, these horizontal inclination variations are strongly reduced already $100$ km above the penumbral photosphere, possibly explaining the smaller inclination variations measured from $630.2$~nm magnetograms, but also implying smaller fluctuations in field strength from our model than observed. \subsection{Two-component inversions} Borrero et al. (2005) have analyzed spectropolarimetric data obtained from neutral iron lines at $1.56~\mu$m and interpreted these data within the context of the uncombed penumbra model (Solanki \& Montavon 1993). In contrast to the \ion{Fe}{I} $630.2$ nm line, these NIR lines are formed within a thin layer close to the photosphere, making a comparison with our models reasonably straightforward. Borrero et al. (2005) pointed out that these lines are, due to their low formation height, insensitive to the location of the upper boundary of the assumed flux tube. Therefore, effectively, their model corresponds to two constant-property components. The only exception is in the outer penumbra, where the inversions return a lower boundary for the assumed flux tube that is above the continuum forming layer. We identify the `flux tube' component with the atmosphere above the field-free gap, and their background component with our magnetic component. For the inversions obtained for a sunspot close to sun center (cf. Figs. 5 \& 6 of Borrero et al. 2005), their results, as well as those of Bellot Rubio et al. (2004), then show agreement with our model as regards {\it i)} the inferred variation of the field strength with radial distance for both the flux tube and background components, {\it ii)} the inferred variation of the magnetic field inclination with radial distance for both the flux tube and background components and {\it iii)} the variation of the inferred flux tube fill factor with radial distance in the sense that this fill factor is smaller for the inner than for the outer penumbra. \subsection{Temperature structure} Our model does not contain an energy equation and we cannot make quantitative predictions about the variation of temperature horizontally and with height. Borrero et al. (2005) found that in the inner penumbra, their flux tubes are hotter than the background atmosphere by about $500$~K but in the outer penumbra cooler than the background atmosphere by a similar amount. We note that over the $250$ km height assumed to correspond to the vertical extent of the flux tube, the temperature in their background atmosphere drops by approximately $1400$ K. Since in the inner penumbra the flux tube has much higher gas pressure and therefore much higher opacity than the background atmosphere, the higher temperature found for the flux tube relative to the background at the same height may still be associated with a lower radiation temperature (lower intensity) than for the background atmosphere, as found by Martinez Pillet (2000). If so, the inversions can be consistent with our model also as regards inferred temperatures. \subsection{What about flows?} The model for penumbral gaps presented here only addresses their equilibrium aspects. Flows of various kinds are observed in the penumbra, and the question arises to what extent the gappy penumbra model can accommodate such observations. As pointed out above, the configurations found here have horizontal fields directly overlying the center of the gap, for conditions corresponding to the outer penumbra. 
At first sight, this looks good because the outer penumbra is also the region with the strongest horizontal flows (Evershed flow), thus allowing the simplest interpretation of horizontal, field-aligned flows. On closer inspection this interpretation has problems. The field lines wrapping around the gap are sufficiently horizontal only over a relatively short distance, before turning up into the atmosphere. Such a field line cannot support a steady flow because the rapid decrease of density with height in the atmosphere would require a rapidly diverging flow speed. In SS06 we speculated that, instead, the flow on such field lines is episodic and patchy. Observed Evershed flows do indeed show localized and time-dependent variations but also a stronger, {\it steady} flow component (Shine et al. 1994, Rimmele 1994, Rouppe van der Voort 2003, Rimmele \& Marino 2006). A second problem is that flows are observed not only in the outer penumbra, but also in the inner penumbra, where the measured field inclinations with respect to the horizontal appear to be substantial in all penumbral components (Bellot Rubio et al. 2004, Borrero et al. 2005), including the dark cores (Langhans et al. 2006). It is clear that this is a generic problem (not just in the gappy penumbra interpretation), since it shows that irrespective of the nature of the observed flows, they cannot be at the same time steady and field-aligned (again, because of the decreasing gas density with height). Direct evidence for strong flows in dark cores of filaments, strongly suggesting also a significant vertical velocity component, was first reported by Bellot Rubio et al. (2005). Rimmele and Marino (2006), elaborating on the moving flux tube model, claim that the observed upflows turn into horizontal outflows within fractions of an arcsec. However, an analysis of the azimuthal variation of the measured line-of-sight velocities for the individually resolved flow channels, needed to support interpretations involving horizontal flux tubes, is not presented. Whereas the cospatial nature of flows and dark cores, noted also by Langhans et al. (2006), is indisputable, it has not been demonstrated that such flows are parallel to {\it the visible surfaces of} the dark cores. At the moment there appear to exist only poorly developed ideas for the time dependence and/or inhomogeneity that would be needed to explain the observations. Rimmele and Marino (2006) propose inhomogeneities involving a tangle of flux tubes crossing over each other at different heights in the atmosphere, and interpret this as support for the tube model. The objections to this elaboration are the same as for the original moving tube model (see SS06), but in a more extreme form. On account of the high Alfv\'en speeds in the atmosphere, for example, an inclusion like a tube embedded at an angle with respect to its surroundings is not in equilibrium, and will change on the Alfv\'en crossing time over the width of the tube (seconds, for a diameter of $100$~km). The lack of equilibrium will be even more serious when the tube is replaced by a tangle of narrower ones. The gap model, however, opens opportunities that have not been considered before. On the one hand, the gap contains a convective flow much like granulation: up in the middle and down on the radiating sides of the gap. It also allows for (but does not yet require in the present theory) horizontal flows along the length of the gap. 
These could be driven by the variation of physical conditions from the inner penumbra to the edge of the spot. The moat flow, seen appearing from under the penumbra at its edge, would already be present in the gaps, for example. Both kinds of flow should have an effect on Doppler measurements, since the surface of the gap is so close to $\tau=1$ that the spectral lines used are partly formed in the field-free region. Recent 2D spectrometric data in the non-magnetic \ion{Fe}{I} $557.6$~nm line at 0.5 arcsec resolution indeed suggest that the Evershed flow peaks close to the photosphere (Bellot Rubio et al. 2006). \subsection{Stokes spectra} A crucial test for any penumbra model is its ability to reproduce observed polarized spectra. Of particular importance is to reproduce the asymmetric Stokes-V profiles responsible for the production of net circular polarization at locations where the magnetic field vector is at a large angle with respect to the line-of-sight. Strong field gradients, in combination with gradients in velocity, are needed to reproduce observed spectropolarimetric data (e.g. Sanchez Almeida \& Lites 2002, Solanki \& Montavon 1993, Martinez Pillet 2000). The model presented here predicts strong gradients in both the inclination and azimuth angle along the photosphere as well as in the vertical direction. A more direct test must await detailed radiative transfer calculations and flow models. We note that our conceptually simple model has a more intricate magnetic field configuration than that implemented in inversion techniques based on, e.g., the embedded flux tube model. As shown in Fig.~2, the inclination decreases with height (becomes more vertical with height) above the field-free gap, but above the center of the magnetic component, the inclination increases with height. As shown in Fig.~3, the field strength increases with height above the field-free gap and decreases with height above the magnetic component. We speculate that embedded flux tube inversions may respond to such magnetic field gradients by returning a lower boundary for the flux tube (identified with the field-free gap) that is located slightly above the photosphere, as found by Borrero et al. (2006a) for the mid and outer penumbra. \section{Limitations of the present model} The model, focusing on the gas pressures and the magnetic field configuration, is coarse in terms of the temperature distribution. A self-consistent temperature structure requires a model of the convection within the field-free gap, and radiative transfer to estimate the radiative flux and cooling near the surface as well as heating of the magnetic component. A significant uncertainty is the thermodynamic state of the magnetic flux bundles between the gaps, and the processes that determine their field strength. Another complication is that the field lines are concave towards the field-free gas (much more so in the outer than in the inner penumbra), and the configurations modeled in this paper should therefore in principle be subject to fluting instabilities. We repeat our speculation (SS06) that this instability may have something to do with generating the Evershed flow. Ultimately, 3D MHD simulations will be required to understand these complicated interactions. Our 2D model is such that $B_x$ and $B_z$ are assumed divergence-free, thereby implying that $\partial {B_y}/\partial{y}$ must be zero, excluding gradients of $B_y$ in the radial direction. This is a consequence of using independent 2D models for the inner, mid and outer penumbra. 
A 3D model would remove this inconsistency, which in any case does not affect our main conclusions, summarized below. \section{Conclusions and discussion} We have shown that magnetostatic penumbra models characterized by field-free gaps can be constructed by allowing the shape of the discontinuity to adapt itself to the required force balance between the field-free and magnetic components. This is in contrast to the embedded flux tube model, where the geometry, a round flux tube with internal field lines aligned with the flux tube and external field lines wrapping around the flux tube, is assumed given a priori. As pointed out by SS06 and further discussed in Sect. 2, such flux tubes cannot be in magnetostatic equilibrium. The magnetostatic gappy penumbra model presented here is conceptually simple. We have assumed a potential field interlaced by field-free gaps, a variation of gas pressure with height that is similar to that of the quiet sun for the field-free gap, and a gas pressure in the magnetic component that is simply that of the field-free gap scaled by a constant. Further input quantities of the models are the average field strengths and inclinations typical of the outer, mid and inner penumbra. With these constraints, we have calculated boundary shapes and magnetic field configurations. The calculated models have properties that are distinctly different in the inner and outer penumbra and that agree with observed images and magnetograms. In particular, we find that field-free gaps in the {\it inner} penumbra are cusp-shaped and associated with a magnetic field that is inclined by about $70^\circ$ from the vertical for filaments that are separated by $1000$ km. Here, the magnetic component is associated with a Wilson depression on the order of $200$--$300$~km relative to the field-free component, which makes field-free gaps appear as elevated, distinct features. This large Wilson depression, in combination with field lines converging with depth, explains the large variations in field strength inferred from magnetograms and two-component inversions. The steep walls of the field-free gaps explain why dark-cored penumbral filaments are seen with lateral brightenings on both sides of the dark core at all azimuth angles only for sunspots that are close to disk center. In the {\it outer} penumbra, we find that field-free gaps are associated with flat-topped boundaries and a horizontal magnetic field above the center of the gap. Near the surface, this magnetic field shows large inclination variations horizontally, but only small fluctuations in field strength, in agreement with observations. We associate the atmospheres above our field-free gap and magnetic component with the flux tube and background components, respectively, in the inversions of Bellot Rubio et al. (2004) and Borrero et al. (2005). Our models are then consistent with these inversions as regards the variation of field strength and inclination with distance from the umbra for the two components. Our models also show a widening of the cusp and a gradually reduced Wilson depression towards the outer penumbra, consistent with the gradual widening and fading of dark cores. This is also consistent with a systematic increase of the flux tube filling factor towards the outer penumbra. 
Whereas our calculations were made by assuming potential magnetic fields, the differences between the inner and outer penumbra are fundamentally due to magnetic flux conservation, constrained by magnetostatic equilibrium and the average properties of the magnetic field assumed. Flux conservation and a strong {\it vertical} field in the inner penumbra force the field-free gap to narrow and the magnetic component to widen, as compared to what is the case in the outer penumbra, and a cusp to form. These {\it qualitative} differences between the inner and outer penumbra constitute solid results, not likely to change with more accurate models. More realistic magnetic field configurations, intended for detailed comparisons with observations, may however need to take into account the effects of horizontal temperature gradients, convection and flows. The interpretation of line profiles and polarimetry is frequently formulated in terms of an embedded flux tube paradigm. The practical implementations of such models, however, are generic 2-component inversions, often not particularly consistent with the physics of embedding of a flux tube in a background magnetic field (cf. discussion in Sect. 2). While these inversions therefore cannot be used as support for their embedded tubes, they have allowed general conclusions about magnetic field strength and inclination variations in the penumbra to be drawn from spectropolarimetric data. Since its introduction 13 years ago by Solanki and Montavon (1993), no magnetostatic embedded flux tube model has yet emerged, and there are good physical reasons why a realistic model is unlikely to materialize. The embedded flux tube model thus remains a conceptual cartoon, useful in a restricted sense as a two-component model for quantifying variations of the magnetic field, temperature, line-of-sight velocity and other properties within the resolution element. The moving tube model of Schlichenmaier (1998a, b) leads to a number of predictions that have been successfully tested against observations. In spite of this success, the presence of flux tubes with circular cross sections in the penumbra meets with the same objections as the embedded flux tube models; such flux tubes (if they exist) are more likely to manifest themselves as sheets with an azimuthal thickness much smaller than their vertical extent (Jahn \& Schmidt 1994). Interchange convection in such flux sheets has been suggested as a possible heating mechanism for the penumbra (Jahn \& Schmidt 1994). However, the long measured lifetimes (on the order of $1$~hr) of filaments (e.g. Sobotka \& S\"utterlin 2001, Langhans et al. 2005) as well as of flow channels (Rimmele \& Marino 2006) lead to the conclusion that this is not a viable heating mechanism (Schlichenmaier \& Solanki 2003). While the flux tubes simulated by Schlichenmaier (1998a,b) appear to explain Evershed flows, such flux tubes or flux sheets therefore also pose severe problems for explanations of penumbral heating. As shown by Schlichenmaier and Solanki (2003), individual flux tubes cannot heat penumbral filaments extending over more than approximately $1000$--$2000$~km. Such flux tubes must either submerge to give room for new flow channels, or heating of the submerged part of the flux tube must occur. Schlichenmaier and Solanki (2003), relying on simulations by Schlichenmaier (2002), speculate that the submerged part of the flux tube is heated radiatively by hotter gas below the photosphere, causing it to reappear as a hot upflow channel. 
The efficiency of such radiative heating is restricted to shallow depths below the photosphere. Therefore, this explanation ultimately relies on (efficient) convection to provide the needed heat flux. Evidence to support either of the two scenarios discussed above is absent in the highly resolved images analyzed by Rouppe van der Voort et al. (2004). We also note that Rimmele and Marino (2006) found no evidence for downflows along the observed flow channels. Our model predicts strong vertical and horizontal gradients in both the magnetic field inclination and azimuth angles. In this paper, we have made no attempts to adjust our models to match such gradients with those inferred from inversion techniques. In future work we intend to use our models to calculate synthetic continuum and narrowband images as well as conventional and polarized spectra, using empirically determined temperatures and velocity fields. Our model predicts not only strong magnetic field gradients, but also a strongly warped $\tau = 1$ surface that should enable a number of critical tests with observed high-resolution data. \acknowledgements{We are grateful to the referee, R. Schlichenmaier, for constructive criticism and suggestions for improvements and to B. Lites and A. Nordlund for comments on an earlier version of the manuscript.}
\section{Introduction} Liquid propagation in porous media is a much-studied topic. Research in this area is important, for example, for the petroleum industry \cite{chavent1986mathematical,coats1998compositional}, for groundwater contamination studies \cite{bear2012modeling,pinder1973galerkin} and for road construction \cite{roseen2011water}. A porous medium can be imagined to have pores (cavities) and capillaries (throats) connecting the pores. The invading fluid needs a different external pressure to enter throats or pores of different sizes. This so-called entry pressure can be calculated from the Washburn equation \cite{washburn1921dynamics}% \begin{equation} p=-\dfrac{2\gamma\cos(\theta)}{\rho}, \label{eq_washburn2}% \end{equation} where $\gamma$ is the surface tension of the invading phase, $\theta$ is the contact angle between the non-wetting invading phase and the material, and $\rho$ is the characteristic radius of the capillary (or throat). There are two main computational approaches to model liquid propagation in porous materials \cite{sahimi2011flow}: the continuum and the pore network approach. Commercial packages like Fluent and Comsol utilize the continuum approach, in which the porous material is treated as a volume-averaged continuum. The fairly low computational cost of this approach, however, means that the microscale features of the material are not resolved, limiting the method to problems in which the connectivity of the pore space does not play a major role. Pore network (discrete) modeling resolves the microscale features of the medium at the expense of larger computational cost. The porous medium can be modeled as a graph, where the vertices and edges correspond to the pores and capillaries, respectively. These pore-scale models date back to the work of Fatt \cite{fatt1956network1,fatt1956network2,fatt1956network3}. The transport inside the network is modeled using finite difference schemes. This approach is widely used to simulate multiphase flows in fuel cell electrodes \cite{putz2013openpnm}. OpenPNM (an open-source pore network modeling package) \cite{OpenPNM} also applies this approach. The advantages of pore network modeling compared to the continuum approach are presented in \cite{openpnm2013,openpnm2016}. The distribution of pore sizes is a crucially important property of porous materials. Mercury porosimetry \cite{giesche2006mercury} is a commonly used method to determine the pore size distribution of rock samples. During this process mercury is forced into the samples using increasing external pressure. The volume of the injected mercury as a function of the pressure is the so-called saturation curve. The modeling of mercury porosimetry was first studied by Chatzis and Androutsopoulos \cite{chatzis1977modelling,androutsopoulos1979evaluation,chatzis1985modeling}. An external-pressure-driven, access-limited invasion percolation model called porcolation was introduced in \cite{bak2016porcolation}. The porcolation method can be used for real networks provided that the statistical information required for the network generation is available. According to recent studies \cite{tahmasebi2017image}, the modeling of granular porous media can be done effectively and accurately based on processing \mbox{2D/3D} images. The main objective of the current work is to study the saturation properties of both regular and irregular networks. 
We carried out porcolation simulations on square/cubic networks, on networks based on exotic graphs like the Sierpi\'{n}ski triangle and carpet, and also on localized and completely random networks. Saturation curves were determined with OpenPNM. This paper is structured as follows: in Section \ref{section:simulation_algs} the theoretical background of different percolation models is presented. Section \ref{section:porcolation} presents saturation curves of porcolation simulations on square and cubic networks. In Section \ref{section:exotic-network} saturation curves for Sierpi\'{n}ski triangle and Sierpi\'{n}ski carpet networks are shown. In Section \ref{section:irregular-graphs} the locality properties of irregular 3D networks are investigated with a newly developed graph generation model, and saturation curves are also obtained for networks with different pore degree distributions. Section \ref{section:conclusion} concludes the paper. \section{Percolation models}\label{section:simulation_algs} Percolation theory was introduced in 1957 by Broadbent and Hammersley \cite{BroadbentHammersley1957Percolation}. They investigated how the random properties of a medium influence the percolation of a fluid through it. In the following, four percolation models are presented. \subsection{Ordinary Percolation} There are two fundamentally different types of the ordinary percolation model: bond-percolation and site-percolation \cite{christensen2002percolation} (Figure \ref{fig:lattice}). \begin{figure}[th] \centering \subfigure[]{ \includegraphics[width=0.25\textwidth]{./Figures/lattice.png} \label{fig:lattice-site} } \quad \subfigure[]{ \includegraphics[width=0.25\textwidth]{./Figures/lattice-bond.png} \label{fig:lattice-bond} } \caption{(a) Site-percolation and (b) bond-percolation on a square lattice.}% \label{fig:lattice}% \end{figure} In \textbf{site-percolation} (Figure \ref{fig:lattice-site}) each lattice site is occupied with some probability $P$. Occupied sites having one common side are called neighbors \cite{percolation1994book}, while a group of neighboring occupied sites is called a cluster. Clusters have a crucial role in percolation theory, since the existence of a spanning cluster (a cluster that connects opposite boundaries) means that the invading fluid (the fluid that enters the medium under pressure) can percolate through the medium. If the occupation probability $P$ is small, there is only a slight chance of having a spanning cluster. On the other hand, if $P$ is nearly 1, there will almost certainly be a spanning cluster. The critical value of the occupation probability ($P_\text{crit}$), at which an infinite cluster appears in an infinite lattice, is $0.593$ \cite{gebele1984site} and $0.312$ \cite{grassberger1992numerical} for two-dimensional square and three-dimensional cubic networks, respectively. In \textbf{bond-percolation} (Figure \ref{fig:lattice-bond}) it is not the lattice sites, but the connecting bonds that are occupied with probability $P$. Site- and bond-percolation yield different critical probabilities for the same lattice, but the values can be calculated from each other \cite{Berg1982Note,fisher1961some}. 
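The spanning-cluster criterion is easy to test numerically. The sketch below (our own illustration; lattice size and number of trials are arbitrary choices) estimates the probability that an occupied cluster connects the top and bottom rows of a square lattice:
\begin{verbatim}
import numpy as np
from scipy.ndimage import label

def spans(n, P, rng):
    # One site-percolation trial: occupy sites with probability P and
    # test whether a 4-connected cluster joins the top and bottom rows.
    occupied = rng.random((n, n)) < P
    labels, _ = label(occupied)
    top = set(labels[0, labels[0, :] > 0])
    bottom = set(labels[-1, labels[-1, :] > 0])
    return bool(top & bottom)

rng = np.random.default_rng(1)
for P in (0.55, 0.59, 0.63):
    hits = sum(spans(200, P, rng) for _ in range(100))
    print(f"P = {P:.2f}: spanning probability ~ {hits / 100:.2f}")
\end{verbatim}
Around $P\approx 0.593$ the spanning probability rises steeply from near zero to near one, consistent with the threshold quoted above.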
\subsection{Invasion Percolation} The existence of a spanning cluster in ordinary percolation is a static property; thus, ordinary percolation says nothing about the dynamics of cluster growth (i.e. liquid propagation). Invasion percolation was introduced in \cite{lenormand1980description,wilkinson1983invasion} as a variant of ordinary percolation to describe the dynamics of liquid propagation in porous media. Invasion percolation can also be site- or bond-based. The basic idea of invasion percolation is that every site (or bond) has an invasion resistance value $r \in{} [0,\,1]$. The invading phase starts from a prescribed region (set of sites), and at every step it occupies the most easily \textquotedblleft accessible\textquotedblright\ site, i.e. the lowest-resistance site among the neighbors of already invaded sites. \subsection{Porcolation and Drainage}\label{subsection:por_drain} The porcolation model (PORosimetry perCOLATION) is an access-limited site-percolation model introduced in \cite{bak2016porcolation}. The idea of this model came from porosimetry experiments, where the injection pressure of the invading non-wetting fluid is gradually increased. The sites from which the fluid is injected into the medium are called the starting set. In the porcolation model, each site $s_i$ has a volume $V_{i}$ and an invasion resistance (entry pressure value $p_{i}$), as shown in Figure \ref{fig:porcolation-drainage}. The throat $t_{ij}$ defines the connection between sites $s_i$ and $s_j$. The process is driven by an external pressure $p\in{}[0,\,1]$. For a given pressure $p$, all sites with invasion resistance $p_i\leq{}p$ get occupied, provided they are connected to the starting set through a chain of neighboring sites having resistances $p_i\leq{}p$. Hence, the main difference from invasion percolation is that in porcolation all accessible sites can be invaded simultaneously. \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{./Figures/porcolation-drainage-latexed_V4} \caption{Mapping a porcolation graph to a drainage graph. $t_{ij}$ is the throat connecting vertices $s_{i}$ and $s_{j}$. $V_{i}$ and $V_{j}$ are the pore volumes, while $p_{i}$ and $p_{j}$ are the corresponding pore entry pressure values. The calculated throat entry pressure is $p_{ij}=\max(p_{i},\,p_{j})$.}% \label{fig:porcolation-drainage}% \end{figure} Drainage is an access-limited bond-percolation model. In drainage, sites $s_i$ are connected by throats $t_{ij}$, which have individual entry pressure values $p_{ij}$. In this case the accessible throats are the ones connected to the starting set through already occupied bonds. A site that has a connecting occupied bond is also occupied instantly. The mapping from a porcolation graph to a drainage graph (also shown in Figure~\ref{fig:porcolation-drainage}) is quite simple: assign to each bond the maximum of the two connected pore entry pressure values, i.e. \begin{equation} p_{ij}=\max(p_{i},\,p_{j}). \label{eq_porcolarion-drainage}% \end{equation} \noindent The physical meaning of this equation is that a throat becomes occupied only if both connected pores become occupied. 
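The porcolation rule itself amounts to a breadth-first search restricted to sites with $p_i \leq p$. The sketch below (our own illustration on a square lattice with a top-row starting set; the simulations reported in this paper instead use OpenPNM's drainage algorithm through the mapping above) also returns the volume-based saturation defined in the next section:
\begin{verbatim}
import numpy as np
from collections import deque

def porcolation_saturation(p_entry, volumes, p):
    # Occupy every site with entry pressure <= p that is reachable
    # from the top row through sites that also satisfy p_i <= p.
    n = p_entry.shape[0]
    is_open = p_entry <= p
    occupied = np.zeros_like(is_open)
    queue = deque((0, j) for j in range(n) if is_open[0, j])
    for i, j in queue:
        occupied[i, j] = True
    while queue:
        i, j = queue.popleft()
        for a, b in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
            if 0 <= a < n and 0 <= b < n \
                    and is_open[a, b] and not occupied[a, b]:
                occupied[a, b] = True
                queue.append((a, b))
    return volumes[occupied].sum() / volumes.sum()

rng = np.random.default_rng(0)
p_entry = rng.random((100, 100))     # uniform entry pressures
volumes = np.ones((100, 100))        # unit pore volumes
for p in (0.4, 0.6, 0.8):
    print(p, round(porcolation_saturation(p_entry, volumes, p), 3))
\end{verbatim}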
\section{Porcolation simulations with OpenPNM \label{section:porcolation}} The total volume of pores in the porcolation model is (the index $i$ runs through all the pores) \begin{equation} V_\mathrm{total} = \sum_{i} V_i . \end{equation} \noindent The saturation is the ratio of the occupied volume to the total volume, \begin{equation} S(p) = \frac{1}{V_\mathrm{total} } \sum_{j} \ V_{j}, \quad \text{for all $j$ with $p_j \leq p$ and $s_j$ accessible from the starting set.} \end{equation} The porcolation simulations were carried out in OpenPNM, whose built-in drainage model was used with the correspondence described in Section \ref{subsection:por_drain}, i.e. the throat entry pressure values $p_{ij}$ were obtained from Equation \eqref{eq_porcolarion-drainage}. \subsection{Validation on regular graphs} Porcolation simulations were carried out on $1000 \times 1000$ square and $100 \times 100 \times 100$ cubic lattices ($10^6$ vertices for both). The pore entry pressure values $p_i$ were generated independently and uniformly from $[0,\,1]$. Unit volume was assigned to each pore, i.e. $V_i = 1$. Two different starting sets were considered for both the two-dimensional and three-dimensional cases (Figure \ref{fig:diff_side_percolation}). \begin{figure}[th] \centering \subfigure[]{ \includegraphics[width=0.25\textwidth]{./Figures/porcolation_1_side_mono.png} \label{fig:one_side_percolation} } \quad \subfigure[]{ \includegraphics[width=0.25\textwidth]{./Figures/porcolation_4_side_mono.png} \label{fig:four_side_percolation} } \caption{Porcolation networks with different starting sets: top side of the lattice (a) and full boundary of the lattice (b).}% \label{fig:diff_side_percolation}% \end{figure} Fifty equidistant pressure steps were taken in the $p \in [0,\,1]$ range. 100 simulations were run for each case, taking about 3 hours on a 3\textsuperscript{rd} generation, $3.2$ GHz Intel processor; the CPU time depends only on the number of pores and connections and is independent of the dimension of the graphs. Since the pores have unit volume, the saturation for a given pressure $p$ is simply the ratio of the number of occupied pores to the total number of pores. \begin{figure}[b!] \centering \includegraphics[width=0.7\textwidth]{./Figures/porcolation_large_final_line} \caption{Saturation curves for square and cubic networks.}% \label{fig:porcolation_large}% \end{figure} \begin{figure}[b!] \centering \includegraphics[width=0.7\textwidth]{./Figures/saturation_histogram_around_800_fixed.pdf} \caption{The histogram of the saturation for the square network with one-sided invasion at $p=0.8$.}% \label{fig:porcolation_fluc_5}% \end{figure} The average saturation curves are shown in Figure \ref{fig:porcolation_large}. The inflection points of these saturation curves correspond to the critical probability ($P_\text{crit}$) of ordinary percolation. The inflection point for the $1000^2$ square network for one-sided porcolation is $0.592$ (0.16\% difference from the theoretical 0.593). For the $100^3$ cubic network the inflection point is $0.308$ (1.3\% difference from the theoretical 0.312). \noindent The inflection point of the $1000^2$ square network in the case of four-sided invasion is around $0.596$~($0.51$\% difference), and the inflection point of the $100^3$ cubic network in the case of six-sided invasion is around $0.309$~($0.96$\% difference). Figure \ref{fig:porcolation_fluc_5} shows the histogram of the saturation values for $p=0.8$ (square lattice, one-sided). \clearpage \section{Porcolation on exotic graphs}\label{section:exotic-network} We investigated porcolation on Sierpi\'{n}ski triangle and Sierpi\'{n}ski carpet style graphs. Percolation simulations on the Sierpi\'{n}ski carpet have been applied to financial calculations in \cite{pei2015volatility}. 
Finite realizations of Sierpi\'{n}ski triangle and Sierpi\'{n}ski carpet style graphs are shown in Figures \ref{fig:sierpenski-triangle-network} and \ref{fig:sierpenski-carpet-network}, where the level of the graph is the number of iterations required to build the graph from entities of the previous level. For the simulations, the entry pressure values were generated from a uniform distribution in $[0,\,1]$ and the pore volumes were taken as unity. The starting set was the bottom side of the graphs. The saturation curves for porcolation simulations on Sierpi\'nski triangles with levels 8--13 are presented in Figure \ref{fig:sierpenski-triangle-sat}. The percolation thresholds are significantly higher than for simple square networks. This can be explained by a special characteristic of the Sierpi\'nski triangle. Some vertices (highlighted in Figure \ref{fig:sierpenski-triangle-4} with lighter color and bigger size) are critical from the perspective of porcolation, since they provide the only path into a new region of the graph. These vertices form an articulation set: if they are removed, the graph falls apart; hence the graph is not robust. We also observe that the saturation curves are shifted towards the $p=1$ dimensionless pressure as the graph level is increased. The reason for this shift is also connected to the articulation set, since as the graph level is increased, the number of such critical vertices also increases. The inflection points (corresponding to the percolation thresholds $P_\mathrm{crit}$) of the saturation curves are shown in Figure \ref{fig:sierpenski-triangle-inf}. The occupation of sites for the Sierpi\'nski triangle graph is shown in Figure \ref{fig:sierpenski-triangle-time} at different time steps. Saturation curves of Sierpi\'nski carpets with levels 3--6 are shown in Figure \ref{fig:sierpenski-carpet-sat}. These curves are not shifted towards the $p=1$ dimensionless pressure as the graph level is increased, because these graphs are more robust. The percolation thresholds remain almost the same for different graph levels; they are all in the range $[0.65,\,0.67]$, as shown in Figure \ref{fig:sierpenski-carpet--inf}. The occupation of sites for the Sierpi\'nski carpet graph is shown in Figure \ref{fig:sierpenski-carpet-time} at different time steps. 
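For reference, such graphs can be generated recursively. The sketch below (our own illustration; networkx is used only for bookkeeping, and any adjacency structure would do) subdivides each triangle into three children and keeps the sides of the smallest triangles as throats:
\begin{verbatim}
import networkx as nx

def sierpinski_triangle_graph(level):
    # Vertices are the corner points, edges the sides of the smallest
    # triangles after `level` subdivision steps.
    tris = [((0.0, 0.0), (1.0, 0.0), (0.5, 3**0.5 / 2.0))]
    for _ in range(level):
        new = []
        for a, b, c in tris:
            mab = ((a[0]+b[0])/2, (a[1]+b[1])/2)
            mbc = ((b[0]+c[0])/2, (b[1]+c[1])/2)
            mca = ((c[0]+a[0])/2, (c[1]+a[1])/2)
            new += [(a, mab, mca), (mab, b, mbc), (mca, mbc, c)]
        tris = new
    G = nx.Graph()
    for a, b, c in tris:
        G.add_edges_from([(a, b), (b, c), (c, a)])
    return G

G = sierpinski_triangle_graph(5)
print(G.number_of_nodes(), "vertices,", G.number_of_edges(), "edges")
\end{verbatim}
For level 5 this yields 366 vertices and 729 edges, and the node positions double as coordinates for plotting the fluid distribution snapshots shown below.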
\begin{figure}[h] \centering \subfigure[]{ \includegraphics[height=3cm]{Figures/triangle-1-gray-0-level} \label{fig:sierpenski-triangle-2} } ~ \subfigure[]{ \includegraphics[height=3cm]{Figures/triangle-2-gray} \label{fig:sierpenski-triangle-3} } ~ \subfigure[]{ \includegraphics[height=3cm]{Figures/triangle-4-gray-critical} \label{fig:sierpenski-triangle-4} } \caption{The networks based on Sierpi\'nski triangle for different graph levels: (a) $0^\text{th}$ level, (b) $1^\text{st}$ level, (c) $3^\text{rd}$ level }% \label{fig:sierpenski-triangle-network}% \end{figure} \begin{figure}[h] \centering \subfigure[]{ \includegraphics[height=3cm]{Figures/1-False-gray-handmade} \label{fig:sierpenski-carpet-2} } ~ \subfigure[]{ \includegraphics[height=3cm]{Figures/2-False-gray-handmade} \label{fig:sierpenski-carpet-3} } ~ \subfigure[]{ \includegraphics[height=3cm]{Figures/3-False-gray} \label{fig:sierpenski-carpet-4} } \caption{The networks based on Sierpi\'nski carpet for different graph levels: (a) $1^\text{st}$ level, (b) $2^\text{nd}$ level, (c) $3^\text{rd}$ level } \label{fig:sierpenski-carpet-network}% \end{figure} \begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=0.6\textwidth]{./Figures/sierpinski_triangle_grayscale_with_line_bigger_font_size} \label{fig:sierpenski-triangle-sat}% } ~ \subfigure[]{ \includegraphics[width=0.3\textwidth]{Figures/sierp_triangle_inf_points_bigger_font_size} \label{fig:sierpenski-triangle-inf} } \caption{(a) Saturation curves for Sierpi\'nski triangles of different levels, (b) Percolation thresholds vs. graph level }% \label{fig:sierpenski-triangle-results}% \end{figure} \begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=0.28\textwidth]{./Figures/triangle5-4} \label{fig:sierpenski-triangle-p3}% } ~ \subfigure[]{ \includegraphics[width=0.28\textwidth]{./Figures/triangle5-7} \label{fig:sierpenski-triangle-p6}% } ~ \subfigure[]{ \includegraphics[width=0.28\textwidth]{Figures/triangle5-9} \label{fig:sierpenski-triangle-p8} } \caption{Fluid distribution of Sierpi\'nski triangle with level 5 for different time steps, the corresponding dimensionless pressures are: (a) $p=0.4$, (b) $p=0.7$, and (c) $p=0.9$. The gray vertices are empty, the black vertices are the occupied ones. }% \label{fig:sierpenski-triangle-time}% \end{figure} \begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=0.6\textwidth]{./Figures/sierpinski_carpet_75_grayscale_with_line_bigger_font_size.pdf} \label{fig:sierpenski-carpet-sat}% } ~ \subfigure[]{ \includegraphics[width=0.3\textwidth]{Figures/sierp_carpet_false_inf_points_bigger_font_size} \label{fig:sierpenski-carpet--inf} } \caption{(a) Saturation curves for Sierpi\'nski carpets of different levels, (b) Percolation thresholds vs. graph level }% \label{fig:sierpenski-carpet-results}% \end{figure} \begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=0.27\textwidth]{./Figures/carpet5-5} \label{fig:sierpenski-carpet-p3}% } ~ \subfigure[]{ \includegraphics[width=0.27\textwidth]{./Figures/carpet5-6} \label{fig:sierpenski-carpet-p5}% } ~ \subfigure[]{ \includegraphics[width=0.27\textwidth]{Figures/carpet5-9} \label{fig:sierpenski-carpet-p7} } \caption{Fluid distribution of Sierpi\'nski carpet with level 5 for different time steps, the corresponding dimensionless pressures are: (a) $p=0.5$, (b) $p=0.6$, and (c) $p=0.9$. 
The gray vertices are empty, the black vertices are the occupied ones.}% \label{fig:sierpenski-carpet-time}% \end{figure} \clearpage \section{Porcolation on random graphs}\label{section:irregular-graphs} Real porous media have irregularly distributed pores; therefore, we also studied porcolation on irregular (random) networks with arbitrary pore degree distributions. There are two fundamental ways to generate random networks: edge cutting (removing) and edge adding methods. The edge cutting method starts from an existing network and removes edges until the prescribed pore degree distribution is reached. The edge adding method starts from a set of nodes and adds edges until the prescribed pore degree distribution is obtained. Due to its flexibility, the edge adding method was chosen. \subsection{Random pore network generation} The so-called ``configuration model'' \cite{britton2006generating} creates random networks with a given pore degree distribution. First, the desired number of pores is created, each with a coordination number (pore degree) assigned from the given distribution. The prescribed pore degree at each pore can be imagined as attached ``stubs''. Randomly chosen stubs are connected until each vertex has the prescribed number of neighbors. In real porous media usually only spatially close pores are connected, but this characteristic is not taken into account in the configuration model. A pure graph model does not contain an inherent distance metric; a Euclidean graph (a pure graph embedded into Euclidean space) is the appropriate object to represent a real pore-throat network. \begin{figure}[h] \centering \includegraphics[width=0.7172\textwidth]{./Figures/cell_list_regular_notext} \caption{Possible cases with the modified cell list algorithm: one pore in each cell (left), some pores in each cell (middle), and all pores in one cell (right)}% \label{fig:cell_list_extreme}% \end{figure} To efficiently generate Euclidean pore networks we developed a modified version of the cell list algorithm \cite{allen1990computer}. In this method the 3-dimensional Euclidean space is partitioned into non-overlapping cells (for computational simplicity our cells were cubes). Two cells are called neighbors if their intersection has a positive area. A given number of points (representing pores) are added to each cell. Throats are then added to connect pores only within the same cell or in neighboring cells. There are two extreme cases of the modified cell list algorithm: when only one cell is defined, and when every cell contains only one pore. The single cell case is equivalent to the configuration model, while the case when every pore has its own cell is equivalent to the simple cubic network. These extreme cases are depicted in 2D on the right and left sides of Figure \ref{fig:cell_list_extreme}. The middle part shows the 4 pores/cell setup of the modified cell list algorithm. This method is capable of generating 3D graphs as well. The pure graph created by the configuration model is a ``global'' network because there is no spatial restriction on the neighboring pores, while the one made with the developed cell list algorithm is a ``local'' network because only spatially close pores can be connected. \subsection{Influence of locality on saturation} We can imagine a porous rock sample as pores and throats connecting them. Connected pores generally are not too far from each other; this is the localized nature of real pore networks. Graph locality can be quantified by the statistics of pores in each cell.
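The cell-based generation described above can be sketched as follows. The sketch is schematic only: allowed pairs are connected with a fixed probability $q$ instead of the configuration-model stub matching used for the actual networks, and all names are illustrative:

\begin{verbatim}
import itertools
import random

def local_pore_network(n_cells, pores_per_cell, q=0.5):
    # Pores live in a cubic grid of cells; throats may only connect
    # pores in the same cell or in face-sharing neighbour cells.
    pid = itertools.count()
    cells = {c: [next(pid) for _ in range(pores_per_cell)]
             for c in itertools.product(range(n_cells), repeat=3)}
    faces = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    edges = []
    for c, pores in cells.items():
        partners = list(pores)  # same-cell candidates
        for dx, dy, dz in faces:
            nb = (c[0] + dx, c[1] + dy, c[2] + dz)
            partners += cells.get(nb, [])
        for i in pores:
            for j in partners:
                if i < j and random.random() < q:  # each pair drawn once
                    edges.append((i, j))
    return edges

edges = local_pore_network(n_cells=10, pores_per_cell=4)
\end{verbatim}

With a single cell the routine degenerates to the global, configuration-model-like case, while with one pore per cell only lattice-like local links remain, matching the two extremes of Figure \ref{fig:cell_list_extreme}.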
We examined the effect of graph locality in porcolation simulations using the introduced modified cell list algorithm for graph generation. Simulations were performed on $50^3$ cubic networks (125000 pores) with one invading side, and the results were averaged over 100 simulations. The entry pressure values were generated from a uniform distribution in the range $[0,\,1]$. The 1, 2, 3, 5, 10, and 125000 pores/cell setups were investigated; these resulted in 6, 13, 20, 34, 69, and 124999 possible pore neighbors, respectively (for $k$ pores per cell there are $6k + (k-1)$ possible neighbors, reducing to $k-1$ in the single-cell case). The saturation curves are shown in Figure~\ref{fig:local_sat_curves}. As we see, network locality plays a significant role in porcolation: a more local pore network means higher percolation threshold values. We also conclude that the modified cell list algorithm is capable of generating more realistic pore networks than the original Britton \textit{et al.} algorithm \cite{britton2006generating}. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{./Figures/locality_sat_curve_neigh_6_conn_6_grayscale_finer_step} \caption{Saturation curves for examining the effect of graph locality.}% \label{fig:local_sat_curves}% \end{figure} \subsection{Experiments with different pore degree distributions} The modified cell list algorithm is also capable of generating random networks with a prescribed pore degree distribution. It was tested with 4 different pore degree distributions: $d_0$ is the uniform distribution, $d_1$ emphasizes low pore degrees, $d_2$ emphasizes the middle range, and $d_3$ is a distribution where the high pore degrees are dominant (see Figure \ref{fig:pore_degree_dist}). The 4 pores/cell setup was used for the modified cell list algorithm to generate the pore networks. Simulations were performed on $50^3$ networks with one invading side and the results were averaged over 100 simulations, as before. The entry pressure values were generated from a uniform distribution in the range $[0,\,1]$. The saturation curves are shown in Figure~\ref{fig:pore_degree_sat}. The results for the $d_0$ (uniform) and the $d_2$ (middle dominant) pore degree distributions are almost the same. The results for the $d_1$ (low dominant) and the $d_3$ (high dominant) pore degree distributions are markedly different, as expected. This is caused by the large difference in the total number of connections in the pore networks. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{./Figures/Pore_degree_distributions_discrete_wo_markers} \caption{The discrete pore degree distributions}% \label{fig:pore_degree_dist}% \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{./Figures/Pore_degree_satruration} \caption{Saturation for different pore degree distributions}% \label{fig:pore_degree_sat}% \end{figure} \clearpage \section{Conclusion}\label{section:conclusion} In order to obtain saturation curves for different networks we implemented the porcolation model in OpenPNM with the built-in drainage simulation. The first porcolation simulations were carried out on square and cubic networks to validate the model: the inflection points of the saturation curves correspond well with the theoretical percolation threshold values. The saturation curves were also determined for networks based on the Sierpi\'nski triangle and Sierpi\'nski carpet. If the graph level of the Sierpi\'nski triangle is increased, the inflection point of the saturation curve shifts to higher dimensionless pressure values.
This phenomenon is caused by the increasing number of vertices in the articulation set, whose removal would disconnect the graph. We developed a network generation method (based on the cell list algorithm) which is capable of efficiently generating local pore networks with random pore degree distributions. We showed that the locality of pore networks has a major effect on the saturation curves, and we also performed porcolation simulations on pore networks with random pore degree distributions. \bibliographystyle{unsrt}
\section{Introduction} With a large exciton binding energy of 59 meV \citep{mang1995}, long-lived optical phonons \citep{millot2010}, and a polar structure, zinc oxide (ZnO) has been extensively studied, among other applications, for stimulated light emission \citep{tang1997,wille2016}, as a second harmonic generation based O$_2$ sensor \citep{andersen2014}, and as a matrix for diluted magnetic semiconductors \citep{gilliland2012}. Upon alloying with magnesium, its band gap widens from 3.37 eV to 7.8 eV \citep{ohtomo1998,kumar2013,schleife2011,pantelides1974}, reinforcing its use in solar-blind communication devices \citep{liu2009}. ZnO crystallizes in the hexagonal wurtzite-type structure while MgO crystallizes in the cubic rock-salt-type structure. Therefore, once the solubility limit is reached in the wurtzite-type Mg$_x$Zn$_{1-x}$O solid solution ($\sim$ 4\% in bulk \citep{segnit1965} and $\sim$ 30\% or $\sim$ 50\% in thin films depending on the growth method \citep{kumar2013,redondo2012}), phase separation appears \citep{gries2015} and both rock-salt and wurtzite type phases coexist. Much effort has been devoted to reaching the highest incorporation limit of Mg$^{2+}$ in phase-pure wurtzite-type Mg$_x$Zn$_{1-x}$O. However, the study of the optical properties of Mg$_x$Zn$_{1-x}$O thin films with phase separation has been scarce \citep{thapa2013,huso2014,lopez2015}. Those previous studies find, with optical absorption spectroscopy \citep{thapa2013,lopez2015}, a low-energy absorption tail overlapping the main absorption edge for Mg concentrations above the critical concentration at which phase separation occurs. The origin of this absorption tail, observed in Mg$_x$Zn$_{1-x}$O thin films grown by different methods, has been tentatively explained by some authors \citep{lopez2015} as due to the beginning of the absorption edge of the coexisting rock-salt phase. However, rock-salt MgO, even alloyed with Zn, remains transparent up to energies well above 4.06 eV \citep{segura2003}. Therefore, the origin of the observed low-energy absorption tail remains unclear. \citet{gries2015} successfully employed transmission electron microscopy (TEM) on thermally annealed Mg$_{0.3}$Zn$_{0.7}$O thin films grown by molecular beam epitaxy (MBE) to investigate the microscopic effects of phase separation. They found that the coexistence of the wurtzite and rock-salt phases in Mg$_x$Zn$_{1-x}$O due to phase separation gives rise to a secondary wurtzite-type phase, with a reduced Mg content of $x \approx 0.15$ as determined by TEM energy dispersive x-ray spectroscopy (EDX). The presence of some amount of this segregated wurtzite-type phase could explain the low-energy absorption tail observed in optical absorption studies of Mg$_x$Zn$_{1-x}$O thin films when phase separation occurs. However, neither x-ray diffraction (XRD) nor TEM is well suited for probing phase segregation in as-grown Mg$_x$Zn$_{1-x}$O thin films with phase separation. The volume of this segregated wurtzite phase was too low to be detected by XRD, and it was embedded in the thin film, preventing selective access with TEM. For this reason, \citet{gries2015} had to employ a buffer layer of MgO/ZnO to promote the growth of a segregated wurtzite phase in the Mg$_{x}$Zn$_{1-x}$O thin film, after annealing the sample at 950 $^{\circ}$C, in order to detect the segregated wurtzite phase by means of TEM.
Here we present a spectroscopic approach to probe phase segregation in as-grown, spray pyrolysis (SP) Mg$_{0.3}$Zn$_{0.7}$O thin films by means of optical absorption spectroscopy and photoluminescence (PL) measurements at high pressure and ambient temperature, following the composition-dependent, pressure-induced, irreversible wurtzite to rock-salt phase transition \citep{decremps2002,sans2004,desgreniers1998} and thus avoiding any post-growth treatment of the sample. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figure1.pdf} \caption{\label{fig:fig1} Scanning electron microscope (SEM) images of top (a) and cross-sectional (b) views of an as-grown 250-nm-thick Mg$_{0.3}$Zn$_{0.7}$O thin film on a $c$-oriented sapphire substrate. (c) Normalized x-ray diffractograms of Mg$_{x}$Zn$_{1-x}$O thin films of different Mg contents $x$. The hexagonal (0002)$_w$ and (10$\overline{1}$1)$_w$ reflections of the wurtzite as well as the cubic (111)$_{rs}$ reflection of rock-salt are indicated. The labels shown next to each diffractogram are the Mg concentrations measured in the sample.} \end{figure} \section{Experimental Details} Thin films of Mg$_x$Zn$_{1-x}$O with measured Mg contents of $x$ = 0, 0.06, 0.09, 0.15, 0.22, 0.3, 0.35, and 0.55 were grown by the SP method on $c$-plane oriented sapphire, as explained in the work of \citet{lopez2015}, and on $c$-oriented ScAlMgO$_4$ substrates in the case of $x$ = 0.3 for studies under high pressure. The samples were characterized by scanning electron microscopy (SEM) to examine their morphology and by XRD. The thicknesses of the thin films were between 150 and 500 nm depending on the Mg content. XRD diffractograms were measured using a Bruker D8 Advance A25 diffractometer with the Cu K$\alpha_1$ wavelength. For the optical absorption and the PL measurements we used a deuterium lamp and an all-solid-state pulsed laser at 266 nm with a maximum power of 10 mW, respectively. The transmitted or photoemitted light was detected with a multichannel UV-enhanced spectrometer. For the high-pressure experiments a confocal system with two Cassegrain objectives was employed together with the same UV-Vis spectrometer. In the high-pressure experiments we used the Mg$_{0.3}$Zn$_{0.7}$O sample grown on the ScAlMgO$_4$ substrate, which was exfoliated to a thickness of around 10 $\mu$m. ScAlMgO$_4$ has been shown to have the same compressibility as ZnO \citep{errandonea2011,desgreniers1998}. The sample was loaded in a diamond anvil cell (DAC) equipped with two diamonds with 500 $\mu$m culets, in the center of a 250 $\mu$m hole made in an Inconel gasket preindented to a thickness of 45 $\mu$m. Inside the pressure chamber we placed a ruby chip for pressure calibration \citep{mao1978} and a mixture of methanol-ethanol (4:1) as the pressure transmitting medium. \section{Results and Discussion} In Fig. \ref{fig:fig1} (a) and (b) we show the SEM images of the top and cross-sectional views of the Mg$_{0.3}$Zn$_{0.7}$O thin film grown on the sapphire substrate. One can appreciate that the sample presents the uniform, embedded-leaf morphology typical of ZnO grown by SP. This morphology, only shown for the $x=0.3$ sample for clarity, remains essentially constant, evolving only into a grain-like shape for higher Mg contents. As expected, Mg incorporation results in a shift of the peak position corresponding to the hexagonal (0002) reflection towards higher $2\theta$ angles as a result of the contraction of the $c$ lattice parameter observed before \citep{lopez2015,kumar2013}.
Regarding the (10$\overline{1}$1) reflection, since its position depends on both lattice parameters and $a$ evolves differently from $c$ with Mg incorporation, it remains almost unaffected. Above $x = 0.3$ the peak corresponding to the (0002) reflection broadens, indicative of compositional disorder, and the cubic rock-salt (111) reflection emerges and grows with Mg incorporation. This confirms that our samples are phase-pure wurtzite up to $x = 0.3$, where the onset of phase separation occurs. The presence of spinel (Mg,Zn)Al$_2$O$_4$, known to appear as a spurious phase in some synthesis processes, was detected neither by TEM \citep{lopez2015} nor by XRD. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figure2.pdf} \caption{\label{fig:fig2} (a) Photoluminescence (PL) spectra of some of our Mg$_{x}$Zn$_{1-x}$O thin films with different Mg content $x$. (b) Dependence of the maximum of the PL peak and (c) of the FWHM of the PL peak on Mg content, obtained from fits to an asymmetric Lorentzian function. Continuous lines are guides for the eye. The empty dots are the maxima of the extra PL peak that emerges at lower energy for $x > 0.3$. All measurements were performed with the same integration time in transmittance mode.} \end{figure} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Figure3.pdf} \caption{\label{fig:fig3} (a) Absorption edge of the Mg$_{0.3}$Zn$_{0.7}$O sample deposited on ScAlMgO$_4$ at different pressures. (b) Simulated absorption spectrum (continuous line) of Mg$_{0.3}$Zn$_{0.7}$O at 0.3 GPa according to Elliot-Toyozawa theory \citep{elliot1957,toyozawa1958,goni1990} together with the experimental spectrum (black dots). (c) Energy derivative of the absorption spectra at different pressures. Red continuous lines show the fits to two Gaussians while the dots are the experimental data. (d) PL of the Mg$_{0.3}$Zn$_{0.7}$O sample deposited on ScAlMgO$_4$ at different pressures.} \end{figure*} The PL spectra of Mg$_{x}$Zn$_{1-x}$O thin films with different measured Mg content are shown in Fig. \ref{fig:fig2} together with the dependence of the peak maximum and the full width at half maximum (FWHM) on Mg concentration. As expected, the PL peak of wurtzite-type Mg$_{x}$Zn$_{1-x}$O blueshifts from 3.27 eV for $x=0$ to 3.91 eV for $x=0.35$ [Fig. \ref{fig:fig2} (b)]. Also, from $x=0.3$ onwards, the FWHM of the main PL peak [Fig. \ref{fig:fig2} (c)] starts to broaden under Mg incorporation due to the disorder caused by the phase separation found with XRD [Fig. \ref{fig:fig1} (c)]. According to \citet{gries2015}, the presence of phase separation in Mg$_{x}$Zn$_{1-x}$O would give rise to the appearance of a segregated wurtzite phase with lower Mg content. This would result in the appearance of a second PL peak at lower energies. We do not observe this additional PL peak for $x=0.3$, when phase separation starts in our samples, but above $x=0.3$ an additional peak emerges at $\sim$3.4 eV, an energy that approximately corresponds to the PL of a wurtzite-type Mg$_{x}$Zn$_{1-x}$O sample with measured Mg content of $x=0.09$ [Fig. \ref{fig:fig2} (a)]. Similarly to \citet{gries2015}, we do not find within our resolution any energy change of the 3.4 eV PL peak with Mg content, indicating that in our thin films the equilibrium Mg concentration of the segregated wurtzite phase is $x \approx 0.09$.
However, the question that arises is why we only find the PL peak at $\sim$3.4 eV above $x=0.3$ if phase separation already starts at $x=0.3$ in our as-grown thin films according to XRD [Fig. \ref{fig:fig1} (c)]. We shall address this issue below. As commented before, previous optical absorption spectroscopy studies \citep{thapa2013,lopez2015} on as-grown wurtzite-type Mg$_{0.3}$Zn$_{0.7}$O thin films show that for this Mg concentration a low-energy absorption tail appears, overlapping the main absorption edge of the sample. In Fig. \ref{fig:fig3} (a) we show the absorption edge of our wurtzite-type Mg$_{0.3}$Zn$_{0.7}$O thin film as grown on $c$-oriented ScAlMgO$_4$ with a thickness of 150 nm. Although the presence of the excitonic absorption indicates the high crystalline quality of the sample, the low-energy absorption tail is clear when we compare the experimental data with the absorption edge calculated according to Elliot-Toyozawa theory \citep{elliot1957,toyozawa1958,goni1990} considering a single absorption edge [Fig. \ref{fig:fig3} (b)]. Since the rock-salt phase is transparent in this energy range, the low-energy absorption tail is most probably due to the segregated wurtzite-type phase with $x\approx 0.09$, evidenced by PL for $x=0.35$ and $x=0.55$ in our as-grown thin films. However, the strong overlap of both contributions and the presence of defects, which cannot be disregarded once phase separation starts, introduce an uncertainty in the determination of the band gap that could give rise to the low-energy absorption tail, preventing us from concluding what its origin is. \citet{sans2004} showed that the pressure coefficient of the band gap of wurtzite-type Mg$_{x}$Zn$_{1-x}$O increases with $x$ and that the transition pressure $P_T$ at which the wurtzite phase transforms into the rock-salt phase decreases with $x$. Therefore, by studying the band gap of the wurtzite Mg$_{0.3}$Zn$_{0.7}$O sample with optical absorption spectroscopy and PL under high pressure we are able to i) determine the pressure coefficient of the low-energy absorption tail and ii) isolate the segregated phase, which, having a lower Mg content, would persist in the wurtzite structure when the wurtzite phase with $x\approx 0.3$ transforms to rock-salt. This allows us to confirm in as-grown Mg$_{0.3}$Zn$_{0.7}$O the origin of the low-energy absorption tail, which should present d$E_g$/d$P \approx 25$ meV/GPa \citep{sans2004} if due to a segregated wurtzite phase with $x\approx 0.09$, and to unveil the PL from the segregated phase that should be present as a consequence of the phase separation existing at this Mg concentration [Fig. \ref{fig:fig1} (c)]. The optical absorption spectra of the as-grown Mg$_{0.3}$Zn$_{0.7}$O thin film are shown at different pressures in Fig. \ref{fig:fig3} (a). Up to 7.3 GPa the shape of the absorption edge, including the low-energy tail, is preserved while the edge shifts to higher energies due to the volume contraction. This indicates that both the main absorption edge and the tail have a similar pressure dependence, confirming that the origin of the low-energy absorption tail is a wurtzite phase with lower Mg content. At 7.8 GPa the absorbance of the main absorption edge decreases as a consequence of the onset of the wurtzite to rock-salt phase transition, while the absorbance of the low-energy tail persists up to 10.3 GPa, when only the absorption tail of the rock-salt band gap is observed, indicating the end of the phase transition.
The similar pressure dependence of the low-energy tail and the main absorption edge confirms that the origin of the absorption tail is related to the band gap absorption of a wurtzite phase with lower Mg content. However, the strong overlap does not allow us to reliably quantify its band gap and thus estimate its Mg content. The energy derivative of the absorption spectra can provide an estimation of the relative proportions of both wurtzites with different Mg content and of the pressure dependencies of their band gaps. In Fig. \ref{fig:fig3} (c) we show a collection of d$\alpha$/d$E$ spectra at different pressures. At 0.3 GPa two Gaussian peaks can be clearly observed at $\sim$3.5 and $\sim$4 eV. These peaks in the derivative spectrum correspond to the inflection point of the absorption edge, which roughly occurs at the band gap minus the width of the electronic transition. With this reservation, we can assign the derivative peaks to the absorption edges \citep{lopez2015} for $x = 0.09$ and $x = 0.3$, supporting our previous conclusion that the low-energy tail is due to the absorption edge of the segregated phase with $x \approx 0.09$. Under pressure, the peak due to $x\approx 0.3$ blueshifts faster than the peak due to $x\approx 0.09$ up to 7.8 GPa, where the intensity of the high-energy peak drops until it becomes comparable to the intensity of the low-energy peak, which remains constant up to 9.3 GPa. This indicates that the wurtzite with $x\approx 0.3$ starts transforming to rock-salt at around 7.8 GPa while the segregated wurtzite phase remains unaffected up to at least 9.3 GPa. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figure4.pdf} \caption{\label{fig:fig4} Pressure dependence of the band gap $E_g$ (black) and PL peak (red) of Mg$_{0.3}$Zn$_{0.7}$O. Circles are from the wurtzite with higher Mg content and squares are from the segregated phase. The red square is the PL energy of the segregated phase ($x \approx 0.09$), only visible once wurtzite Mg$_{0.3}$Zn$_{0.7}$O has transformed to rock-salt. The solid symbols represent the band gap obtained by Elliot-Toyozawa theory and the empty symbols the band gap obtained by the energy derivative of the absorption spectra. Red continuous lines are fits to the data points, yielding a d$E_g$/d$P$ of 25 meV/GPa for $x\approx 0.09$, in good agreement with \citet{sans2004}, and of 29 meV/GPa for $x\approx 0.3$.} \end{figure} In Fig. \ref{fig:fig2} we found that while the PL signal from the segregated wurtzite phase is clearly observed for the samples with $x=0.35$ and $x=0.55$, in the case of the sample with $x=0.3$ there is no detectable PL signal from the segregated phase. With the optical absorption experiment performed on the as-grown Mg$_{0.3}$Zn$_{0.7}$O sample we have demonstrated the existence of a segregated phase with a concentration of $x \approx 0.09$. All this indicates that the amount of segregated wurtzite phase in the as-grown Mg$_{0.3}$Zn$_{0.7}$O sample, though visible with optical absorption, cannot be detected with PL. The reason might be that the PL signal of the segregated phase is too weak and is masked by the signal of the dominant peak corresponding to the wurtzite phase with $x\approx 0.3$. If this is the case, at around 7.3 GPa, when according to the optical absorption study the wurtzite phase with $x\approx 0.3$ starts to transform into the rock-salt phase, the signal from the segregated wurtzite phase should emerge. This is what can be seen in Fig. \ref{fig:fig3} (d).
The PL peak of the wurtzite with $x\approx 0.3$ blueshifts with pressure up to 7.4 GPa, when the phase transition occurs and the intensity of the main PL peak drops. At this pressure a weak PL peak emerges at an energy of 3.59 eV. At 9.3 GPa the PL peak from the wurtzite with $x\approx 0.3$ vanishes, while the PL peak at 3.59 eV persists up to 10.3 GPa, when the phase transitions of both wurtzite phases with different Mg concentrations have finished and the rock-salt phase shows no PL signal. According to \citet{sans2004}, a wurtzite Mg$_{0.09}$Zn$_{0.91}$O thin film has a pressure coefficient of 25 meV/GPa. Considering that our segregated wurtzite phase has a PL peak at 3.59 eV at 7.4 GPa, we can extrapolate its ambient-pressure energy as $3.59\,\mathrm{eV} - 25\,\mathrm{meV/GPa} \times 7.4\,\mathrm{GPa} \approx 3.4\,\mathrm{eV}$. That is exactly the energy of the PL peak due to the segregated phase found for $x=0.35$ and $x=0.55$ at ambient conditions, and it corresponds to a sample with $x=0.09$ [Fig. \ref{fig:fig2} (a)]. Finally, in Fig. \ref{fig:fig4} we show the pressure dependence of both the band gap and the PL peak in the wurtzite phase of Mg$_{0.3}$Zn$_{0.7}$O. The observed shift of $\sim 0.3$ eV between the band gap energy and the PL peak indicates that the origin of the PL peak is not intrinsic or excitonic and may be due to a certain compositional disorder. The shift remains constant under pressure, with both techniques providing a pressure coefficient for the band gap of d$E_g$/d$P$ = 29 meV/GPa ($x = 0.3$) and 25 meV/GPa ($x=0.09$). This value extends the dependence of d$E_g$/d$P$ on $x$ from $x = 0.13$ \citep{sans2004} to $x = 0.3$ and shows that d$E_g$/d$P$ saturates for high magnesium contents. \section{Conclusions} In conclusion, we have shown with a spectroscopic high-pressure approach that phase segregation can be probed in as-grown thin films of phase-separated Mg$_{0.3}$Zn$_{0.7}$O, even for small embedded volumes not detected by x-ray diffraction and not accessible by transmission electron microscopy except in annealed samples \citep{gries2015}. We have resolved the controversy about the low-energy absorption tail usually observed overlapping the main absorption edge for $x>0.3$. We have found that it is due to the band gap of the segregated wurtzite phase with $x \approx 0.09$ and not to the tail of the coexisting rock-salt phase \citep{lopez2015}. The present work shows the usefulness of high-pressure optical studies for obtaining relevant information about phase separation effects in semiconductor alloys. \section*{Acknowledgements} V.M.-B and J.R.-F. thank the Universitat de Val\`encia and the Spanish MINECO for the Atracci\'o de talent and Juan de la Cierva (IJCI-2014-20513) Programs, respectively. This work was supported by Spanish MINECO under grants MAT2016-75586-C4-1/3-P and TEC2014-60173, and by Generalitat Valenciana under projects Prometeo II 2015/004 and ISIC/2012/008.
\section{Introduction} \label{sec:intro} Most galaxies in the local Universe are found in galaxy groups \citep[e.g.][]{geller1983,eke2005,robotham2011}, where groups are typically defined as systems with three or more member galaxies and total masses $<\!10^{14}\,\mathrm{M_\odot}$ \citep[e.g.][]{mamon2007,connelly2012}. Given the large number of galaxies in these systems, understanding the impact of the group environment on galaxy properties is critical for understanding the evolution of galaxies in the local Universe. \par Compared to the low density field, galaxy groups host a higher proportion of red, passive, gas-poor, early type galaxies, but groups still host more star-forming, gas-rich, late type galaxies than massive galaxy clusters \citep[e.g.][]{wilman2005,blanton2009,mcgee2011,wetzel2012,brown2017}. This makes groups an intermediate environment between clusters and the field, where environment has started to affect the properties of member galaxies, but not to the extent where groups are dominated by galaxies on the red sequence. In fact, groups likely play a significant role in the build-up of the cluster red sequence through the process of ``pre-processing'' \citep[e.g.][]{fujita2004}. Specifically, since structure growth is hierarchical, massive galaxy clusters are assembled through mergers with galaxy groups that deposit new galaxies into the cluster. Roughly half of present day cluster galaxies may have joined their cluster as part of a lower mass group \citep[e.g.][]{mcgee2009,delucia2012,bahe2013}. Furthermore, galaxy quenched fractions around clusters are enhanced relative to the field even at the virial radius and beyond, consistent with an environmental effect on star formation prior to cluster infall \citep[e.g.][]{vonderlinden2010,wetzel2012,haines2015,roberts2017,bianconi2018,olave2018,roberts2019}. It is important to note, though, that some galaxies beyond the virial radius will not be infalling for the first time, but will instead be backsplashing after already passing pericentre \citep[e.g.][]{mahajan2011,oman2013}. Disentangling the contributions of infalling galaxies and backsplash galaxies is critical for constraining the effects of pre-processing in the cluster outskirts. \par One key question is whether the dominant quenching mechanisms differ in groups compared to clusters. Recently, many works have argued that ram pressure stripping (RPS) plays an important role in quenching star formation in galaxy clusters \citep[e.g.][]{muzzin2014,brown2017,vanderburg2018,maier2019,roberts2019,ciocan2020}. Ram pressure can quench galaxies either by directly stripping cold, star-forming gas from the disk \citep{vollmer2012,jachym2014,lee2017,lee2018,jachym2019,moretti2020}, or by stripping the more diffuse atomic gas \citep{kenney2004,chung2007,chung2009,kenney2015,yun2019}, which will leave the galaxy quenched once it exhausts its remaining molecular gas reserves. In some examples of RPS, referred to as `jellyfish galaxies', tails (or `tentacles') of stripped material are observed trailing the galaxy opposite to the direction of motion \citep[e.g.][]{poggianti2017,boselli2018}. The strength of ram pressure scales with $\rho_\mathrm{ICM} v^2$, where $\rho_\mathrm{ICM}$ is the density of the intracluster medium (ICM) and $v$ is the relative velocity between galaxies and the ICM. On average, both the density of the ICM and galaxy velocities are higher in clusters than in groups; therefore, ram pressure will be stronger in massive clusters than in lower mass groups.
This raises the question of whether ram pressure in groups is strong enough to efficiently strip gas from galaxies. \par There are some examples of RPS in groups in the literature, one being the starburst galaxy NGC 2276 in the NGC 2300 galaxy group. NGC 2276 has a gas tail likely produced by RPS, though it is also tidally interacting with NGC 2300. The stripped tail is apparent in the radio continuum ($\sim\!1.4\,\mathrm{GHz}$, \citealt{davis1997}) and at X-ray wavelengths \citep{rasmussen2006,wolter2015}. NGC 2276 also shows a bow shock front opposite to the tail, with elevated radio continuum emission, $\mathrm{H\alpha}$ emission, and a large number of bright X-ray sources along the leading edge \citep{davis1997,wolter2015}. \citet{rasmussen2006} and \citet{wolter2015} conclude that ram pressure (along with viscous effects) is responsible for both the disturbed morphology and the high star formation rate in NGC 2276. Another example of a group galaxy with a long X-ray tail is NGC 6872 in the Pavo Group. \citet{machacek2005} suggest that this $90\,\mathrm{kpc}$ tail could be a result of ram pressure and/or viscous stripping in the group environment. A few more studies have found `comet-like' \textsc{Hi} morphologies for galaxies in groups \citep{bureau2002,mcconnachie2007}, which are likely driven by RPS. In particular, a recent MeerKAT study of the Fornax A group \citep{kleiner2021} presents evidence for 9 galaxies in the midst of being pre-processed prior to accretion onto the Fornax cluster. Some of these galaxies display \textsc{Hi} deficiencies as well as \textsc{Hi} morphologies consistent with RPS \citep{kleiner2021}. Finally, evidence for RPS in groups has also been presented in the form of gas disks which are truncated relative to the stellar component, consistent with RPS removing gas from the outside-in \citep{sengupta2007,vulcani2018}. \par These previous works show that RPS occurs in at least some galaxy groups, though the small number of galaxies identified thus far makes it difficult to contrast the prevalence and effectiveness of RPS in groups versus clusters. Recently, \citet{roberts2021_CFIS} performed a search for ram pressure candidates in SDSS groups and clusters with optical imaging from the Canada-France Imaging Survey \citep{ibata2017}. \citet{roberts2021_CFIS} identify $\sim\!30$ ram pressure candidate galaxies in groups ($M_\mathrm{halo} < 10^{14}\,\mathrm{M_\odot}$), but there remain uncertainties related to the accuracy of ram pressure identifications from optical imaging alone, given that the stellar disk may not always be strongly perturbed by ram pressure. In \citet{roberts2021_LOFARclust} (hereafter \citetalias{roberts2021_LOFARclust}) we presented a sample of $\sim$100 jellyfish galaxies in nearby ($z<0.05$) galaxy clusters, identified from 144 MHz radio continuum tails in the LOFAR Two-metre Sky Survey (LoTSS, \citealt{shimwell2017,shimwell2019}). At 144 MHz, LOFAR \citep{vanhaarlem2013} is sensitive to synchrotron emission from cosmic rays accelerated by supernovae. For galaxies experiencing strong ram pressure, these cosmic rays can be stripped out of the galaxy and detected as RPS tails in the radio continuum \citep[e.g.][]{gavazzi1987,murphy2009,chen2020}, giving reliable identifications of jellyfish galaxies.
The largest assets of LoTSS are its high-resolution ($\sim\!6''$) and high-sensitivity ($\sim\!100\,\mathrm{\mu Jy/beam}$) observations over extremely wide fields, which upon survey completion will include the entire northern extragalactic sky. Such a uniform, wide-field survey is ideal for a comprehensive search for jellyfish galaxies in low-redshift groups, especially given that jellyfish galaxies may be rarer in groups than in clusters, meaning that a search likely needs to cover a large number of groups in order to build a significant sample. \par The purpose of this work is twofold: (a) to perform a comprehensive search for RPS in galaxy groups and determine how common RPS is in groups compared to clusters, and (b) to test whether the properties of jellyfish galaxies in groups differ systematically from the properties of jellyfish galaxies in clusters. With the 144 MHz radio continuum from LoTSS, we identify 60 jellyfish galaxies across a sample of 498 SDSS galaxy groups. This is by far the most comprehensive search for jellyfish galaxies in groups to date. In Section~\ref{sec:data} we describe the datasets that we use as well as the methods for identifying jellyfish galaxies. In Section~\ref{sec:jellyfish_freq} we consider how the frequency of jellyfish galaxies depends on halo mass, ranging from low-mass groups to massive clusters. In Section~\ref{sec:orbital_hist} we constrain the orbital histories of group and cluster jellyfish galaxies, using both tail orientations and positions in projected phase space. In Section~\ref{sec:sfr} we test whether the star formation enhancement observed for LoTSS jellyfish galaxies in clusters \citepalias{roberts2021_LOFARclust} is also present for jellyfish galaxies in groups. Finally, in Sections \ref{sec:disc_conc} \& \ref{sec:summary} we give a brief discussion and summarize the main conclusions of this work. Throughout, we assume a $\mathrm{\Lambda}$ cold dark matter cosmology with $\Omega_M=0.3$, $\Omega_\Lambda=0.7$, and $H_0=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$. \section{Data \& methods} \label{sec:data} \subsection{Group and cluster samples} \label{sec:grp_sample} \begin{figure} \centering \includegraphics[width=\columnwidth]{example_overlay.pdf} \caption{Optical $grz$ (DESI Legacy Survey, \citealt{dey2019}) image with LOFAR $144\,\mathrm{MHz}$ contours overlaid for KUG 0930+342, a jellyfish galaxy in a $1\times10^{13}\,\mathrm{M_\odot}$ galaxy group. Contours correspond to $2\times$, $4\times$, $8\times$, $16\times$, and $32\times$ the $144\,\mathrm{MHz}$ rms.} \label{fig:example_img} \end{figure} In this work we follow a similar methodology to \citetalias{roberts2021_LOFARclust} but focus on lower mass galaxy groups instead of galaxy clusters. Our parent sample of galaxy groups comes from the \citet{lim2017} (hereafter \citetalias{lim2017}) SDSS group catalogue. The \citetalias{lim2017} catalogue uses a group finder similar to that of the \citet{yang2005,yang2007} group catalogues but with improved halo mass estimates, especially for low-mass systems. Group masses in \citetalias{lim2017} are determined using abundance matching with a `halo mass proxy' that depends on both the stellar mass of the central galaxy and the stellar mass gap between the central galaxy and the $n$-th brightest satellite. Comparisons to mocks show that this procedure typically reproduces the true halo masses without bias and with a typical uncertainty of $0.2\,\mathrm{dex}$ \citepalias{lim2017}.
From these halo masses, $M_\mathrm{halo}$, virial radii, $R_{180}$, and velocity dispersions, $\sigma$, for each group are estimated as \citepalias{lim2017} \begin{equation} R_{180} = 1.33\,h^{-1}\,\mathrm{Mpc}\,\left(\frac{M_\mathrm{halo}}{10^{14}\,h^{-1}\,\mathrm{M_\odot}}\right)^{1/3} (1 + z_\mathrm{grp})^{-1} \end{equation} \noindent and \begin{equation} \sigma = 418\,\mathrm{km\,s^{-1}}\,\left(\frac{M_\mathrm{halo}}{10^{14}\,h^{-1}\,\mathrm{M_\odot}}\right)^{0.3367}. \end{equation} \noindent For our group sample we include all groups from the \citetalias{lim2017} catalogue that overlap with the $\sim\!5700\,\mathrm{deg^2}$ LoTSS DR2 (Shimwell et al. in prep.) footprint and have: halo masses in the range $10^{12.5} < M_\mathrm{halo} < 10^{14}\,h^{-1}\,\mathrm{M_\odot}$, group redshifts of $z_\mathrm{grp}<0.05$, and at least five member galaxies ($N_\mathrm{galaxy} \ge 5$) in the \citetalias{lim2017} catalogue. The redshift limit of $z<0.05$ is chosen to match that of the cluster sample in \citetalias{roberts2021_LOFARclust}, which allows us to make comparisons between the properties of jellyfish galaxies in groups versus clusters. \begin{figure} \centering \includegraphics[width=\columnwidth]{z_Mh.pdf} \caption{\textit{Top:} Redshift distribution for the sample of groups (purple, solid) and clusters (red, dashed). \textit{Bottom:} Halo mass distribution for the sample of groups (purple, solid) and clusters (red, dashed).} \label{fig:z_Mh} \end{figure} \par The \citetalias{roberts2021_LOFARclust} sample consists of 29 X-ray detected clusters from \citet{wang2014b} with $M_\mathrm{halo} \ge 10^{14}\,h^{-1}\,\mathrm{M_\odot}$ and $z<0.05$ that have been observed by LOFAR at 144 MHz. A detailed description of the cluster sample is given in \citetalias{roberts2021_LOFARclust}. In Fig.~\ref{fig:z_Mh} we show the distribution of redshifts and halo masses for both the groups ($M_\mathrm{halo} < 10^{14}\,h^{-1}\,\mathrm{M_\odot}$) and the clusters ($M_\mathrm{halo} \ge 10^{14}\,h^{-1}\,\mathrm{M_\odot}$) in the sample. \subsection{Galaxy samples} \label{sec:galaxy_sample} \begin{table*} \centering \caption{Number of galaxies in various samples.} \begin{threeparttable} \begin{tabular}{l c c c c} \toprule Galaxy sample & Low-mass & Intermediate-mass & High-mass & Clusters\tnote{d} \\ & groups\tnote{a} & groups\tnote{b} & groups\tnote{c} & \\ \midrule SDSS galaxies & 1122 & 1371 & 1000 & 1968 \\ LoTSS galaxies & 378 & 382 & 286 & 405 \\ Jellyfish galaxies & 14 & 15 & 31 & 77 \\ \bottomrule \end{tabular} \begin{tablenotes} \item[a] $10^{12.5} \le M_\mathrm{halo} < 10^{13}\,\mathrm{M_\odot}$\\ \item[b] $10^{13} \le M_\mathrm{halo} < 10^{13.5}\,\mathrm{M_\odot}$\\ \item[c] $10^{13.5} \le M_\mathrm{halo} < 10^{14}\,\mathrm{M_\odot}$\\ \item[d] $M_\mathrm{halo} \ge 10^{14}\,\mathrm{M_\odot}$ \end{tablenotes} \end{threeparttable} \label{tab:galaxy_samples} \end{table*} \subsubsection{Group member galaxies} \label{sec:group_galaxies} For galaxies, we adopt a `loose' membership criterion (similar to \citealt{roberts2020}; \citetalias{roberts2021_LOFARclust}) where we include as group members all galaxies that are within $1\times R_{180}$ of the stellar-mass-weighted group centre and $3\times \sigma$ of the group redshift. This ensures that we do not miss satellite galaxies at large velocity offsets, as is the case for many jellyfish galaxies \citep[e.g.][]{yoon2017,jaffe2018}.
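For concreteness, the two scaling relations above and this loose membership cut can be combined into a short helper (a sketch only; function and variable names are illustrative, and all quantities follow the $h^{-1}$ units of the relations):

\begin{verbatim}
def r180_sigma(m_halo, z_grp):
    # R_180 [h^-1 Mpc] and sigma [km/s] from M_halo [h^-1 Msun],
    # following the two relations quoted above.
    m = m_halo / 1e14
    return 1.33 * m ** (1.0 / 3.0) / (1.0 + z_grp), 418.0 * m ** 0.3367

def is_loose_member(r_proj, dv, m_halo, z_grp):
    # r_proj: projected distance from the stellar-mass-weighted group
    # centre [h^-1 Mpc]; dv: velocity offset from the group redshift [km/s]
    r180, sigma = r180_sigma(m_halo, z_grp)
    return (r_proj < r180) and (abs(dv) < 3.0 * sigma)
\end{verbatim}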
Any galaxies that pass the membership criteria for multiple groups (this is only the case for <3\% of the galaxy sample) are assigned as members to the group that they are closest to in units of $R_{180}$. To ensure a pure sample of galaxies in groups (i.e. $M_\mathrm{halo} < 10^{14}\,h^{-1}\,\mathrm{M_\odot}$), we also exclude any galaxies that are within $3 \times R_{180}$ in angular separation and $3000\,\mathrm{km\,s^{-1}}$ in redshift of any cluster in the \citetalias{lim2017} catalogue (where we consider clusters to have $M_\mathrm{halo} \ge 10^{14}\,h^{-1}\,\mathrm{M_\odot}$). For the galaxy sample we use stellar masses and star formation rates (SFRs) from the GSWLC-2 catalogue \citep{salim2016,salim2018}, determined by fitting galaxy SEDs, including UV, optical, and mid-IR fluxes, with the \textsc{cigale} code \citep{boquien2019}. This paper focuses on actively star-forming galaxies, which we define as those galaxies with specific star formation rates $>\!10^{-11}\,\mathrm{yr^{-1}}$ (where $\mathrm{sSFR} = \mathrm{SFR} / M_\mathrm{star}$). In total, the above selections amount to a sample of 3493 star-forming `SDSS group galaxies' across 498 groups. \par From this sample of SDSS group galaxies, we use the forthcoming LoTSS DR2 source catalogue (see \citealt{williams2019} for a description of the public LoTSS DR1 source catalogues) to find those galaxies that are also detected in LoTSS at 144 MHz. We cross match the positions of SDSS group galaxies with the positions of LoTSS sources and keep any matches with separations $<\!3''$, which corresponds to the HWHM of the LoTSS beam. This gives a sample of 1048 star-forming group galaxies with LoTSS detections, and we will refer to these galaxies as `LoTSS group galaxies'. \subsubsection{Cluster member galaxies} \label{sec:cluster_galaxies} The same membership criterion of $R < 1 \times R_{180}$ and $\Delta v < 3 \times \sigma$ is applied to the cluster sample from \citetalias{roberts2021_LOFARclust}, which gives 1968 star-forming `SDSS cluster galaxies' in 29 clusters ($M_\mathrm{halo} \ge 10^{14}\,\mathrm{M_\odot}$). Star formation rates and stellar masses for cluster galaxies are also taken from the \citet{salim2016,salim2018} catalogue. Star-forming SDSS cluster galaxies are cross matched with the LoTSS source catalogue in the same way as the group galaxies. This gives a sample of 405 `LoTSS cluster galaxies'. In Table~\ref{tab:galaxy_samples} we summarize the sizes of the SDSS galaxy sample, the LoTSS galaxy sample, and the jellyfish galaxy sample as a function of host halo mass. \subsubsection{Field galaxies} \label{sec:field_galaxies} We also construct a sample of isolated `field' galaxies. The field sample consists of all galaxies in single-member groups from the \citetalias{lim2017} catalogue with $M_\mathrm{halo} < 10^{12.5}\,\mathrm{M_\odot}$ (i.e. consistent with an individual galaxy halo) and $z < 0.05$. We then apply an isolation criterion (similar to \citealt{roberts2017}) and only include galaxies which are separated by at least $1000\,\mathrm{kpc}$ and $1000\,\mathrm{km\,s^{-1}}$ from the nearest galaxy with $M_\mathrm{star} \ge 10^{9.7}\,\mathrm{M_\odot}$.
$M_\mathrm{star} = 10^{9.7}\,\mathrm{M_\odot}$ corresponds to the SDSS stellar mass completeness at $z=0.05$ (\citealt{weigel2016}; \citetalias{roberts2021_LOFARclust}); therefore, by only considering galaxy neighbours with $M_\mathrm{star} \ge 10^{9.7}\,\mathrm{M_\odot}$ we ensure that the strictness of this isolation criterion is independent of redshift (over the redshift range of our sample). That said, it does mean that the galaxies in our field sample may not be isolated with respect to galaxies with stellar masses below this limit -- though we reiterate that none of the galaxies in the field sample were assigned to a group by the \citetalias{lim2017} algorithm. \par These criteria give a sample of 8044 star-forming SDSS field galaxies. Again, matching these galaxies to sources in the LoTSS DR2 source catalogue within $3''$ gives 2274 `LoTSS field galaxies'. \subsection{Jellyfish galaxy selection} \label{sec:jellyfish_selection} We take a two-step approach to identifying jellyfish galaxies: first, an automated pre-selection of `jellyfish candidates', and second, by-eye classifications of all of the jellyfish candidates. We pre-select jellyfish candidates with the shape asymmetry parameter ($A_S$, \citealt{pawlik2016}) applied to the LoTSS 144 MHz maps of all LoTSS group galaxies. The shape asymmetry measures the rotational asymmetry of the binary detection maps (segmentation maps) for sources, and is calculated as \begin{equation} A_S = \frac{\sum | X_0 - X_{180} |}{2\times\sum | X_0 |}, \end{equation} \noindent where $X_0$ is the source segmentation map and $X_{180}$ is the segmentation map rotated by $180^\circ$. The shape asymmetry is a non-flux-weighted version of the commonly used CAS asymmetry \citep{abraham1996,conselice2003}, making it particularly sensitive to low surface brightness features such as ram pressure stripped tails. \par For each LoTSS group galaxy we create 144 MHz segmentation maps with the \texttt{photutils.detect\_sources} function in \textsc{Python} with a $3\sigma$ threshold. We then pre-select as jellyfish candidates all LoTSS group galaxies with $A_S > 0.3$. \citetalias{roberts2021_LOFARclust} show that this threshold of $A_S > 0.3$ includes $\sim\!85$\% of visually identified LoTSS jellyfish galaxies in clusters, while excluding $\sim\!70$\% of LoTSS sources in clusters which are not identified as jellyfish. This pre-selection gives 271 jellyfish candidates, which we then visually inspect to build our final sample of LoTSS jellyfish galaxies in groups. We also include all LoTSS field galaxies which have $A_S > 0.3$. `True' field galaxies should not be affected by RPS, so including field galaxies acts as a test of the methodology. For the visual classifications we include field galaxies with $A_S>0.3$ randomly alongside group galaxies with $A_S>0.3$, such that the classifier does not know whether they are inspecting a group galaxy or a field galaxy. Therefore, if we are effective at selecting jellyfish galaxies associated with RPS in dense environments, very few field galaxies should pass this visual inspection. \par For the visual inspections we follow \citetalias{roberts2021_LOFARclust} and make $100\,\mathrm{kpc} \times 100\,\mathrm{kpc}$ $g$-band cutout images from PanSTARRS and overlay 144 MHz flux contours from LoTSS. LoTSS contours are only shown above $2 \times \mathrm{rms}$, where the rms noise is estimated locally from the LoTSS cutouts with sigma-clipped statistics.
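Before turning to the classification criteria, we note that the $A_S$ pre-selection above can be sketched in a few lines. The sketch assumes the galaxy is centred in the cutout and rotates about the image centre, whereas \citet{pawlik2016} minimize $A_S$ over the choice of rotation centre; the function name and parameters are illustrative:

\begin{verbatim}
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.segmentation import detect_sources

def shape_asymmetry(cutout, npixels=5):
    # local rms from sigma-clipped statistics, as in the text
    _, median, rms = sigma_clipped_stats(cutout)
    segm = detect_sources(cutout - median, 3.0 * rms, npixels=npixels)
    if segm is None:                     # no 3-sigma source detected
        return 0.0
    x0 = (segm.data > 0).astype(float)   # binary detection map X_0
    x180 = np.rot90(x0, 2)               # X_180: 180-degree rotation
    return np.abs(x0 - x180).sum() / (2.0 * np.abs(x0).sum())
\end{verbatim}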
As in \citetalias{roberts2021_LOFARclust} we identify jellyfish galaxies as star-forming group galaxies which show `144 MHz emission which is resolved and clearly asymmetric with respect to the stellar disk of the galaxy (as traced by the $g$-band flux)'. We reiterate that we only visually inspect galaxies with $A_S>0.3$, and we only inspect star-forming galaxies and therefore do not expect strong contamination from AGN emission \citepalias{roberts2021_LOFARclust}. We also note that our selection is not sensitive to galaxies with stripped tails along the line-of-sight, as such galaxies may not show clearly asymmetric radio continuum emission when projected in the plane of the sky. This is a source of incompleteness for our sample that is not easily remedied with imaging data alone. Finally, any galaxies that show clear signatures of galaxy-galaxy interactions in their optical images are not included in the jellyfish sample; the same is true for galaxies with close companions on the sky that are at the same redshift as the primary galaxy. This is done to limit the number of selected galaxies whose tails are due to tidal interactions as opposed to RPS. While we cannot say that our sample is completely free of such cases, the results of this work, and of \citetalias{roberts2021_LOFARclust}, are consistent with RPS being the primary driver of tail production in these galaxies. Of the 271 jellyfish candidates in groups, 60 are identified as jellyfish galaxies through visual inspection. This is the largest sample of RPS galaxies in groups identified to date. In Fig.~\ref{fig:example_img} we show an example optical+radio image of a jellyfish galaxy in a $10^{13.1}\,\mathrm{M_\odot}$ group, where we have overlaid the LoTSS 144 MHz flux contours. We show the PanSTARRS+LoTSS overlay images for all of the group jellyfish galaxies in Appendix~\ref{sec:img_appendix}. \par Of the LoTSS field galaxies, 2\% were classified as `jellyfish galaxies' by visual inspection. While this is a non-zero fraction, the proportion of `jellyfish galaxies' in the field sample is clearly below that for the group and cluster samples (see Fig.~\ref{fig:jellyfish_Mh}). Some of these field galaxies may be true jellyfish galaxies in small groups which have been mis-classified by the \citetalias{lim2017} group finder. Alternatively, RPS may be possible, to some extent, in cosmic filaments \citep[e.g.][]{edwards2010,benitez-llambay2013}, which could encompass some of our field sample. Incorrect source association or emission from AGN could also give rise to asymmetric 144 MHz emission in field galaxies. The purpose of this exercise is not to explain the origin of these `jellyfish galaxies' in the field sample (though there are plausible explanations, see above), but instead to get a sense of the false-positive rate of these visual inspections and to understand the limits of this technique. \par Finally, we also include the sample of LoTSS cluster jellyfish galaxies from \citetalias{roberts2021_LOFARclust}. These jellyfish galaxies were also identified from visual inspections in an analogous fashion to the group sample. We only include jellyfish galaxies from \citetalias{roberts2021_LOFARclust} with $A_S>0.3$ (where $A_S$ is measured using the exact method described above) to ensure homogeneity with the group jellyfish galaxies in this work.
\section{How common are jellyfish galaxies in groups versus clusters?} \label{sec:jellyfish_freq} \begin{figure} \centering \includegraphics[width=\columnwidth]{jellyfish_Mh.pdf} \caption{The jellyfish galaxy fraction (relative to all star-forming LoTSS sources) as a function of group/cluster halo mass. Purple triangles show the jellyfish galaxies in groups identified in this work and the red star shows the cluster jellyfish galaxies from \citetalias{roberts2021_LOFARclust}. Vertical error bars are 68\% binomial confidence intervals from \citet{cameron2011} and horizontal error bars show the width of each halo mass bin. The horizontal line shows the fraction of LoTSS sources in the field sample that passed our jellyfish galaxy criteria (see Sect.~\ref{sec:jellyfish_selection}), along with the 90\% confidence region (shaded band).} \label{fig:jellyfish_Mh} \end{figure} In galaxy clusters, on average, both the ICM density and the relative velocities are larger than in groups (for simplicity, we use `ICM' to refer to both the intra-cluster medium and the intra-group medium); therefore, ram pressure stripping should be most prevalent in the cluster environment. With the large sample of jellyfish galaxies that we have identified in groups, we can directly test this prediction. \par In Fig.~\ref{fig:jellyfish_Mh} we plot the fraction of jellyfish galaxies as a function of halo mass, for low-mass groups ($10^{12.5} \le M_\mathrm{halo} < 10^{13}\,h^{-1}\,\mathrm{M_\odot}$), intermediate-mass groups ($10^{13} \le M_\mathrm{halo} < 10^{13.5}\,h^{-1}\,\mathrm{M_\odot}$), high-mass groups ($10^{13.5} \le M_\mathrm{halo} < 10^{14}\,h^{-1}\,\mathrm{M_\odot}$), and galaxy clusters ($M_\mathrm{halo} \ge 10^{14}\,h^{-1}\,\mathrm{M_\odot}$, \citetalias{roberts2021_LOFARclust}). The jellyfish galaxy fraction, $F_\mathrm{jellyfish}$, is defined for each halo mass bin as \begin{equation} \label{eq:Fjelly} F_\mathrm{jellyfish} = \frac{N_\mathrm{jellyfish}}{N_\mathrm{LoTSS}} \end{equation} \noindent where $N_\mathrm{jellyfish}$ is the number of LoTSS jellyfish galaxies and $N_\mathrm{LoTSS}$ is the number of star-forming galaxies detected in LoTSS. We define the jellyfish fractions relative to the number of LoTSS sources in each halo mass bin, instead of the number of SDSS member galaxies in each halo mass bin, due to the different stellar mass completeness of SDSS and LoTSS. The majority of star-forming low-mass galaxies ($M_\mathrm{star} \lesssim 10^{9.5}\,\mathrm{M_\odot}$) in SDSS fall below the sensitivity limit of LoTSS (see \citetalias{roberts2021_LOFARclust} for a more complete discussion); therefore, by defining the jellyfish fraction relative to LoTSS sources we ensure that both the numerator and denominator in Equation~\ref{eq:Fjelly} have similar stellar mass completeness. That said, we have confirmed that when defining $F_\mathrm{jellyfish}$ in terms of SDSS group/cluster galaxies instead of LoTSS group/cluster galaxies, the qualitative trend shown in Fig.~\ref{fig:jellyfish_Mh} still holds. Therefore our choice of denominator in Equation~\ref{eq:Fjelly} is not driving the results of this section. \par In Fig.~\ref{fig:jellyfish_Mh} we see that the jellyfish fraction steadily increases with halo mass, with a factor of $\sim\!4$ difference between low-mass groups and galaxy clusters. This indeed suggests that ram pressure stripping is more prevalent in more massive halos.
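The fractions and error bars in Fig.~\ref{fig:jellyfish_Mh} follow directly from Equation~\ref{eq:Fjelly} and the \citet{cameron2011} beta-distribution prescription for binomial confidence intervals; a short sketch (function name is ours):

\begin{verbatim}
from scipy.stats import beta

def jellyfish_fraction(n_jelly, n_lotss, c=0.683):
    # Binomial population proportion with the Cameron (2011)
    # beta-distribution confidence interval at level c.
    lo = beta.ppf((1.0 - c) / 2.0, n_jelly + 1, n_lotss - n_jelly + 1)
    hi = beta.ppf(1.0 - (1.0 - c) / 2.0, n_jelly + 1,
                  n_lotss - n_jelly + 1)
    return n_jelly / n_lotss, lo, hi

# e.g. the low-mass group bin of Table 1: 14 jellyfish of 378 sources
print(jellyfish_fraction(14, 378))
\end{verbatim}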
The stellar mass distribution for LoTSS sources is very similar across the halo mass bins in Fig.~\ref{fig:jellyfish_Mh}; therefore it is unlikely that the observed trend with halo mass is being influenced by any stellar mass biases. The trend levels off for low-mass groups, as the jellyfish fraction is similar in each of the two lowest halo mass bins. We note that halo mass uncertainties will be highest for the lowest-mass groups; this could lead to systems artificially scattering between the two lowest mass bins, which may contribute to the lack of an observed trend at those masses. For all halo mass bins the jellyfish fraction is larger than the ``false-positive'' rate of 2\% that we find from the field sample, though the jellyfish fractions for the lowest-mass halos do come close to this value. This suggests that while RPS does occur even in these very low mass groups, the vast majority of star-forming galaxies in such systems are not strongly affected. The results in Fig.~\ref{fig:jellyfish_Mh} are consistent with previous works finding a higher fraction of galaxies undergoing RPS in more massive halos. For ram pressure candidates identified from rest-frame optical imaging, \citet{roberts2021_CFIS} find a factor of two increase in the frequency of ram pressure candidates from groups to clusters, and in the Illustris simulation, \citet{yun2019} find a similar halo mass trend for simulated jellyfish galaxies. While the methodologies for identifying RPS galaxies in these studies differ from this work, the qualitative trends between lower mass groups and massive galaxy clusters are consistent throughout. \section{Orbital histories} \label{sec:orbital_hist} Given the differences in velocity dispersions and ICM densities between low-mass groups and high-mass clusters, it is natural to expect that ram pressure stripped galaxies in groups may have different orbital histories than ram pressure stripped galaxies in clusters. Previous work on jellyfish galaxies in clusters suggests that these objects begin to be stripped shortly after infall, before reaching the pericentre of their orbit (e.g. \citealt{yoon2017}; \citealt{jaffe2018}; \citetalias{roberts2021_LOFARclust}). Given weaker ram pressure in the group regime, there may be a substantial delay between galaxy infall and the onset of stripping, which is not seen in clusters. In this section we constrain the orbital histories of jellyfish galaxies in both groups and clusters using two observational tools: the orientation of stripped tails with respect to the group/cluster centre and the position of galaxies in projected phase space. \subsection{Tail orientations} \label{sec:tail_orient} \begin{figure} \centering \includegraphics[width=\columnwidth]{tail_orientation.pdf} \caption{Orientation of jellyfish tails with respect to the group/cluster centre for groups (top) and clusters (bottom). Orientations of $0^\circ$ correspond to tails aligned toward the centre and orientations of $180^\circ$ correspond to tails aligned away from the centre.} \label{fig:tail_orientation} \end{figure} In Fig.~\ref{fig:tail_orientation} we show the distributions of jellyfish tail orientations in groups (top) compared to clusters (bottom).
For both panels, tail directions are measured with the same technique (see \citealt{roberts2020}; \citetalias{roberts2021_LOFARclust}), namely, for each $100\,\mathrm{kpc} \times 100\,\mathrm{kpc}$ PanSTARRs+LOFAR overlay image, the direction of the 144 MHz tail with respect to the optical galaxy centre is assigned an angle between $0^\circ$ and $360^\circ$. The vector along this tail direction is then compared to the vector between the optical galaxy centre and the stellar mass weighted group centre, which gives a tail orientation relative to the group centre. A tail pointing directly toward the group centre corresponds to an orientation of $0^\circ$ and a tail pointing directly away from the group centre corresponds to an orientation of $180^\circ$. \par In Fig.~\ref{fig:tail_orientation} differences are apparent between the distributions of tail orientations for jellyfish galaxies in groups (top) versus clusters (bottom). For clusters, as shown in \citetalias{roberts2021_LOFARclust}, the distribution is clearly peaked at orientations between $120^\circ$ and $180^\circ$, consistent with galaxies being mostly stripped on first infall toward the cluster centre. For groups, the distribution instead peaks most strongly at tail orientations $<\!60^\circ$. This shows that many jellyfish galaxies in groups have tails oriented toward the group centre, consistent with galaxies orbiting away from the centre after a pericentric passage. There is also a significant number of group jellyfish with tail orientations between $120^\circ$ and $180^\circ$, suggestive of a mix of jellyfish galaxies on first infall and jellyfish galaxies backsplashing in the group environment. This interpretation implies that jellyfish galaxies in groups have, on average, longer times-since-infall than jellyfish galaxies in clusters. A natural explanation for this is the stronger ram pressure in clusters, capable of stripping galaxies relatively quickly after infall. In contrast, the onset of stripping in groups may be delayed due to lower ICM densities and galaxy velocities; for example, \citet{oman2021} estimate that groups strip satellites on timescales that are $\sim\!3\,\mathrm{Gyr}$ longer than for clusters, based on observed star formation and \textsc{Hi} properties. The orientations in Fig.~\ref{fig:tail_orientation} hint at this picture, but there are also complications related to the interpretation of such distributions, including projection effects and uncertainties around galaxy orbital parameters. We also note that the tail orientations for the cluster sample are measured with respect to the X-ray centre, whereas tail orientations for the group sample are measured with respect to the stellar mass weighted group centre, which is likely a less reliable tracer of the true minimum of the potential well. X-ray centres are only available for a small fraction of our group sample, therefore using a centre estimate based on galaxy positions is the only viable option, despite the added uncertainties. Below we consider the distributions of group and cluster jellyfish galaxies in projected phase space (PPS), which is another tool to gain insight into group/cluster infall histories. Specifically, we test whether the phase space distributions are consistent with the picture suggested by the tail orientations; namely, longer times-since-infall for jellyfish galaxies in groups versus clusters.
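For illustration, the orientation statistic itself can be computed as follows; this is a minimal sketch with made-up coordinates, whereas the actual measurement is performed on the overlay images as described above:

\begin{verbatim}
# Angle between a jellyfish tail and the direction to the group centre.
# 0 deg = tail points at the centre, 180 deg = tail points away.
import numpy as np

def tail_orientation(tail_vec, gal_pos, centre_pos):
    """All inputs are 2D vectors in the projected (image) plane."""
    to_centre = np.asarray(centre_pos, float) - np.asarray(gal_pos, float)
    t = np.asarray(tail_vec, float) / np.linalg.norm(tail_vec)
    c = to_centre / np.linalg.norm(to_centre)
    cosang = np.clip(np.dot(t, c), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

# Example: a tail pointing roughly opposite the centre direction
print(tail_orientation(tail_vec=(1.0, 0.2), gal_pos=(0, 0),
                       centre_pos=(-1.0, 0.0)))  # ~169 deg
\end{verbatim}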
\subsection{Projected phase space} \label{sec:phase_space} \begin{figure*} \centering \includegraphics[width=\textwidth]{phase_space.pdf} \caption{\textit{Left:} Projected phase space diagrams for groups and clusters. Data markers correspond to jellyfish galaxies in groups (purple triangles) and clusters (red stars), and the background 2D histograms show the phase space distribution for SDSS group galaxies and SDSS cluster galaxies in their respective panels. For reference, we also show the escape velocity caustic for an NFW density profile with the dotted line \citep[e.g.][]{navarro1997,jaffe2015}. \textit{Right:} The excess of jellyfish galaxies, relative to SDSS group/cluster galaxies, in each of the four phase space quadrants. Red stars correspond to jellyfish galaxies in clusters and purple triangles show jellyfish galaxies in groups. Error bars are $1\sigma$ statistical uncertainties following \citet{cameron2011}.} \label{fig:phase_space} \end{figure*} We now consider the positions of group and cluster jellyfish galaxies in PPS (velocity offset versus projected radius). PPS distributions contain valuable information with regard to satellite galaxy infall histories, as recent infallers are typically found at large velocity offsets and/or large projected radii, whereas galaxies with long times-since-infall tend to inhabit the core of PPS at small radius and small velocity offset. \par In Fig.~\ref{fig:phase_space} (left) we plot the PPS distributions for jellyfish galaxies in groups (purple triangles) and jellyfish galaxies in clusters (red stars). We also show the distribution of SDSS group/cluster star-forming galaxies as the background histogram. As in \citetalias{roberts2021_LOFARclust}, we split PPS into four quadrants divided at $\Delta v / \sigma = 1.5$ and $R/R_{180} = 0.5$. Even by eye, there are clear differences apparent between the group and cluster PPS distributions. In clusters there is a substantial population of jellyfish galaxies in quadrant 2, which should contain a high fraction of galaxies on their first infall. This population is notably missing for group jellyfish galaxies; instead, most jellyfish galaxies in groups are found at small velocity offsets and small radii. \par We quantify these trends in the right-hand panel of Fig.~\ref{fig:phase_space} where we plot the `excess' of jellyfish galaxies (relative to SDSS star-forming galaxies) for each of the phase space quadrants. The jellyfish excess is defined as the fraction of the group/cluster jellyfish galaxy sample in each quadrant divided by the fraction of the group/cluster SDSS star-forming sample in each quadrant. Functionally, this is given by \begin{equation} \mathrm{Jellyfish\;excess} = \left. \left(\frac{N_\mathrm{jellyfish}^{Q_i}}{N_\mathrm{jellyfish}}\right) \;\right/\; \left(\frac{N_\mathrm{SDSS}^{Q_i}}{N_\mathrm{SDSS}}\right), \end{equation} \noindent where $N_\mathrm{jellyfish}^{Q_i}$ is the number of LoTSS jellyfish galaxies in each quadrant, $Q_i$, and $N_\mathrm{jellyfish}$ is the total number of LoTSS jellyfish galaxies, and similarly $N_\mathrm{SDSS}^{Q_i}$ is the number of SDSS group/cluster galaxies in each quadrant, $Q_i$, and $N_\mathrm{SDSS}$ is the total number of SDSS group/cluster galaxies. \par As presented in \citetalias{roberts2021_LOFARclust}, there is a clear excess of cluster jellyfish galaxies in quadrant 2, consistent with cluster galaxies experiencing strong ram pressure shortly after infall.
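The quadrant bookkeeping behind this excess statistic is straightforward; a minimal sketch with illustrative inputs (the quadrant numbering here is our own, and every quadrant of the comparison sample is assumed to be populated):

\begin{verbatim}
# Jellyfish excess per projected-phase-space quadrant, relative to the
# SDSS star-forming population; quadrants split at R/R180 = 0.5 and
# |dv|/sigma = 1.5, as in the text.
import numpy as np

def quadrant(r, dv):
    """Index 0..3: 0 = small r & small dv, 1 = large r & small dv,
    2 = small r & large dv, 3 = large r & large dv."""
    return 2 * (np.asarray(dv) >= 1.5) + (np.asarray(r) >= 0.5)

def jellyfish_excess(r_jelly, dv_jelly, r_sdss, dv_sdss):
    qj = quadrant(r_jelly, dv_jelly)
    qs = quadrant(r_sdss, dv_sdss)
    f_jelly = np.bincount(qj, minlength=4) / qj.size
    f_sdss = np.bincount(qs, minlength=4) / qs.size
    return f_jelly / f_sdss  # >1: over-represented, <1: under-represented
\end{verbatim}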
The same is not seen in Fig.~\ref{fig:phase_space} for jellyfish galaxies in groups. Instead, group jellyfish have a phase space distribution much more similar to the SDSS star-forming group galaxy population, with only a small fraction of galaxies at the velocity extremes in PPS. The different PPS distributions for group and cluster jellyfish galaxies are fully consistent with the picture suggested by the tail orientations in Fig.~\ref{fig:tail_orientation}, namely that cluster jellyfish galaxies are largely being stripped on their first infall whereas group jellyfish galaxies have longer times-since-infall and many have already passed their orbital pericentre. \section{Galaxy star formation} \label{sec:sfr} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{SFMS.pdf} \caption{Offset from the star-forming main sequence (SFMS) for jellyfish galaxies in groups (top, triangles) and clusters (bottom, stars). In each panel we also show the offset from the SFMS for group/cluster LoTSS galaxies. The SFMS relation is taken from \citetalias{roberts2021_LOFARclust} and the median offset from the SFMS is shown for jellyfish galaxies (solid line) and LoTSS galaxies (dashed line). Shaded regions show $1\sigma$ errors on the median estimated from 5000 random bootstrap re-samplings.} \label{fig:sfr} \end{figure} Ram pressure stripping is closely tied to galaxy star formation, not only in the sense of quenching, but also through star formation enhancements (prior to substantial gas stripping) which have been predicted by simulations and observed in cluster galaxies \citep[e.g.][]{steinhauser2012,ebeling2014,vulcani2018b,ramos-martinez2018,roberts2020,troncoso-iribarren2020,durret2021}. The origin of these star formation enhancements is often explained in terms of shocks from the ram pressure interaction which induce compression and high gas densities in the galaxy interstellar medium (ISM), in turn catalyzing strong star formation. In groups, ram pressure is relatively weak compared to clusters, therefore it is interesting to explore whether such star formation enhancements are also present in group jellyfish galaxies. For example, it could be that the relatively weak ram pressure in groups does not perturb the galaxy ISM as significantly as in clusters, and therefore comparable enhancements in star formation may not be expected. \par In Fig.~\ref{fig:sfr} we plot the offset from the SFMS for jellyfish galaxies in both groups (top, purple triangles) and clusters (bottom, red stars). We use the best fit SFMS relation from \citetalias{roberts2021_LOFARclust}, which was derived by fitting a powerlaw relationship between SFR and stellar mass for isolated field galaxies over the same redshift range as our group/cluster samples. As a reminder, SFRs for each galaxy are taken from the GSWLC-2 SED fitting catalogue (see Sect.~\ref{sec:galaxy_sample}, \citealt{salim2016,salim2018}). The offset from the SFMS for each jellyfish galaxy is shown with the data markers in Fig.~\ref{fig:sfr} (Groups: purple triangles, Clusters: red stars). We also plot offsets from the SFMS for each group/cluster LoTSS galaxy in the corresponding panel with the grey data points. Finally, the median SFMS offsets for jellyfish galaxies and for LoTSS galaxies are shown in each panel with the solid and dashed lines, respectively. Jellyfish galaxies in clusters are systematically above the SFMS but the same is not apparent for group jellyfish (at most, jellyfish galaxies in groups are marginally above the SFMS).
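For reference, a minimal sketch of the offset computation and the bootstrapped median errors is given below; the SFMS slope and normalisation are placeholders, not the fitted relation of \citetalias{roberts2021_LOFARclust}:

\begin{verbatim}
# Offset from the star-forming main sequence (SFMS) and a bootstrap
# error on the median offset. The slope/normalisation below are
# placeholder values, not the fitted relation used in the paper.
import numpy as np

def sfms_offset(log_mstar, log_sfr, slope=0.7, norm=-7.5):
    """Delta_MS = log SFR - (slope * log M* + norm)."""
    return np.asarray(log_sfr) - (slope * np.asarray(log_mstar) + norm)

def median_with_bootstrap(x, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    meds = [np.median(rng.choice(x, size=len(x), replace=True))
            for _ in range(n_boot)]
    return np.median(x), np.std(meds)  # median and its 1-sigma error
\end{verbatim}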
With these trends in mind, it is important to consider the selection effects given our prerequisite that galaxies be detected at 144 MHz. 144 MHz emission is a good tracer of galaxy star formation \citep{gurkan2018,smith2021}, therefore galaxies selected according to 144 MHz emission will tend to have high SFRs, which will contribute to the positive offsets from the SFMS in Fig.~\ref{fig:sfr}. This is particularly true for low-mass galaxies, as can be seen in Fig.~\ref{fig:sfr} where the majority of low-mass galaxies ($M_\mathrm{star} \lesssim 10^{10}\,\mathrm{M_\odot}$) have positive SFMS offsets. This reflects the fact that in order to be detected at $144\,\mathrm{MHz}$, low-mass galaxies need to have SFRs near or above the SFMS. Conversely, given the correlation between stellar mass and SFR, high-mass galaxies can have SFRs that are below the SFMS but still high enough to be detected at $144\,\mathrm{MHz}$. This emphasizes the importance of constructing a comparison sample of `normal' LoTSS galaxies that are subject to the same selection effects as the LoTSS jellyfish galaxies. To properly gauge the enhancement (or lack thereof) of SFR in jellyfish galaxies, we show the median SFMS offset for non-jellyfish LoTSS group/cluster galaxies with the dashed lines in Fig.~\ref{fig:sfr}. For jellyfish galaxies in groups, the offset from the SFMS is consistent with what is seen from the non-jellyfish LoTSS galaxy sample. We do not find evidence for a true enhancement in SFR for group jellyfish galaxies, and the positive offsets from the SFMS are consistent with the selection function of the LoTSS galaxy sample. Conversely, as shown in \citetalias{roberts2021_LOFARclust} (and reproduced in Fig.~\ref{fig:sfr}), cluster jellyfish galaxies have SFRs which are enhanced relative to the SFMS but are also enhanced relative to LoTSS cluster galaxies. Therefore there is evidence for SFR enhancements in cluster jellyfish galaxies that are not present for jellyfish galaxies in groups. \par Previous results finding observational evidence for enhanced SFRs in RPS galaxies have focused on the galaxy cluster environment \citep{ebeling2014,poggianti2016,vulcani2018b,roberts2020,durret2021}, and these enhancements are also seen in the cluster sample from \citetalias{roberts2021_LOFARclust} and reproduced here (Fig.~\ref{fig:sfr}, bottom). That said, there has been very little work probing the SFRs of galaxies undergoing RPS in lower mass groups. The results of this work are qualitatively consistent with \citet{roberts2021_CFIS}, who show that ram pressure candidate galaxies in groups (identified from rest-frame optical imaging) have SFRs which are only marginally enhanced, compared to much clearer SFR enhancements for the ram pressure candidates in their sample hosted by clusters. The origin of this difference between groups and clusters is not immediately clear, although as previously mentioned, it is possible that the more intense ram pressure in clusters can more strongly perturb the ISM in galaxies, leading to enhanced gas densities and increased star formation. This is largely speculative at this point, though it could be tested with observations of cold gas in both group and cluster jellyfish galaxies, and also through comparisons to hydrodynamic simulations of group and cluster galaxies. \section{Discussion} \label{sec:disc_conc} Here we have presented a contrast between the properties of LoTSS 144 MHz jellyfish galaxies in groups compared to clusters.
We find clear differences between the two environments, all of which are consistent with a picture where galaxies in groups are less strongly affected by RPS than galaxies in clusters. Given the higher ICM densities and velocity dispersions in clusters, less efficient RPS in groups is a natural expectation; however, this is one of the first studies to show such clear evidence for this picture. Below, we discuss our conclusions in the context of a simple toy model for RPS, as well as the implications of these results for the pre-processing of galaxies prior to cluster infall. \subsection{Ram pressure toy model} \label{sec:disc_toymodel} A primary interpretation of the results from this work is that RPS is a more rapid process in clusters than in groups. This can be seen from the tail orientations in Fig.~\ref{fig:tail_orientation} or the phase space diagrams in Fig.~\ref{fig:phase_space}, both of which are consistent with cluster jellyfish galaxies being primarily on first infall (before pericentre) whereas many group jellyfish galaxies are consistent with backsplashing orbits after a pericentric passage. The crux of this interpretation relies on the strength of RPS being relatively modest in groups, such that galaxies are not completely stripped on their first infall. In this section we present a very simple toy model of RPS in order to show that qualitative expectations from such a model are consistent with this picture. We note that this simple approach is not a complete description of RPS; instead, it is meant to show the qualitative variations in RPS timescales between low-mass groups and massive clusters. \par We follow many previous works and model ram pressure stripping through the balance between the strength of ram pressure and the gravitational potential of a galaxy \citep[e.g.][]{gunn1972,rasmussen2008,jaffe2018,roberts2019}. We take an extremely simple galaxy model consisting of a thin exponential stellar disk and a thin exponential gas disk, each with different scale lengths. We note that an exponential disk distribution should also be, roughly, true of galaxy \textsc{Hii} regions that are likely the source of stripped plasma observed in the jellyfish galaxies in this work. The ram pressure, $P_\mathrm{ram}$, and the galaxy anchoring force, $\Pi$, are then given by \begin{align} &P_\mathrm{ram}(R) = \rho_\mathrm{ICM}(R)\,v^2 \\ &\Pi(r) = 2 \pi G \Sigma_\star(r) \Sigma_\mathrm{gas}(r) \end{align} \noindent For a given value of $\rho_\mathrm{ICM}$ and $v$, one can define a `stripping radius', $r_\mathrm{strip}$, within a model galaxy corresponding to the largest galactocentric radius where the inequality, \begin{equation*} \rho_\mathrm{ICM}(R)\,v^2 > 2 \pi G \Sigma_\star(r) \Sigma_\mathrm{gas}(r) \end{equation*} \noindent is satisfied. For our model we assume that for a given $\rho_\mathrm{ICM}$ and $v$, the \textsc{Hi} gas disk is truncated at $r=r_\mathrm{strip}$ due to ram pressure, such that any gas located beyond $r_\mathrm{strip}$ is completely removed from the galaxy. \par For this toy model we consider a model galaxy orbiting through a model galaxy cluster and a model galaxy group, and the relevant parameter values are listed in Table~\ref{tab:toy_model}. We model our toy cluster after the Coma Cluster and we model our toy group after the NGC 4636 group.
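A minimal numerical sketch of this stripping criterion is given below. We assume the standard single-$\beta$ profile, $\rho_\mathrm{ICM}(R)=\rho_0[1+(R/R_c)^2]^{-3\beta/2}$, for the ICM (consistent with the \citealt{chen2007} parameters listed in Table~\ref{tab:toy_model}); the default numerical values are those of the model cluster and model galaxy from that table, and for a thin exponential gas disk the stripped mass fraction outside $r_\mathrm{strip}$ is $(1+x)e^{-x}$ with $x=r_\mathrm{strip}/R_{d,gas}$:

\begin{verbatim}
# Toy ram pressure stripping: solve P_ram = Pi(r) for the stripping
# radius and convert it to a stripped gas fraction f_strip (sketch
# of the criterion above, not the full model of the paper).
import numpy as np
from scipy.optimize import brentq

G, KPC, MSUN = 6.674e-8, 3.086e21, 1.989e33      # cgs units

def rho_icm(R_kpc, rho0=5.0e-27, Rc=343.0, beta=0.654):
    """Single-beta ICM profile; defaults = model cluster (Coma)."""
    return rho0 * (1.0 + (R_kpc / Rc) ** 2) ** (-1.5 * beta)

def sigma_exp(r_kpc, M_msun, Rd_kpc):
    """Thin exponential disk surface density in g/cm^2."""
    Rd_cm = Rd_kpc * KPC
    return M_msun * MSUN / (2 * np.pi * Rd_cm**2) * np.exp(-r_kpc / Rd_kpc)

def f_strip(R_kpc, v_kms, Ms=1e10, Rds=2.0, Mg=3.3e9, Rdg=3.4):
    p_ram = rho_icm(R_kpc) * (v_kms * 1e5) ** 2
    anchor = lambda r: 2 * np.pi * G * sigma_exp(r, Ms, Rds) \
                       * sigma_exp(r, Mg, Rdg) - p_ram
    if anchor(0.0) <= 0.0:            # ram pressure wins everywhere
        return 1.0
    r_strip = brentq(anchor, 0.0, 100.0)          # kpc
    x = r_strip / Rdg                 # exponential-disk gas outside
    return (1.0 + x) * np.exp(-x)     # r_strip: f = (1 + x) e^{-x}

print(f_strip(R_kpc=200.0, v_kms=1500.0))         # heavy stripping
\end{verbatim}

Substituting the group parameters from Table~\ref{tab:toy_model} into the same function should return much smaller stripped fractions except at very small radii and large velocities, mirroring the behaviour in Fig.~\ref{fig:toy_infall}.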
We select these two examples because they are among the most massive (Coma, $M_{180} \sim 2\times10^{15}\,\mathrm{M_\odot}$) and least massive (NGC 4636 Grp, $M_{180} \sim 2\times10^{13}\,\mathrm{M_\odot}$) systems in the \citet{chen2007} sample, and roughly span the entire halo mass range from this work. Therefore the differences in Fig.~\ref{fig:toy_infall} can be thought of as the broad differences expected between the least massive and most massive systems in our sample. \begin{table} \centering \caption{Ram Pressure Model Parameters} \begin{threeparttable} \begin{tabular}{r c c c} \toprule & Model Cluster & Model Group & Ref. \\ \midrule Modelled After: & Coma & NGC 4636 & \\ $\rho_0 \; \mathrm{(g\,cm^{-3})}$: & $5.0 \times 10^{-27}$ & $2.8 \times 10^{-26}$ & a \\ $R_c \; \mathrm{(kpc)}$: & 343 & 6 & a \\ $\beta$: & 0.654 & 0.491 & a \\ $R_{180} \; \mathrm{(kpc)}$: & 2982 & 803 & b,c \\ $\sigma_v \; \mathrm{(km\,s^{-1})}$: & 1082 & 284 & c,d \\ \midrule & Model Galaxy && \\ \midrule $M_\mathrm{star} \; \mathrm{(M_\odot)}$: & $1 \times 10^{10}$ && \\ $R_{d,\star} \; \mathrm{(kpc)}$: & 2.0 && e.g. e,f \\ $M_\mathrm{gas} \; \mathrm{(M_\odot)}$: & $3.3 \times 10^{9}$ && g \\ $R_{d,gas} \; \mathrm{(kpc)}$: & 3.4 && h \\ \bottomrule \end{tabular} \begin{tablenotes} \small \item[a] \citet{chen2007} \item[b] \citet{kubo2007} \item[c] \citet{osmond2004} \item[d] \citet{colless1996} \item[e] \citet{fathi2010} \item[f] \citet{demers2019} \item[g] \citet{brown2015} \item[h] \citet{cayatte1994} \end{tablenotes} \end{threeparttable} \label{tab:toy_model} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{toy_infall.pdf} \caption{Results from the ram pressure stripping toy model for our model cluster (top) and model group (bottom). The grid points are coloured by the fraction of gas mass stripped by the ram pressure toy model ($f_\mathrm{strip}$), and the dashed contour corresponds to a value of $f_\mathrm{strip}=0.5$. All values of $f_\mathrm{strip}$ are calculated for the model galaxy with the parameters described in Table~\ref{tab:toy_model}.} \label{fig:toy_infall} \end{figure} \par In Fig.~\ref{fig:toy_infall} we show the fraction of stripped \textsc{Hi} mass, $f_\mathrm{strip}$ (colourbar), as a function of position in projected phase space, for the model cluster (top) and model group (bottom). The dashed contour in each panel corresponds to $f_\mathrm{strip} = 0.5$. The RPS predictions clearly differ between the model group and cluster, which is driven both by the different ICM density profiles and the different velocity dispersions between the two systems. According to this simple model, substantial fractions (>50\%) of galaxy gas reserves are stripped shortly after passing $R_{180}$ in clusters. For groups this is not the case, and the only region of phase space for groups where $f_\mathrm{strip} > 0.5$ is at very small radii and very large velocity offsets. This shows how RPS can be less efficient in groups relative to clusters. The differences between groups and clusters from this toy model are consistent with our interpretation of the observed trends in this work; namely, that galaxies in clusters are primarily stripped on their first infall whereas galaxies in groups can maintain significant gas reserves beyond first pericentre. \par We reiterate that this is a very simplistic treatment of ram pressure stripping, and is not meant to realistically capture the details of stripping in groups and clusters.
While we take a single group model for illustrative purposes, in reality there is likely significant scatter in the ICM densities for different groups. This scatter in ICM density, and in particular whether a group lies on the high or low density end, likely also plays an important role in determining the efficiency of ram pressure stripping in such low-mass environments. Additionally, this model likely overestimates the amount of gas stripping somewhat, given that we do not include any contributions from a stellar bulge or dark matter halo to the galaxy restoring potential and that we do not include any contribution from the more tightly bound molecular component to the total gas mass. All told, the purpose of this exercise is to illustrate the broad differences in the efficiency of RPS between the group and cluster environments, and to show that galaxies in clusters being stripped shortly after infall is a reasonable expectation, as are stripping timescales in groups extending beyond the first passage of pericentre. \subsection{Implications for pre-processing} \label{sec:disc_preproc} The presence of jellyfish galaxies in groups also has important implications for pre-processing, as RPS is likely relevant for the quenching of satellite star formation in the group regime (albeit less efficiently than for clusters). There have been a number of estimates in the literature for the fraction of cluster galaxies that have been pre-processed, in other words, the fraction of galaxies on the cluster red sequence that were quenched in a lower mass group and subsequently accreted onto the cluster as a passive galaxy. As many as half of present-day cluster galaxies may have been accreted as a group member \citep[e.g.][]{mcgee2009,delucia2012,bahe2013,hou2014}, though not all of those galaxies will have been pre-processed in the sense that not all galaxies infalling as group members will be quenched. More direct constraints on the fraction of pre-processed galaxies have been made, and typically fall between $\sim$10\% and $\sim$30\% \citep{haines2015,roberts2017,olave2018,vanderburg2018,roberts2019}. Depending on the group mass, we find that between 5\% and 15\% of LoTSS-detected star-forming galaxies show signs of RPS in the radio continuum (Fig.~\ref{fig:jellyfish_Mh}). Due to the LoTSS sensitivity limits we are not sensitive to the low SFRs typical of low-mass galaxies around $\sim\!10^9\,\mathrm{M_\odot}$ (assuming a typical SFMS). These low-mass galaxies are expected to be strongly impacted by RPS \citep[e.g.][]{fillingham2015,roberts2019,yun2019,baxter2021,roberts2021_CFIS}, therefore the fractions in Fig.~\ref{fig:jellyfish_Mh} would likely be larger if we could probe down to lower stellar masses. The fractions in Fig.~\ref{fig:jellyfish_Mh} also only account for galaxies which currently show morphological features consistent with RPS, and do not include `post-stripping' galaxies with symmetric gas disks that have already been truncated by ram pressure \citep[e.g.][]{sengupta2007,vollmer2007,jaffe2018,vulcani2018}. \par If RPS is contributing to pre-processing in groups, this implies that galaxies infalling onto clusters as part of a group should already be \textsc{Hi} deficient, to some extent, relative to field galaxies.
This is consistent with \textsc{Hi} observations from the BUDHIES survey which find \textsc{Hi} deficient galaxies in group-mass substructures surrounding the Abell 963 cluster \citep{jaffe2016}, as well as MeerKAT observations finding \textsc{Hi} deficient galaxies in the Fornax A group \citep{kleiner2021}. Other works have also reported evidence for \textsc{Hi} deficient galaxies in groups \citep[e.g.][]{huchtmeier1997,verdes-montenegro2001,denes2016,brown2017}, which based on the results of this work could be driven by RPS. \par Beyond gravitationally bound groups, galaxies may also be pre-processed in cosmic filaments prior to cluster infall. This can be seen from the fact that the fraction of red, quenched galaxies increases toward the central spine of filaments \citep[e.g.][]{kuutma2017,malavasi2017,kraljic2018,salerno2019}. Recently, \citet{bonjean2018} have shown that galaxies in the filament bridge between the Abell 399 and Abell 401 clusters have properties (i.e.\ early-type, passively evolving) indistinguishable from those of galaxies within the clusters. This suggests that galaxy properties are impacted by these dense filamentary environments. It has been suggested that ram pressure could affect galaxies even within cosmic filaments \citep{benitez-llambay2013,vulcani2018}, though given the relatively low gas densities in filaments compared to groups and clusters \citep[e.g.][]{edwards2010,eckert2015,tanimura2020}, the efficiency of RPS in such environments is likely low. It is also likely that the gas and galaxies in filaments are moving more coherently than in groups or clusters, meaning that the relative velocities could be lower and less conducive to RPS. While this is not the focus of this work, it may be possible to constrain the presence, or lack thereof, of RPS in cosmic filaments with LoTSS. LoTSS DR2 covers $\sim\!5700\,\mathrm{deg^2}$ in the northern sky at both high and low galactic latitude. With such a wide area of the extragalactic sky it is possible to probe the properties of filament galaxies in a statistical fashion. Given a sample of galaxies in filaments, for example identified from SDSS spectroscopy or from filament bridges between nearby clusters, a search for potential jellyfish galaxies could then be done with methods similar to those of this work. \section{Summary} \label{sec:summary} In this work we present a search for radio continuum jellyfish galaxies with LOFAR across a sample of $\sim$500 low redshift galaxy groups ($10^{12.5} < M_\mathrm{group} < 10^{14}\,\mathrm{M_\odot}$). We also incorporate the sample of radio continuum jellyfish galaxies in clusters from \citetalias{roberts2021_LOFARclust}, allowing us to contrast the properties of jellyfish galaxies in groups and clusters across three decades in halo mass. The main conclusions from this work are summarized below. \begin{enumerate} \itemsep0.5em \item The frequency of jellyfish galaxies is highest in clusters and lowest in low-mass groups (Fig.~\ref{fig:jellyfish_Mh}). \item We find evidence for weaker ram pressure stripping in groups relative to clusters. Many jellyfish galaxies in groups are consistent with having already passed pericentre, which does not seem to be the case for jellyfish galaxies in clusters (Figs \ref{fig:tail_orientation} \& \ref{fig:phase_space}). \item Unlike jellyfish galaxies in clusters, jellyfish galaxies in groups do not have systematically enhanced star formation rates (Fig.~\ref{fig:sfr}).
\end{enumerate} \noindent The results of this work highlight that ram pressure stripping of galaxies is occurring in groups, and that there are interesting differences between the properties of jellyfish galaxies in groups and clusters. Moving forward, it will be important to obtain detailed, multiwavelength observations of group jellyfish galaxies (e.g. optical IFU, \textsc{Hi}, molecular gas) as has already been done for such galaxies in clusters \citep[e.g.][]{chung2007,poggianti2017,jachym2019,moretti2020}. This will aid in understanding the similarities and differences between the impact of ram pressure on galaxy evolution in both the group and cluster regimes. \begin{acknowledgements} IDR and RJvW acknowledge support from the ERC Starting Grant Cluster Web 804208. SLM acknowledges support from STFC through grant number ST/N021702/1. AB acknowledges support from the VIDI research programme with project number 639.042.729, which is financed by the Netherlands Organisation for Scientific Research (NWO). AI acknowledges the Italian PRIN-Miur 2017 (PI A. Cimatti). \par This paper is based on data obtained with the International LOFAR Telescope (ILT). LOFAR \citep{vanhaarlem2013} is the LOw Frequency ARray designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources) and are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Universit\'e d'Orl\'eans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland; The Istituto Nazionale di Astrofisica (INAF), Italy. This research made use of the Dutch national e-infrastructure with support of the SURF Cooperative (e-infra 180169) and the LOFAR e-infra group. The J\"ulich LOFAR Long Term Archive and the German LOFAR network are both coordinated and operated by the J\"ulich Supercomputing Centre (JSC), and computing resources on the supercomputer JUWELS at JSC were provided by the Gauss Centre for Supercomputing e.V. (grant CHTB00) through the John von Neumann Institute for Computing (NIC). This research made use of the University of Hertfordshire high-performance computing facility (\url{http://uhhpc.herts.ac.uk}) and the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/1], and of the Italian LOFAR IT computing infrastructure supported and operated by INAF, and by the Physics Department of Turin University (under an agreement with Consorzio Interuniversitario per la Fisica Spaziale) at the C3S Supercomputing Centre, Italy.
\par The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. \end{acknowledgements} \bibliographystyle{aa}
\section{Calculation technology} The $B\to D$ TFFs relevant for the semi-leptonic decay $B \to D \ell \bar{\nu}_\ell$ can be parameterized as \begin{widetext} \begin{eqnarray} \langle D(p_D)|\bar{c}\gamma_{\mu}b|B(p_B)\rangle &=& \left[ p_{B\mu} + p_{D\mu} - \frac{m_B^2-m_D^2}{q^2}q_{\mu} \right] f_+(q^2) + \frac{m_B^2-m_D^2}{q^2} q_{\mu} f_0(q^2) \nonumber \\ &=& 2f_+(q^2)p_{D\mu} + [f_+(q^2 )+f_-(q^2)]q_{\mu}, \label{eq:matrix1} \end{eqnarray} \end{widetext} where the momentum transfer is $q=(p_B - p_D)$ and the relation between $f_0(q^2)$ and $f_{\pm}(q^2)$, i.e. $f_0(q^2)=f_+(q^2)+{q^2}/{(m_B^2-m_D^2)}f_{-}(q^2)$, has been adopted. After integrating over the phase space, the differential decay width of $B{\to}D\ell \bar{\nu}_\ell$ over $q^2$ can be written as \begin{widetext} \begin{eqnarray} \frac{d}{dq^2}\Gamma(B{\to}D\ell\bar{\nu}_\ell) = \frac{{G_F^2 |\ensuremath{V_{\mathrm{cb}}}|^2 }}{192\pi^3 m_B^3} \left(1 - \frac{m_\ell^2}{q^2} \right)^2 \left[ \left( {1 + \frac{{m_\ell^2 }}{2q^2}} \right)\lambda ^{\frac{3}{2}} (q^2) |f_{+}(q^2 )|^2 + \frac{{3m_\ell^2 }}{{2q^2 }}(m_B^2 - m_D^2 )^2 \lambda ^{\frac{1}{2}} (q^2 ) |f_{0}(q^2)|^2 \right] , \label{Vcb1} \end{eqnarray} \end{widetext} where $G_F=1.166\times10^{-5} \;{\rm GeV^{-2}}$ is the Fermi constant and $\lambda(q^{2}) = (m_B^2 + m_D^2 - q^2)^2 - 4 m_B^2 m_D^2$ is the phase-space factor. For the case of $\ell = e$ or $\mu$, where $m_{\ell}\to 0$, the term involving $f_0(q^2)$ plays a negligible role. This is the so-called chiral suppression. More specifically, by taking the limit $m_{\ell}\to 0$, we have \begin{equation} \frac{d\Gamma}{dq^2}(B{\to}D\ell\bar{\nu}_\ell) =\frac{G_F^2|\ensuremath{V_{\mathrm{cb}}}|^2} {192 \pi^3m_B^3} \lambda^{\frac{3}{2}}(q^2)|f_{+}(q^2)|^2 . \label{eq:width1} \end{equation} The TFF $f_{+}(q^2)$ is an important component of the semi-leptonic decay and has been calculated within the lattice QCD approach~\cite{Lattice_1}, the pQCD approach~\cite{H_N_Li_1} and the QCD LCSR approach~\cite{ZuoFen_1}. Once the TFF $f_{+}(q^2)$ is well determined, one can extract $|\ensuremath{V_{\mathrm{cb}}}|$ by comparing with the data, i.e. via the following equation \begin{equation} \frac{{\cal B}(B{\to}D\ell\bar{\nu}_\ell)}{\tau(B)} = \int_{0}^{(m_B - m_D )^2 } {dq^2 \frac{{d\Gamma (B{\to}D\ell\bar{\nu}_\ell)}}{{dq^2 }}} . \label{eq:width2} \end{equation} Here $\tau(B)$ stands for the $B$ meson lifetime and ${\cal B}(B{\to}D\ell\bar{\nu}_\ell)$ stands for the branching ratio of $B{\to}D\ell\bar{\nu}_\ell$, both of which are experimentally measurable quantities. \subsection{LCSR for the TFF $f_{+}(q^2)$} For $B$ meson decays to a light meson, the basic quantity of an LCSR calculation is the correlation function of the weak current and an interpolating $B$-meson current, evaluated between the vacuum and the light meson. To isolate the dominant twist-2 contribution and provide a better platform for determining the properties of the twist-2 LCDA, we adopt the following chiral correlation function (i.e. the correlator) for our calculation, \begin{widetext} \begin{eqnarray} \Pi_\mu(p_D,q)&=& i\int d^4x e^{ipx} \langle D(p_D)|{\rm {T}}\{\bar{c}(x) \gamma_{\mu} (1+\gamma_5) b(x),\bar{b}(0)i(1+\gamma_5)d(0)\}|0\rangle \nonumber\\ &=& \Pi(q^2,(p_{D}+q)^2) p_{D\mu} + \bar{\Pi}(q^2,(p_{D} +q)^2)q_\mu . \label{eq:cc} \end{eqnarray} \end{widetext} Instead of using the current $\bar{b} i\gamma_5 d$ for the pseudoscalar $B$ meson, we adopt the chiral current $\bar{b} i(1+\gamma_5) d$, as first suggested in Ref.\cite{huangbpi1}, in our calculation.
The advantage of such a choice is that the contributions from the twist-3 LCDAs are eliminated exactly by the chiral structure of the correlator. This treatment comes at the price of introducing an extra contribution from a scalar $B$ meson with $J^{P}=0^+$, corresponding to the operator $\bar{b} d$. To suppress the error caused by this treatment, one can set the continuum threshold parameter $s_0$ close to, but below, the squared mass of the lowest scalar $B$ meson. This is the reason why, for the improved LCSR approach, the value of $s_0$ is usually taken to be lower than in the conventional LCSR. An uncertainty analysis on the choice of $s_0$ shall be presented in our numerical estimations. On the one hand, for large (negative) virtualities of those currents, the correlator in coordinate space is dominated by distances close to the light cone ($x^2\sim0$) and can be treated within the framework of the light-cone expansion. On the other hand, the same correlator can be written as a dispersion relation in the virtuality of the current coupling to the $B$ meson. Equating the light-cone expansion with the dispersion relation, and separating the lowest-lying $B$ meson contribution from those of higher states via quark-hadron duality, one obtains the required LCSR for the TFFs describing $B\to$ light meson decays. In this way, the LCSR allows the calculation of the properties of non-excited hadron states with a reasonable theoretical uncertainty. Following such standard procedures, we can obtain the LCSR for $f_{+}(q^2)$. To shorten the paper, we only list the main results, together with the new results from the $D$-meson twist-4 terms; interested readers may turn to Ref.\cite{ZuoFen_1} for the detailed calculation technology. Up to twist-4 accuracy, the QCD LCSR for $f_{+}(q^2)$ can be written as \begin{widetext} \begin{eqnarray} {f^ + }({q^2}) &=& \frac{{m_b^2{f_D}}}{{m_B^2{f_B}}}{e^{m_B^2/{M^2}}}\left\{ {\int_\Delta ^1 {du} \exp \left[ { - \frac{{m_b^2 - \bar u({q^2} - um_D^2)}}{{u{M^2}}}} \right]\left[ {\frac{\phi_D(u)}{u}} \right.} \right.\left. { - \frac{{8m_b^2[{g_1}(u) + {G_2}(u)]}}{{{u^3}{M^4}}} + \frac{{2{g_2}(u)}}{{u{M^2}}}} \right]\nonumber\\ &&\left. { + \int_0^1 {dv} \int D {\alpha _i}\frac{{\theta (\xi - \Delta )}}{{{\xi ^2}{M^2}}}\exp \left[ { - \frac{{m_b^2 - \bar \xi ({q^2} - \xi m_D^2)}}{{\xi {M^2}}}} \right]\left[ {2{\varphi _ \bot }({\alpha _i}) + 2{{\tilde \varphi }_ \bot }({\alpha _i}) - {\varphi _\parallel }({\alpha _i}) - {{\tilde \varphi }_\parallel }({\alpha _i})} \right]} \right\} , \label{basicfq2} \end{eqnarray} \end{widetext} in which $\bar{u} = 1 - u$, $\xi = \alpha_1 + v \alpha_3$, $\bar \xi = 1 - \xi$, $G_2(u)=\int_{0}^{u}g_{2}(v)dv $ and the integration lower limit is \begin{eqnarray} \Delta &=& \frac{1}{2m_D^2}\Big[\sqrt{(s_0-q^2-m_D^2)^2+4m_D^2(m_b^2-q^2)} \nonumber\\ && \quad\quad\quad -(s_0-q^2-m_D^2)\Big] .
\end{eqnarray} In addition to the leading-twist DA $\phi_D(u)$, we need to introduce two two-particle and four three-particle twist-4 DAs, which, similar to the kaonic case with $SU_f(3)$-breaking effects, can be expressed as~\cite{pballsum2} \begin{widetext} \begin{eqnarray} {g_1}(u) &=& \frac{{\bar uu}}{6}[ - 5\bar uu(9{h_{00}} + 3{h_{01}} - 6{h_{10}} + 4\bar u{h_{01}}u + 10\bar u{h_{10}}u) + {a_{10}}(6 + \bar uu(9 + 80\bar uu))] \nonumber\\ &&+ {a_{10}}{{\bar u}^3}(10 - 15\bar u + 6{{\bar u}^2})\ln \bar u + {a_{10}}{u^3}(10 - 15u + 6{u^2})\ln u, \nonumber\\ {g_2}(u) &=& \frac{{5\bar uu(u - \bar u)}}{2}[4{h_{00}} + 8{a_{10}}\bar uu - {h_{10}}(1 + 5\bar uu) + 2{h_{01}}(1 - \bar uu)].\label{twist4_2particle} \end{eqnarray} \begin{eqnarray} \varphi_{\perp}(\alpha_i) &=& 30 \alpha_3^2(\alpha_2-\alpha_1)[ h_{00}+h_{01}\alpha_3+\frac{1}{2}\,h_{10}(5\alpha_3-3)] , \nonumber\\ \widetilde{\varphi}_{\perp}(\alpha_i) &=& -30 \alpha_3^2[ h_{00}(1-\alpha_3)+h_{01}\Big[\alpha_3(1-\alpha_3)-6\alpha_1\alpha_2\Big] + h_{10}\Big[\alpha_3(1-\alpha_3)-\frac{3}{2}(\alpha_1^2 +\alpha_2^2)\Big]],\nonumber\\ {\varphi}_{\parallel}(\alpha_i) &=& 120 \alpha_1\alpha_2\alpha_3 [ a_{10} (\alpha_1-\alpha_2)],\nonumber\\ \tilde{\varphi}_{\parallel}(\alpha_i) &=& 120 \alpha_1\alpha_2 \alpha_3 [ v_{00} + v_{10} (3\alpha_3-1)],\label{twist4_3particle} \end{eqnarray} \end{widetext} where \begin{eqnarray} h_{00} & = & v_{00} =-\frac{\delta^2}{3}, \; a_{10} = \delta^2\epsilon-\frac{9}{20} a^D_2 m_D^2,\nonumber\\ v_{10} & = & \delta^2\epsilon, \; h_{01} = \frac{2}{3}\delta^2\epsilon-\frac{3}{20} a^D_2 m_D^2 \nonumber \end{eqnarray} and \begin{displaymath} h_{10} = \frac{4}{3}\delta^2\epsilon+\frac{3}{20} a^D_2 m_D^2 . \end{displaymath} Here, as a rough estimation of the $D$-meson twist-4 contributions, we adopt $\delta^2(1{\rm GeV})=0.20\; {\rm GeV}^2$ and $\epsilon(1{\rm GeV})=0.53$~\cite{pballsum2}. The uncertainty of such an approximation is suppressed by the fact that the twist-4 part itself contributes less than $4\%$ of the total TFF, as will be shown in later discussions. Taking the limit of infinite quark masses, our present TFF $f^+(q^2)$ coincides with the Isgur-Wise function for the TFFs between heavy mesons~\cite{Isgur1,Isgur2}. This shows that, at least at the leading-order level, the LCSR for $f^+(q^2)$ is equivalent to the estimations based on heavy quark symmetry. At the NLO level, heavy quark mass effects may cause differences between the two approaches, which is beyond the scope of the present paper. In order to conveniently compare with the experimental analyses done in the literature, we also present the LCSR for the $B\to D$ TFF within heavy quark symmetry. The non-perturbative matrix element defined in Eq.(\ref{eq:matrix1}) can be treated by taking the heavy quark limit, which results in the following form, \begin{eqnarray} && \langle D(p_D)|\bar{c}\gamma_{\mu}b|B(p_B)\rangle = \nonumber\\ && \sqrt{m_B~m_D}[h_+(w)(v_B+v_D)_\mu+h_-(w)(v_B-v_D)_\mu] , \label{eq:ffv} \end{eqnarray} where the four-velocities are $v_B={p_B}/{m_B}$ and $v_D={p_D}/{m_D}$. The relationship between $f_{+}(q^2)$ and $h_{+ (-)}(w)$ is \begin{equation} f_{+}(q^2)=\frac{m_B+m_D}{2\sqrt{m_B~m_D}}{\cal G}(w), \label{eq:f12} \end{equation} where \begin{displaymath} {\cal G}(w)=h_+(w)-\frac{m_B-m_D}{m_B+m_D}h_-(w) \end{displaymath} and $w$ stands for the product of the $B$ meson and $D$ meson four-velocities, which is defined as \begin{equation} w=v_B\cdot v_D=\frac{m^2_B + m^2_D-q^2}{2 m_B m_D} .
\label{wrelation} \end{equation} When $q^2\to 0$, we get its maximum value, $w_{max}=(m^2_B + m^2_D)/(2 m_B m_D)$. When $q^2\to (m_B -m_D)^2$, we get its minimum value, $w_{min}=1$. Then the LCSR for the TFF ${\cal G}(w)$ takes the following form \begin{widetext} \begin{eqnarray} {\cal G}(w) &=& \frac{{2m_b^2m_D^{1/2}}}{{({m_B} + {m_D})m_B^{3/2}}}\frac{{{f_D}}}{{{f_B}}}{e^{m_B^2/{M^2}}}\bigg\{ \int_\Delta ^1 {du} \exp \bigg[ { - \frac{{m_b^2 - \bar u(m_B^2 + \bar um_D^2 - 2{m_B}{m_D}w)}}{{u{M^2}}}} \bigg]\left[ {\frac{{{\phi _D}(u)}}{u}}\right. \nonumber\\ &&\left. { - \frac{{8m_b^2[{g_1}(u) + {G_2}(u)]}}{{{u^3}{M^4}}} + \frac{{2{g_2}(u)}}{{u{M^2}}}} \right] + \int_0^1 {dv} \int D {\alpha _i}\frac{{\theta (\xi - \Delta )}}{{{\xi ^2}{M^2}}}\exp \left[ { - \frac{{m_b^2 - \bar \xi (m_B^2 + \bar \xi m_D^2 - 2{m_B}{m_D}w)}}{{\xi {M^2}}}} \right]\nonumber\\ && { \times \left[ {2{\varphi _ \bot }({\alpha _i}) + 2{{\tilde \varphi }_ \bot }({\alpha _i}) - {\varphi _\parallel }({\alpha _i}) - {{\tilde \varphi }_\parallel }({\alpha _i})} \right]} \bigg\} . \label{LCSRgw} \end{eqnarray} \end{widetext} \subsection{Models for the leading-twist $D$ meson DA} The leading-twist $D$ meson DA has the asymptotic form, $\phi^{\rm as}_D(x,\mu^2)|_{\mu\to\infty} = 6x\bar x$. In practical applications, we need to know the shape of the $D$ meson DA at low and moderate energy scales. The DA at any scale $\mu$ can be expanded in a Gegenbauer series as \begin{eqnarray} \phi_D(x, \mu^2) = 6x\bar x \sum^\infty_{n=0} a^D_n(\mu^2) C^{3/2}_n(x-\bar x), \label{DA_Gegenbauer} \end{eqnarray} where $C^{3/2}_n(x-\bar x)$ are Gegenbauer polynomials and $a^D_n(\mu^2)$ are Gegenbauer moments. If the DA shape at a scale $\mu_0$ is known, we can invert this expansion to obtain its Gegenbauer moments by using the orthogonality relation of the Gegenbauer polynomials, i.e., \begin{eqnarray} a^D_n(\mu^2_0) = \frac{\int^1_0 dx \; \phi_D(x,\mu^2_0) C^{3/2}_n(x-\bar x)} {\int^1_0 dx \; 6x\bar x[C^{3/2}_n(x-\bar x)]^2}. \label{Gegenbauer_moment} \end{eqnarray} Then, by including the QCD evolution effect, the $D$ meson DA at any scale can be written as~\cite{TFFas} \begin{eqnarray} {\phi_D}(x,\mu^2) = 6x\bar x\sum\limits_{n = 0}^\infty {{a^D_n} (\mu_0^2)} {\left( {\ln \frac{{{\mu^2}}}{{\Lambda _{QCD}^2}}} \right)^{ - {\gamma_n}}}C_n^{3/2}(x - \bar x) . \end{eqnarray} As a pQCD estimation for $B\to D$ decays, by introducing a free parameter $C_d$, Ref.\cite{H_N_Li_1} has suggested a naive model for the $D$ meson DA, i.e. \begin{eqnarray} \phi_{1D}(x)=6x(1-x)[1+C_d(x-\bar x)] . \end{eqnarray} By setting $C_d=0.7$, they predict $|\ensuremath{V_{\mathrm{cb}}}|=0.035\sim0.036$; inversely, taking $|\ensuremath{V_{\mathrm{cb}}}|=0.04$, they predict $C_d=0.4\sim0.5$. A larger value $C_d=0.8$ has also been suggested in Ref.\cite{lucd}. In our calculation, we shall adopt $\phi_{1D}(x)$ as the first DA model in our discussion. On the other hand, the $D$ meson DA can be related to its light-cone wave function (LCWF) $\psi_D(x,\mathbf{k}_\bot)$ via the relation, \begin{eqnarray}\label{DA_WF} \phi_D (x,\mu_0^2) = \frac{{2\sqrt 6 }}{{f_D }} \int_{|{\bf{k}}_\bot|^2 \le \mu_0 ^2 } {\frac{{d{\bf{k}}_ \bot }}{{16\pi ^3 }} \psi_D (x,{\bf{k}}_\bot )}, \end{eqnarray} where $f_{D}$ is the $D$ meson decay constant. Thus one could first construct a reasonable model for the $D$ meson WF and then get its DA.
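The inversion in Eq.~(\ref{Gegenbauer_moment}) is straightforward to carry out numerically. A minimal sketch, using $\phi_{1D}$ with $C_d=0.7$ as input; since $C^{3/2}_1(x-\bar x)=3(x-\bar x)$, one expects $a^D_1=C_d/3$ analytically for this model:

\begin{verbatim}
# Gegenbauer moments a_n of a given DA via the orthogonality relation
# a_n = int phi(x) C_n(2x-1) dx / int 6x(1-x) [C_n(2x-1)]^2 dx,
# with C_n the Gegenbauer polynomial of index 3/2 (note x - xbar = 2x-1).
from scipy.integrate import quad
from scipy.special import gegenbauer

def a_n(phi, n):
    C = gegenbauer(n, 1.5)                      # C_n^{3/2}
    num = quad(lambda x: phi(x) * C(2*x - 1), 0, 1)[0]
    den = quad(lambda x: 6*x*(1 - x) * C(2*x - 1)**2, 0, 1)[0]
    return num / den

Cd = 0.7
phi_1D = lambda x: 6*x*(1 - x) * (1 + Cd*(2*x - 1))
print(a_n(phi_1D, 1))   # ~ Cd/3 = 0.233, since C_1^{3/2}(xi) = 3 xi
print(a_n(phi_1D, 2))   # ~ 0 for this model
\end{verbatim}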
A proper way of constructing the $D$ meson WF/DA with better end-point behavior in the small $x$ and $k_\perp$ region is very important for dealing with high energy processes, especially for pQCD calculations. As suggested, one useful way of modeling the hadronic valence WF is to use an approximate bound-state solution of the hadron in terms of the quark model as the starting point. The BHL prescription~\cite{BHL_1} of the hadronic WF is obtained in exactly this way, by connecting the equal-time WF in the rest frame and the WF in the infinite momentum frame. It shows that the longitudinal and transverse distributions of the WF $\psi_D(x,\textbf{k}_\bot)$ are entangled with each other, which can be constructed as \begin{eqnarray} &&\psi_D(x,\textbf{k}_\bot) \nonumber\\ &&= A_D \varphi_D(x) \exp \left[ - b_D ^2 \left(\frac{{\bf k}_\bot^2 + m_c^2 }{x} + \frac{{\bf k}_\bot^2 + m_d^2}{\bar{x}} \right)\right] , \label{WF1} \end{eqnarray} where $A_D$ is the overall normalization constant. For the $x$-dependent part, similar to the pionic case~\cite{XGWU_1}, we can assume $\varphi_D(x) = [1 + B \times C^{3/2}_2(x - \bar x)]$, in which $B$ is a phenomenological parameter to be fixed by studying $D$-meson-involved processes. In the following, we shall show that the value of $B$ is close to the second Gegenbauer moment, $B\sim a_{2}^D$, which basically determines the broadness of the longitudinal distribution. Moreover, because $m_c \gg m_d$, we shall have a large non-zero first Gegenbauer moment $a_{1}^D$, as suggested in Refs.\cite{H_N_Li_1,lucd}. After integrating out the transverse momentum, we get the second model for the $D$ meson DA, i.e., \begin{widetext} \begin{equation} \phi _{2D} (x, \mu_0^2) = \frac{{\sqrt 6 A_{2D} x\bar{x}}} {{8\pi ^2 f_D b_{2D}^2 }}[1+B\times C_2^{3/2}(x-\bar x)] \exp \left[ { - b_{2D}^2 \frac{{x m_d^2 + \bar{x}m_c^2 }}{{x\bar{x}}}} \right]\left[ 1 - \exp \left( - \frac{{b_{2D}^{2} \mu_0^2 }}{{x\bar{x}}} \right) \right] , \label{phi2d} \end{equation} \end{widetext} where the constituent light quark mass is $m_d\sim300$ MeV from the constituent quark model~\cite{cqm}, and $A_{2D}$ and $b_{2D}$ are undetermined parameters. As a further step, we include the spin-space WF $\chi_D(x,\textbf{k}_\bot)= (\bar{x}m_c+xm_d)/{\sqrt{\mathbf{k}^2_\perp+(\bar{x}m_c+x m_d)^2}}$~\cite{Melosh_1} into the WF, i.e. \begin{eqnarray} \psi'_D(x,\textbf{k}_\bot) = \chi_D(x,\textbf{k}_\bot) \psi_D(x,\textbf{k}_\bot). \label{WF2} \end{eqnarray} This spin-space part comes from the Wigner-Melosh rotation~\cite{Melosh_2}, whose underlying idea is as follows: when one transforms from the equal-time (instant-form) WF to the LCWF, besides the momentum-space WF transformation, one should also consider the Melosh transformation relating equal-time spin WFs to light-cone spin WFs. After integrating over the transverse momentum, we get the third model for the $D$ meson DA, \begin{widetext} \begin{eqnarray} \phi_{3D}(x,\mu_0^2 )&& =\frac{A_{3D} \sqrt{6x\bar{x}} Y} {{8\pi ^{3/2} f_D b_{3D} }}[1+B\times C_2^{3/2}(x-\bar x)] \exp \bigg[ - b_{3D} ^2\frac{{x m_d^2 + \bar{x}m_c^2 - {\rm{Y}}^2 }}{{x\bar{x}}}\bigg] \left[ {{\rm{Erf}}\Big( {\frac{{b_{3D}\sqrt{ {\mu_0 ^2 + {\rm{Y}}^2 } }}}{{\sqrt {x\bar{x}} }}} \Big) - {\rm{Erf}}\Big( {\frac{{b_{3D} {\rm{Y}}}}{{\sqrt {x\bar{x}} }}} \Big)} \right],\nonumber\\ \label{phi3d} \end{eqnarray} \end{widetext} where $A_{3D}$, $B$ and $b_{3D}$ are undetermined parameters.
The error function ${\rm{Erf}}(x)$ is defined as $ {\rm{Erf}} (x)=2\int^x_0{\exp({-t^2})dt} /\sqrt{\pi} $, ${\rm{Y}} = x m_d+\bar{x}m_c$ and $\bar{x} = 1 - x $. As for the second and third WFs, we have two constraints to determine the WF parameters: \begin{itemize} \item The first one is the WF normalization condition \begin{eqnarray} \int^1_0 dx \int \frac{d^2 \textbf{k}_\bot}{16\pi^3} \psi_D(x,\textbf{k}_\bot) = \frac{f_D}{2\sqrt{6}} \;. \label{Pd_1} \end{eqnarray} \item The second one is the probability of finding the leading valence-quark state in the $D$ meson ($P_D$), which is $\simeq 0.8$~\cite{PD_1,PD_2,sunyanjun}. Here the probability $P_D$ is defined as \begin{eqnarray} P_D=\int^1_0{dx \int_{|\mathbf{k}_{\perp}|^2\le \mu_0^2} {\frac{d^2 \mathbf{k}_\perp}{16\pi^3}|\psi_D(x, \mathbf{k}_{\perp} )|^2}}. \end{eqnarray} More specifically, for the above mentioned WF models (\ref{WF1},\ref{WF2}), we obtain \begin{widetext} \begin{eqnarray} {P_{2D}}&=& \frac{{A_{2D}^2}}{{32{\pi ^2}b_{2D}^2}}\int_0^1 {dx} \varphi^2(x)x\bar x\exp \left[ { - 2b_{2D}^2\frac{{m_d^2x + m_c^2\bar x}}{{x\bar x}}} \right] \left[ {1 - \exp \left( { - 2b_{2D}^2\frac{{\mu _0^2}}{{x\bar x}}} \right)} \right] , \\ P_{3D} &=& \frac{{A_{3D} ^2 }}{{16\pi ^2 }}\int_0^1 {dx}\varphi^2(x) {\rm{Y}}^2 \exp \left[ { - 2 b_{3D}^2 \frac{{m_d^2 x + m_c^2\bar x}}{{x\bar x}}} \right] \int_0^{\mu_0 ^2 } {\frac{{dk_ \bot ^2 }}{{k_ \bot ^2 + {\rm{Y}}^2 }}} \exp \left( { - 2b_{3D} ^2 \frac{{k_ \bot ^2 }}{{x\bar x}}} \right) . \end{eqnarray} \end{widetext} \end{itemize} The remaining free parameter $B$ can be fixed by comparing with the data, and then the WF/DA behavior can be fully determined. In combination with the above two constraints, it is noted that by using a proper value of $B$, most of the DA shapes suggested in the literature can be simulated. \subsection{Decay constants for the $B$ and $D$ mesons} The $B$ and $D$ decay constants are two important physical quantities for determining the $B\to D$ TFFs and the $D$ meson DA. A comparative study of the $B$ meson decay constant under several different correlation functions has been done in Ref.\cite{1002.0483}. To be consistent with our present LCSR analysis of the $B\to D$ TFF, we adopt the chiral correlation function in the calculation, i.e. \begin{displaymath} \Pi (q^2 ) = i\int {d^4 x} e^{iq \cdot x} \langle 0|\bar q(x)(1 + \gamma _5 )b(x),\bar b(0)(1 - \gamma _5 )q(0)|0\rangle . \end{displaymath} Following the standard procedure, we can obtain the sum rule for $f_{B}$ up to NLO, \begin{eqnarray} f_B^2&&\frac{{m_B^4}}{{m_b^2}}{e^{ - m_B^2/{M^2}}} = \nonumber\\ &&\frac{3}{{4{\pi ^2}}}\int_{m_b^2}^{{s_0}} {ds\,s\,{e^{ - s/{M^2}}}} {(1 - x)^2}\left[ {1 + \frac{{{\alpha _s}({\mu _{\rm IR}}){C_F}}}{\pi }\rho (x)} \right]\nonumber\\ &&+ {e^{ - m_b^2/{M^2}}}\bigg[ {\frac{1}{6}\langle \frac{{{\alpha _s}}}{\pi }GG\rangle - \frac{{32\pi }}{{27}}\frac{{{\alpha _s}({\mu _{\rm IR}}){{\langle \bar qq\rangle }^2}}}{{{M^2}}}} \nonumber\\ &&\times {\bigg( {1 - \frac{{m_b^2}}{{4{M^2}}} - \frac{{m_b^4}}{{12{M^4}}}} \bigg)} \bigg] , \label{fbsr} \end{eqnarray} where $m_b$ stands for the $b$-quark pole mass, $\mu_{\rm IR}$ is the renormalization scale, $x=m_b^2/s$ and $C_F=4/3$. The parameters $M$ and $s_0$ stand for the Borel parameter and the effective continuum threshold, respectively.
The function $\rho(x)$ determines the spectral density of the NLO correction to the perturbative part, \begin{eqnarray} \rho (x) =&& \frac{9}{4} + 2{{\rm Li}_2}(x) + \ln x\ln (1 - x) - \ln (1 - x)\nonumber\\ &&+ \left( {x - \frac{3}{2}} \right)\ln \frac{{1 - x}}{x} - \frac{x}{{1 - x}}\ln x , \label{rho1} \end{eqnarray} where ${\rm{Li}}_2(x)$ denotes the dilogarithm function. In practice, $\rho(x)$ is first derived in the $\overline{MS}$ scheme, and then transformed into Eq.(\ref{rho1}) with the help of the well-known one-loop relation between the $b$-quark $\overline{MS}$ mass and the pole mass, i.e. \[ \overline{m}_b(\mu_{\rm{IR}})=m_b\left[1+ \frac{ \alpha_s(\mu_{\rm{IR}}) C_F}{4\pi} \left(-4+3\ln\frac{m_b^2} {\mu_{\rm IR}^2}\right)\right]. \] By changing all $B$ meson parameters to the corresponding $D$ meson parameters, we obtain a similar sum rule to Eq.(\ref{fbsr}) for $f_D$. \section{Numerical results and discussions} \subsection{Input parameters} As for the heavy quark masses, we take $m_b=4.85\pm0.05$ GeV and $m_c=1.50\pm0.05$ GeV. For the $B$ and $D$ meson masses, we take $m_B=5.279$ GeV and $m_D=1.869$ GeV~\cite{PDG}. We take the vacuum condensates as~\cite{duplan,cond} \begin{eqnarray} \langle\bar{q}q\rangle(1\;{\rm GeV}) &=& -(0.246^{+0.018}_{-0.019}\;{\rm GeV})^3\nonumber\\ \langle\frac{\alpha_s}{\pi}GG\rangle &=& 0.012^{+0.006}_{-0.012}\;{\rm GeV}^4\nonumber\\ \langle\bar{q}g\sigma\cdot Gq\rangle(1\;{\rm GeV}) &=& (0.8\pm0.2)\; {\rm GeV}^2\langle\bar{q}q\rangle(1\;{\rm GeV}), \nonumber \end{eqnarray} where $q$ denotes a light $u$ or $d$ quark. \subsection{The $B$ and $D$ decay constants} \begin{table}[htb] \begin{center} \begin{tabular}{c| c c c } \hline\hline ~~$m_b/{\rm GeV}$~~ & ~~$s_0/{\rm GeV}^2$~~ & ~~$M^2/{\rm GeV}^2$~~ & ~~$f_B/{\rm GeV}$~~ \\ \hline $4.80$ & $[32.8, 35.9]$ & $[1.93, 2.36]$ & $0.160(5)$ \\ \hline $4.85$ & $[32.5, 34.9]$ & $[1.81, 2.17]$ & $0.141(4)$ \\ \hline $4.90$ & $[32.3, 33.9]$ & $[1.84, 2.00]$ & $0.121(2)$ \\ \hline \hline \end{tabular} \caption{The $B$ meson decay constant $f_B$ up to NLO for $m_b=4.85\pm0.05$ GeV. The number in parentheses shows the uncertainty in the last digit. } \label{tabp1} \end{center} \end{table} \begin{table}[htb] \begin{center} \begin{tabular}{c| c c c } \hline\hline ~~$m_c/{\rm GeV}$~~ & ~~$s_0/{\rm GeV}^2$~~ & ~~$M^2/{\rm GeV}^2$~~ & ~~$f_D/{\rm GeV}$~~ \\ \hline $1.45$ & $[5.07, 5.95]$ & $[0.67, 0.81]$ & $0.180(5)$ \\ \hline $1.50$ & $[5.31, 5.72]$ & $[0.59, 0.73]$ & $0.163(4)$ \\ \hline $1.55$ & $[4.92,5.01]$ & $[0.67, 0.68]~$ & $0.142(6)$ \\ \hline \hline \end{tabular} \caption{The $D$ meson decay constant $f_D$ up to NLO for $m_c=1.50\pm0.05$ GeV. The number in parentheses shows the uncertainty in the last digit. } \label{tab_fd} \end{table} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\textwidth]{fB.eps} \end{center} \caption{The value of $f_B$ versus $m_b\in[4.60,~4.90]$ GeV, in which the errors for some specific points are caused by the choices of $s_0$ and the Borel window within their allowable regions. } \label{fb} \end{figure} The $B$ and $D$ decay constants are usually studied via their leptonic decay channels, earlier discussions of which can be found in Ref.\cite{khlopov}. Here, we determine the $B$ and $D$ decay constants from the sum rule (\ref{fbsr}). The Borel window, i.e.
the allowable range of the Borel parameter $M^2$, and the effective continuum threshold $s_0$ can be determined from three restriction conditions: I) The continuum contribution is not higher than 30\%; II) The dimension-six condensate contribution does not exceed 15\%; III) The deviation of the estimated $B$ meson mass from the experimental result does not exceed 1\%. An LCSR for $m_B$ can be easily derived by taking the derivative of the logarithm of Eq.(\ref{fbsr}) with respect to $1/M^2$, which can be conveniently adopted for determining the $B$ meson mass. The results are presented in Tables \ref{tabp1} and \ref{tab_fd}. Tables \ref{tabp1} and \ref{tab_fd} indicate that the value of $f_B$ or $f_D$ decreases almost linearly with increasing $b$ or $c$ quark mass. This can be seen from Fig.(\ref{fb}), which shows the behavior of $f_B$ versus $m_b$. Here the errors are caused by varying $s_0$ within the region listed in Table \ref{tabp1} and by varying $M^2$ within the allowable Borel window. In the literature, based on the non-relativistic constituent quark model or via an application of the Dyson-Schwinger equation, it is known that $f_{B}|_{m_b\to\infty}\propto 1/\sqrt{m_B}$~\cite{fb0,fb1,fb2,fb3,fb4}. On the other hand, under the QCD sum rule approach, such asymptotic behavior is altered to a certain degree when the non-perturbative terms proportional to the quark and gluon condensates are taken into consideration~\cite{fb5}. A similar linear $m_b$ dependence has also been observed in a recent QCD sum rule analysis~\cite{fb6}. \subsection{The $D$ meson distribution amplitude} \begin{table}[tb] \begin{center} \begin{tabular}{c| c c c c c} \hline \hline ~~$m_c/{\rm GeV}$~~ & ~~$\mu_0/{\rm GeV}$~~ & $~~A_{3D}~~$ & $~~b_{3D}~~$ & $~~A_{2D}~~$ & $~~b_{2D}~~$ \\ \hline & 1 & 416.6 & 0.791 & 514.8 & 0.841\\ \raisebox {2.0ex}[0pt]{1.45} & 2 & 479.9 & 0.812 & 595.8 & 0.862\\ \hline & 1 & 739.9 & 0.854 & 937.2 & 0.902\\ \raisebox {2.0ex}[0pt]{1.50} & 2 & 814.1 & 0.868 & 1033 & 0.915\\ \hline & 1 & 1674. & 0.940 & 2184. & 0.985\\ \raisebox {2.0ex}[0pt]{1.55} & 2 & 1763. & 0.947 & 2301. & 0.991\\ \hline \hline \end{tabular} \caption{The WF parameters $A_{2D}$, $b_{2D}$, $A_{3D}$ and $b_{3D}$ with $m_c=1.50\pm0.05$ GeV and $B=0.00$. The value of $f_D$ is taken as the central value for each $m_c$, and we adopt two initial scales for the DA, i.e. $\mu_0=1$ and $2$ GeV, respectively. } \label{tabp3} \end{center} \end{table} \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{DA_2.eps} \end{center} \caption{The $D$ meson DA $\phi_{3D}(x,\mu^2_{0})$ at $\mu_0=1$ GeV with different $B$, in which we have set $B=0.00$, $\cdots$, $0.60$, respectively. } \label{DA_2} \end{figure} \begin{table}[htb] \begin{center} \begin{tabular}{c | c | c | c} \hline \hline ~~Model~~ & ~~$B$~~ & ~~$a_1^D(\mu^2_0 =1\;{\rm GeV}^2)$~~ & ~~$a_2^D(\mu^2_0 =1\;{\rm GeV}^2)$~~ \\ \hline & 0.00 & 0.625 & 0.056\\ & 0.10 & 0.618 & 0.135\\ II & 0.20 & 0.614 & 0.211\\ & 0.30 & 0.612 & 0.289\\ & 0.40 & 0.611 & 0.370\\ \hline & 0.00 & 0.586 & 0.024\\ & 0.10 & 0.581 & 0.103 \\ III & 0.20 & 0.576 & 0.180\\ & 0.30 & 0.579 & 0.258\\ & 0.40 & 0.576 & 0.341\\ \hline \end{tabular} \caption{The first and second Gegenbauer moments of the $D$ meson leading-twist DAs $\phi_{2D}(x,\mu_0^2)$ and $\phi_{3D}(x,\mu_0^2)$ for typical $B$ within the region of $[0.00,0.40]$. $m_d=0.30$ GeV, $m_c=1.50$ GeV, $P_D=0.8$ and $\mu_0=1$ GeV. } \label{Gegenbauer_moment} \end{center} \end{table} As a combination of the two above-mentioned constraints, i.e.
the normalization condition (\ref{Pd_1}) and the probability $P_D=0.8$, we determine the $D$ meson DA parameters. We put the results for the DA parameters $A_{2D}$, $b_{2D}$, $A_{3D}$ and $b_{3D}$ in Table \ref{tabp3}, where we have set $B=0.00$ as an explicit example and all other parameters are set to their central values. During the calculation, the parameter $B$ is treated as a free parameter for determining the DA models $\phi_{2D}$ and $\phi_{3D}$. We put the $D$ meson DA $\phi_{3D}$ with different choices of $B$ in Fig.(\ref{DA_2}), in which we vary $B$ up to a value as large as $0.60$. It is found that by varying $B$ within a certain region, e.g. $B\in[0, 0.6]$, the $D$ meson DA varies from asymptotic-like to double-humped, and one thus reproduces most of the $D$ meson DAs suggested in the literature. This agrees with our experience on the pion DA~\cite{XGWU_1}. Inversely, by comparing the estimations with the experimental data on processes involving $D$ mesons, one can obtain the possible range of the parameter $B$ and thus a determined behavior of the $D$ meson DA. The first and second Gegenbauer moments $a_1^D(1{\rm GeV}^2)$ and $a_2^D(1{\rm GeV}^2)$ with varying $B\in[0.00,0.40]$ for $\phi_{2D}$ and $\phi_{3D}$ are presented in Table \ref{Gegenbauer_moment}. By setting $B\in[0.00,0.40]$, we get a steady first Gegenbauer moment, i.e. $a^{D}_{1}\sim[0.61,0.63]$ for $\phi_{2D}$ and $a^{D}_{1}\sim[0.57,0.59]$ for $\phi_{3D}$. These values are consistent with those of Ref.\cite{H_N_Li_1}, which is a natural deduction of our present LCDA model. Moreover, we observe that the second Gegenbauer moment $a^{D}_{2}\sim B$, which shows that the parameter $B$ basically determines the broadness of the longitudinal distribution of the $D$ meson DA. \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{DA.eps} \end{center} \caption{Three $D$ meson DAs $\phi_{1D,2D,3D}$ under two different scales, in which we have set $B=0.00$ for $\phi_{2D}$ and $\phi_{3D}$. The curves for $\phi_{2D}$ or $\phi_{3D}$ at the two scales almost coincide with each other. As for $\phi_{1D}$, we set $C_d=0.70$~\cite{H_N_Li_1}. } \label{DA} \end{figure} As a comparison, we present the $D$ meson DAs $\phi_{1D,2D,3D}(x,\mu^2_0)$ in Fig.(\ref{DA}). It shows that the $D$ meson DA shape changes only slightly when varying the scale $\mu_0$ from $1$ GeV to $2$ GeV. A comparison of $\phi_{2D}$ and $\phi_{3D}$ shows that, by including the spin-space WF effect, the DA end-point behavior can be further improved. \subsection{The $B\to D$ transition form factor} Using the QCD LCSR for the $B\to D$ TFF $f^+(q^2)$, we discuss its properties in detail. The TFF $f^+(q^2)$ or ${\cal G}(w)$ depends weakly on the allowable Borel window $M^2\in[15,19]\;{\rm GeV}^{2}$, and we shall fix $M^2$ to be $17\;{\rm GeV}^2$ in our calculation. \begin{figure}[htb] \begin{center} \includegraphics[width=0.5\textwidth]{fM123.eps} \end{center} \caption{The TFF $f^+(q^2)$ for three $D$ meson DAs. The dash-dot, the dotted and the solid lines are for $\phi_{1D}$, $\phi_{2D}$ and $\phi_{3D}$, respectively. For the case of $\phi_{2D}$ and $\phi_{3D}$, we have set $B=0.00$ and $\mu_0=1$ GeV. } \label{fq2phi} \end{figure} We present the TFF $f^+(q^2)$ up to twist-4 accuracy for the $D$ meson DAs $\phi_{1D,2D,3D}$ in Fig.(\ref{fq2phi}). The shapes and trends of the three curves are similar to each other.
The simplest model $\phi_{1D}$, which agrees with that of Ref.\cite{ZuoFen_1} by using the same inputs, provides a much lower $f^+(q^2)$ in the whole $q^2$ region than those of $\phi_{2D}$ and $\phi_{3D}$. Thus, the previously adopted naive DA model $\phi_{1D}$ can only provide a conceptual estimation of $f^+(q^2)$. The TFFs $f^+(q^2)$ for $\phi_{2D}$ and $\phi_{3D}$ are close to each other. This is reasonable, since the TFFs are dominated by the large-$x$ region close to $1$, and $\phi_{2D}$ and $\phi_{3D}$ have similar behaviors in this region. The inclusion of the spin-space WF leads to a more accurate estimation, so we take $\phi_{3D}(x,\mu_0)$ as the $D$ meson DA in the following discussions. \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{fB0123.eps} \end{center} \caption{The TFF $f^+(q^2)$ for the $D$ meson DA $\phi_{3D}$ with different choices of $B$. The solid, the dashed and the dash-dot lines are for $B=0.00$, $0.10$ and $0.20$, respectively. } \label{fq2phi3B} \end{figure} The TFFs for $\phi_{3D}$ with $B=0.00$, $0.10$ and $0.20$ are presented in Fig.(\ref{fq2phi3B}). It shows that $f^+(q^2)$ increases with increasing $B$. This agrees with the trend shown in Fig.(\ref{DA_2}): a bigger $B$ leads to a weaker suppression in the end-point region ($x\to 0$ or $x\to 1$), and thus results in a larger estimation of $f^{+}(q^2)$. \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{fq2.eps} \end{center} \caption{The TFF $f^+(q^2)$ for $\phi_{3D}$ with $B=0.00$ up to twist-4 accuracy. It shows that the twist-2 part provides the dominant contribution, while the twist-4 part gives a quite small negative contribution. }\label{fq2} \end{figure} To compare the relative importance of different twist structures, we present the TFF $f^+(q^2)$ for the twist-2 part only and the total TFF up to twist-4 accuracy in Fig.(\ref{fq2}), where the $D$ meson DA is taken as $\phi_{3D}$ with $B=0.00$. The cases for other $B$ values are similar. As required, Fig.(\ref{fq2}) shows that the twist-2 part provides the dominant contribution, while the twist-4 part gives a quite small (negative) contribution. The twist-4 contribution increases slightly with increasing $q^2$, and at $q^2=12\;{\rm GeV}^2$ the twist-4 part provides a $\sim 4\%$ absolute contribution to the TFF $f^+(12)$. The twist-4 part should be taken into consideration in cases where a physical observable depends sizably on the TFF in the large $q^2$ region. Fig.(\ref{fq2}) also indicates that our present treatment of the $D$-meson twist-4 DAs is viable: the twist-4 DAs for the $D$ meson and the kaon are similar (both are treated as heavy-and-light mesons), and their differences in the total TFF, and hence in the determined $|\ensuremath{V_{\mathrm{cb}}}|$, are highly suppressed by the quite small twist-4 contribution to the integrated TFF in the whole $q^2$ region.
\begin{table}[tb] \begin{center} \begin{tabular}{|c |c |c |c| } \hline ~~Refs.~~ & ~~\cite{Lattice_1}~~ & ~~\cite{Lattice_2}~~ & ~~\cite{Lattice_3}~~ \\ \hline ${\cal G}(1)$ & 1.026(17) & 1.074(24) & 1.058(20) \\ \hline \end{tabular} \caption{The value of the TFF ${\cal G}(w)$ at the minimum recoil point, ${\cal G}(1)$, under the quenched lattice QCD approach~\cite{Lattice_1,Lattice_2,Lattice_3}, where the number in parentheses shows the uncertainty in the last digit.} \label{G1-tab} \end{center} \end{table} In the literature, one always uses ${\cal G}(w)$ for pQCD and experimental analyses, especially for determining the CKM matrix element $|\ensuremath{V_{\mathrm{cb}}}|$, cf. Refs.\cite{Belle_1,Cleo_1,Cleo_2,Cleo_3}. An important input for the experimental fit is ${\cal G}(w=1)$, which is the value of the TFF at the minimum recoil point (corresponding to $q^2=(m_{B}-m_{D})^2$). Theoretically, we have $h_+(1)\to 1$ and $h_-(1)\to 0$ in the framework of the heavy quark effective theory, which results in the limiting behavior ${\cal G}(1)\to 1$. The quenched lattice QCD estimations~\cite{Lattice_1,Lattice_2,Lattice_3}, cf. Table \ref{G1-tab}, show that ${\cal G}(1)\to 1$ could be a good approximation. \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{G1.eps} \end{center} \caption{The TFF ${\cal G}(1)$ by varying $s_0$ within the wide region of $[37,41]\;{\rm GeV}^2$, where the uncertainties for $m_c\in[1.45,1.55]$ GeV, $m_b\in[4.80,4.90]$ GeV and $M^2\in[15,19]\;{\rm GeV}^2$ are presented by shaded bands, respectively. The central solid line is for $m_c=1.50$ GeV, $m_b=4.85$ GeV and $M^2=17\;{\rm GeV}^2$. }\label{G1} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=0.5\textwidth]{Gw.eps} \end{center} \caption{The QCD LCSR for the TFF ${\cal G}(w)$ versus $w$, in which the allowable range for $w$ is $[1.00,1.59]$. The dash-dot, the solid and the dashed lines are for $s_0=37\;{\rm GeV}^2$, $39\;{\rm GeV}^2$ and $41\;{\rm GeV}^2$, respectively. As a comparison, we also present the parametrization (\ref{bellefit}) of the Belle Collaboration~\cite{Belle_1}: the dotted line is the central value for ${\hat{\rho}}^2 _D=0.69$ and ${\hat c_D}=0.00$, the darker shaded band shows the uncertainty of the linear fit and the lighter shaded band is for the quadratic fit. } \label{Gw_1} \end{figure} Using the LCSR (\ref{LCSRgw}), we put our prediction of ${\cal G}(1)$ versus the threshold parameter $s_0$ in Fig.(\ref{G1}), where the uncertainties for $m_c\in[1.45,1.55]$ GeV, $m_b\in[4.80,4.90]$ GeV and $M^2\in[15,19]\;{\rm GeV}^2$ are presented. Our central value lies in $[0.94,1.01]$ for $s_0\in[37,41]\;{\rm GeV}^2$. The value of ${\cal G}(1)$ is steady over the Borel window, changing by less than $2\%$ for $M^2\in[15,19]\;{\rm GeV}^2$. Varying $w$ within its allowable range of $[1.00,1.59]$, the TFF ${\cal G}(w)$ for several continuum thresholds $s_0$ is drawn in Fig.(\ref{Gw_1}). As $s_0$ varies within the wide region from $37\;{\rm GeV}^2$ to $41\;{\rm GeV}^2$, the uncertainty of ${\cal G}(1)$ changes from $\pm7\%$ to $\pm8\%$ for $m_b\in[4.80,4.90]$ GeV and from $\left(^{+13\%}_{-6\%}\right)$ to $\left(^{+14\%}_{-7\%}\right)$ for $m_c\in[1.45,1.55]$ GeV, respectively. Combining the $b$ and $c$ quark mass uncertainties in quadrature, ${\cal G}(1)$ changes by $\left(^{+15\%}_{-9\%}\right)$ at $s_0=37\;{\rm GeV}^2$ and $\left(^{+16\%}_{-11\%}\right)$ at $s_0=41\;{\rm GeV}^2$.
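For reference, $w$ and ${\cal G}(w)$ are tied to $q^2$ and $f^+(q^2)$ by purely kinematic relations, i.e. the standard formulas ${\cal G}(w)=2\sqrt{r}\,f^+(q^2)/(1+r)$ with $r=m_D/m_B$ and $q^2=m_B^2+m_D^2-2m_B m_D w$; the short sketch below makes this mapping and the quoted range $w\in[1.00,1.59]$ explicit (PDG masses as above; the $f^+$ input itself is left as a placeholder callable).
\begin{verbatim}
import numpy as np

m_B, m_D = 5.279, 1.869          # GeV, PDG values
r = m_D/m_B

def q2_of_w(w):
    # q^2 = m_B^2 + m_D^2 - 2 m_B m_D w
    return m_B**2 + m_D**2 - 2.0*m_B*m_D*w

def G_of_w(w, fplus):
    # G(w) = 2 sqrt(r)/(1+r) f^+(q^2); fplus is any callable f^+(q^2)
    return 2.0*np.sqrt(r)/(1.0 + r)*fplus(q2_of_w(w))

w_max = (m_B**2 + m_D**2)/(2.0*m_B*m_D)   # q^2 = 0, maximum recoil
print(f"w range: [1.00, {w_max:.2f}]")    # reproduces [1.00, 1.59]
\end{verbatim}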
Experimentally, ${\cal G}(w)$ is usually parameterized in the following form~\cite{Cleo_1,Cleo_2,Cleo_3,Belle_1}: \begin{eqnarray} {\cal G}_D(w) &=& {\cal G}_D(1)\left[ 1 - {\hat{\rho}} _D^2(w - 1) + {\hat c_D}{(w - 1)^2} \right. \nonumber\\ && \quad\quad\quad \left. + {\cal O}((w - 1)^3) \right], \label{bellefit} \end{eqnarray} in which the undetermined parameters are taken as~\cite{Belle_1} \begin{eqnarray} {\hat{\rho}}^2_D &=& 0.69 \pm 0.14,\quad {\hat c_D} = 0.00 \end{eqnarray} for the linear fit; and \begin{eqnarray} {\hat{\rho}}^2_D &=& 0.69^{+0.42}_{-0.15},\quad {\hat c_D} = 0.00 ^{+0.59}_{-0.00} \end{eqnarray} for the quadratic fit. As a comparison with our theoretical estimations, we have also put the results for the parametrization (\ref{bellefit}) in Fig.(\ref{Gw_1}): the dotted line is the central value for ${\hat{\rho}}^2 _D=0.69$ and ${\hat c_D}=0.00$, the lighter shaded band is the uncertainty of the quadratic fit and the darker shaded band is for the linear fit. Fig.(\ref{Gw_1}) shows that our present prediction of ${\cal G}(w)$ is in good agreement with the data, and is also consistent with the pQCD estimation in the large recoil region~\cite{H_N_Li_1}. \subsection{The matrix element $|\ensuremath{V_{\mathrm{cb}}}|$ and its uncertainties} There are four $B\to D$ semi-leptonic processes that are frequently used to determine the CKM matrix element $|\ensuremath{V_{\mathrm{cb}}}|$, i.e. $B^0 \to D^- \ell^+ \nu_\ell$ and $\bar B^0\to D^+ \ell^- \bar\nu_\ell$, $B^+ \to \bar D^0 \ell^+ \nu_\ell$ and $B^- \to D^0 \ell^- \bar\nu_\ell$. The branching ratios and lifetimes of those processes can be grouped into two types: one is called the ``$B^0/\bar{B}^0$-type'' with~\cite{PDG} \begin{eqnarray} {\cal{B}}(B^0 \to D^- \ell^+ \nu_\ell) &=& {\cal{B}}(\bar B^0\to D^+ \ell^- \bar\nu_\ell ) \nonumber\\ &=& (2.18\pm0.12)\% ,\nonumber\\ \tau (B^0\;{\rm or}\;\bar{B}^0)&=&1.519\pm0.007 \;{\rm ps} ,\nonumber \end{eqnarray} and the other is called the ``$B^{\pm}$-type'' with~\cite{PDG} \begin{eqnarray} {\cal{B}}(B^+ \to \bar D^0 \ell^+ \nu_\ell) &=& {\cal{B}}(B^-\to D^0 \ell^- \bar\nu_\ell ) \nonumber\\ &=& (2.26\pm0.11)\% , \nonumber\\ \tau(B^\pm)&=&1.641\pm0.008 \;{\rm ps} . \nonumber \end{eqnarray} In the following, we shall adopt those two types of processes to determine $|\ensuremath{V_{\mathrm{cb}}}|$. \begin{table}[t] \begin{center} \begin{tabular}{|c | c| c | c| c|} \hline\hline $(B^0\to D^-\ell^ + \nu_{\ell})$ & ~~$|V_{cb}^{\rm Max}|$~~ & ~~$\Delta^+$~~ & ~~$|V_{cb}^{\rm Min}|$~~ & ~~$\Delta^-$~~ \\ \hline $m_b=(4.85\pm0.05)$ GeV & 44.74 & +3.45 & 37.41 & -3.88 \\ \hline $m_c=(1.50\pm0.05)$ GeV & 43.66 & +2.37 & 40.01 & -1.28 \\ \hline $s_0=(39 \pm 2)\;{\rm GeV}^2$ & 44.96 & +3.68 & 38.87 & -2.41 \\ \hline $M^2=(17 \pm 2)\;{\rm GeV}^2$ & 42.36 & +1.08 & 40.43 & -0.86 \\ \hline\hline ${\cal B}=(2.18\pm0.12)\%$ & 42.40 & +1.12 & 40.13 & -1.15 \\ \hline $\tau=(1.519 \pm 0.007)$ ps & 41.38 & +0.10 & 41.19 & -0.10 \\ \hline\hline \end{tabular} \caption{Theoretical and experimental uncertainties for $|V_{cb}|$ (in units of $10^{-3}$) under the $B^0/\bar{B}^0$-type. The central value is $|V_{cb}^{\rm CV}|=41.28$, which is obtained by setting all parameters to their central values. The symbols CV, Max and Min stand for the central value, the maximum value and the minimum value, respectively. The conditions for the $B^{\pm}$-type are similar.
} \label{uncertainty_1} \end{center} \end{table} \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{Vcb_11.eps} \includegraphics[width=0.45\textwidth]{Vcb_12.eps} \end{center} \caption{The uncertainties of $|\ensuremath{V_{\mathrm{cb}}}|$ versus $s_0$ from the QCD LCSR analysis, where the left panel is for the $B^0/\bar{B}^0$-type and the right panel is for the $B^{\pm}$-type. The shaded bands are for the uncertainties of different parameters, which are derived by varying these parameters within their reasonable regions, i.e. $m_b=(4.85\pm0.05)$ GeV, $m_c=(1.50\pm0.05)$ GeV and $M^2=(17 \pm 2)\;{\rm GeV}^2$. The solid line stands for the central values of $|\ensuremath{V_{\mathrm{cb}}}|$. } \label{GBDw} \end{figure} \begin{table}[tb] \begin{center} \begin{tabular}{|c | c | c| } \hline\hline ~~~$B$~~~ & ~~~~~~$B^0/\bar{B}^0$-type~~~~~~ & ~~~~~~$B^{\pm}$-type~~~~~~ \\ \hline 0.00 & $41.28 {^{+5.68}_{-4.82}}~ {^{+1.13}_{-1.16}}$ & $40.44 {^{+5.56}_{-4.72}}~ {^{+0.98}_{-1.00}}$ \\ \hline 0.10 & $39.50 {^{+5.36}_{-4.68}}~ {^{+1.08}_{-1.11}}$ & $38.70 {^{+5.25}_{-4.58}}~ {^{+0.94}_{-0.96}}$ \\ \hline 0.20 & $38.00{^{+ 5.17}_{- 4.59}}~ {^ {+ 1.04}_{- 1.06}}$ & $37.22 {^{+5.06}_{-4.49}}~ {^{+0.90}_{-0.92}}$ \\ \hline\hline \end{tabular} \caption{The value of $|\ensuremath{V_{\mathrm{cb}}}|$ in units of $10^{-3}$ with varying $B$ for the $D$ meson DA. Three choices of $B$, i.e. $0.00$, $0.10$ and $0.20$, are adopted. The central values for $|\ensuremath{V_{\mathrm{cb}}}|$ are obtained by setting all inputs to their central values. The errors are calculated from the theoretical and experimental errors for all inputs, similar to the case of Table \ref{uncertainty_1}. } \label{Vcb_results} \end{center} \end{table} Taking $\phi_{3D}$ with $B=0.00$ as an example, we show how the considered uncertainty sources affect $|\ensuremath{V_{\mathrm{cb}}}|$, i.e., \begin{equation} |V_{cb}|(B^0/\bar{B}^0-{\rm type})=(41.28 {^{+5.68}_{-4.82}}{^{+1.13}_{-1.16}}) \times 10^{-3} \label{Vcb_21} \end{equation} and \begin{equation} |V_{cb}|(B^{\pm}-{\rm type})=(40.44 {^{+5.56}_{-4.72}}~ {^{+0.98}_{-1.00}}) \times 10^{-3}, \label{Vcb_22} \end{equation} in which the first (second) uncertainty comes from the squared average of the mentioned theoretical (experimental) uncertainties shown in Table \ref{uncertainty_1}. That is, the theoretical uncertainty mainly comes from the $c$ and $b$ quark masses, the Borel window and the choice of the threshold parameter $s_0$; the experimental uncertainty comes from the lifetimes and the branching ratios of the mentioned processes. A clear description of those uncertainties is presented in Fig.(\ref{GBDw}). Next, we discuss the variation of $|\ensuremath{V_{\mathrm{cb}}}|$ by taking $\phi_{3D}$ with several choices of $B$, i.e. $B=0.00$, $0.10$ and $0.20$, respectively. The results are put in Table \ref{Vcb_results}. It is noted that the value of $|\ensuremath{V_{\mathrm{cb}}}|$ decreases with increasing $B$. To compare with the experimental estimations of $|\ensuremath{V_{\mathrm{cb}}}|$, we need a smaller $B$ and hence a smaller second Gegenbauer moment. This is, in some sense, consistent with the present analysis of the pion DA, which also prefers an asymptotic behavior with a small second Gegenbauer moment or a small $B$ value~\cite{XGWU_1}.
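To make the extraction procedure transparent, the sketch below folds an assumed ${\cal G}(w)$ shape into the standard differential rate $d\Gamma/dw = G_F^2 m_D^3 (m_B+m_D)^2 (w^2-1)^{3/2} |V_{cb}|^2 {\cal G}^2(w)/(48\pi^3)$ and solves for $|V_{cb}|$ from the measured branching ratio and lifetime; the linear Belle shape with ${\cal G}(1)=1$ is adopted purely for illustration, and indeed it lands in the ballpark of Table \ref{Vcb_results}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

GF, hbar = 1.16638e-5, 6.582e-25      # GeV^-2, GeV*s
m_B, m_D = 5.279, 1.869               # GeV
BR, tau = 2.18e-2, 1.519e-12          # B0-type branching ratio, lifetime (s)

G = lambda w: 1.0 - 0.69*(w - 1.0)    # illustrative: linear fit with G(1)=1
w_max = (m_B**2 + m_D**2)/(2.0*m_B*m_D)

# Gamma/|Vcb|^2 from the standard differential rate dGamma/dw
pref = GF**2*m_D**3*(m_B + m_D)**2/(48.0*np.pi**3)
gamma_tilde = pref*quad(lambda w: (w*w - 1.0)**1.5*G(w)**2, 1.0, w_max)[0]

# |Vcb|^2 = Gamma_exp / (Gamma/|Vcb|^2), with Gamma_exp = BR * hbar / tau
Vcb = np.sqrt(BR*hbar/tau/gamma_tilde)
print(f"|Vcb| ~ {1e3*Vcb:.1f} x 10^-3")
\end{verbatim}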
\begin{table}[tb] \begin{center} \begin{tabular}{|c|c| } \hline\hline ~~~ ~~~ & ~~~~~$|V_{cb}|\times 10^{-3}$~~~~~ \\ \hline BABAR \cite{Babar_1} (ULC) & $39.8(18)(13)$\\ \hline BABAR \cite{Babar_1} (SSM) & $41.6(18)(14)$\\ \hline PDG (Lattice) \cite{PDG} & $39.4(14)(13)$\\ \hline CLEO \cite{Cleo_1} & $45(6)(4)(5)$\\ \hline Belle \cite{Belle_1} & $41.9(45)(53)$\\ \hline QLC \cite{Lattice_1} & $38.4(9)(42)$\\ \hline DELPHI \cite{DELPHI} & $41.4(12)(21)$\\ \hline HQET \cite{HQET} & $40(6)$\\ \hline Our result ($B^0/\bar{B}^0$-type) & $41.28 {^{+5.68}_{-4.82}}~ {^{+1.13}_{-1.16}}$ \\ \hline Our result ($B^\pm$-type) & $40.44 {^{+5.56}_{-4.72}}~ {^{+0.98}_{-1.00}}$ \\ \hline\hline \end{tabular} \caption{A comparison of $|V_{cb}|$ with some estimations done in the literature, in which the first and second errors are for theoretical and experimental uncertainty sources, respectively. The symbol QLC means the quenched lattice calculation and HQET means the heavy quark effective theory. } \label{tabp2} \end{center} \end{table} As a final remark, we present a comparison of $|V_{cb}|$ for $B=0.00$ with the estimations done in the literature. We put such a comparison in Table \ref{tabp2}. Experimentally, the value of ${\cal G}(1)|V_{cb}|$ is determined in a combined way to reduce the uncertainties, and the value of $|V_{cb}|$ is then determined by using theoretical estimations of ${\cal G}(1)$. As for the BABAR collaboration~\cite{Babar_1}, SSM means using ${\cal G}(1)$ determined by the quenched lattice calculation based on the Step Scaling Method~\cite{Lattice_1}, and ULC means using ${\cal G}(1)$ determined by the unquenched lattice calculation~\cite{Lattice_2}. Tables \ref{Vcb_results} and \ref{tabp2} show that our present QCD LCSR estimation of $|\ensuremath{V_{\mathrm{cb}}}|$ for a smaller $B$ is in good agreement with the experimental estimates. \section{Summary} In the present paper, by adopting several $D$ meson DA models, we have presented a detailed discussion on the $B\to D$ TFF $f^{+}(q^2)$ or ${\cal G}(w)$ within the QCD LCSR approach. Based on the sum rules together with the experimental data on $B\to D$ semileptonic decays, we have analyzed the CKM matrix element $|\ensuremath{V_{\mathrm{cb}}}|$, for which a detailed error analysis has been presented. We have calculated the $B\to D$ TFF up to twist-4 accuracy by using the improved QCD LCSR with chiral current. By using the chiral current in the correlator, the most uncertain twist-3 contributions are eliminated due to chiral suppression. It shows that the twist-2 part provides the dominant contribution to the form factor, while the twist-4 part only gives less than a $4\%$ contribution in the whole $q^2$ region. Thus this provides another platform for testing the properties of the twist-2 DA. We have newly suggested a convenient $D$ meson DA model (\ref{phi3d}) based on the BHL prescription together with the Wigner-Melosh rotation effect. As shown by Table \ref{Gegenbauer_moment}, its second Gegenbauer moment is dominantly determined by the parameter $B$, i.e. $a^D_2 \sim B$. The DA shapes for various $B$ are put in Fig.(\ref{DA_2}). By using a proper choice of $B$, most of the DA shapes suggested in the literature can be simulated. Then, by comparing with the data, the value of $B$ can be fixed, and the DA behavior can be determined accordingly. It is noted that, in comparison with the experimental results on $|\ensuremath{V_{\mathrm{cb}}}|$, a smaller $B \lesssim 0.20$ shows a better agreement.
By varying $B\in[0.00,0.20]$, its first Gegenbauer moment $a^{D}_1$ is about $0.6$, consistent with the pQCD suggestion~\cite{H_N_Li_1}. The TFF $f^+(q^2)$ has been calculated by using three different $D$ meson DAs. As shown by Fig.(\ref{fq2phi}), the usual simple model $\phi_{1D}$ leads to the smallest $f^+(q^2)$ and can only be adopted for a conceptual estimation of $f^+(q^2)$. By using $\phi_{3D}$ with a larger $B$ value, a larger $f^{+}(q^2)$ is observed, which is due to less suppression from the DA around the end-point region. A detailed uncertainty analysis of ${\cal G}(1)$ has also been done. As shown by Fig.(\ref{Gw_1}), our present prediction of ${\cal G}(w)$ shows a good agreement with the data. The central value of ${\cal G}(1)$ lies in $[0.94,1.01]$ for $s_0\in[37,41]\;{\rm GeV}^2$, consistent with the HQET limit ${\cal G}(1)\to 1$. The value of ${\cal G}(1)$ is steady over the Borel window, changing by less than $2\%$ for $M^2\in[15,19]\;{\rm GeV}^2$. \begin{figure}[tb] \begin{center} \includegraphics[width=0.5 \textwidth]{Vcb_22.eps} \end{center} \caption{A comparison of $|\ensuremath{V_{\mathrm{cb}}}|$ with experimental and theoretical predictions. Our estimations for $B=0.00$, $0.10$ and $0.20$ are presented. }\label{Vcb_22} \end{figure} The matrix element $|\ensuremath{V_{\mathrm{cb}}}|$ and its uncertainties have been studied by using two types of processes, i.e. the $B^0/\bar{B}^0$-type and the $B^{\pm}$-type. For the case of $B=0.00$, by adding the errors for all mentioned experimental and theoretical uncertainty sources, we obtain $|V_{cb}|(B^0/\bar{B}^0-{\rm type})=(41.28 {^{+6.81}_{-5.98}}) \times 10^{-3}$ and $|V_{cb}|(B^{\pm}-{\rm type})=(40.44 {^{+6.54}_{-5.72}}) \times 10^{-3}$. As a weighted average of these two types we obtain \begin{equation} |V_{cb}| = (40.84\pm3.11)\times 10^{-3} , \;\;(B=0.00) \end{equation} where the error stands for the standard deviation of the weighted average. Similarly, we have \begin{eqnarray} |V_{cb}| &=& (39.08\pm3.03)\times 10^{-3} , \;\;(B=0.10) , \\ |V_{cb}| &=& (37.59\pm2.89)\times 10^{-3} , \;\;(B=0.20) . \end{eqnarray} A comparison of $|\ensuremath{V_{\mathrm{cb}}}|$ with experimental and theoretical predictions is put in Fig.(\ref{Vcb_22}), in which our estimations for $B=0.00$, $0.10$ and $0.20$ are presented. We have also shown how the considered uncertainty sources affect $|\ensuremath{V_{\mathrm{cb}}}|$. The results are presented in Table \ref{Vcb_results}, in which three choices of $B$ are adopted, i.e. $B=0.00$, $0.10$ and $0.20$, respectively. Through a comparison with the experimental data, our present estimation of $|\ensuremath{V_{\mathrm{cb}}}|$ with a small $B$ shows a good agreement with the BABAR, CLEO and Belle estimates. With more and more data available for processes involving $D$ mesons, the $D$ meson DA can finally be determined by a global fit. \hspace{1cm} \noindent{\bf Acknowledgments}: This work was supported in part by the Natural Science Foundation of China under Grant No.11075225 and No.11275280, by the Program for New Century Excellent Talents in University under Grant No.NCET-10-0882, and by the Fundamental Research Funds for the Central Universities under Grant No.CQDXWL-2012-Z002.
\section{Introduction} \label{sec:intro} \vspace{-0.1cm} Object detection in aerial images has wide applications, such as traffic monitoring and disaster search, due to the flexible shooting view and wide field of view. Many effective solutions have been proposed for natural-scene detection\cite{ren2015faster,lin2017focal,cai2018cascade}. However, aerial images pose special challenges different from natural images, such as those in the MS-COCO \cite{lin2014microsoft} and Pascal VOC \cite{everingham2010pascal} datasets. When applying the same strategies as for natural images, aerial detectors usually achieve poor performance. \begin{figure}[t] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=7.0cm]{lastChipProblem}} \end{minipage} \vspace{-0.9cm} \caption{Three problems in chips. (a) \& (b) show the scale variation. Car size varies from small to large. (b) shows a sparse chip which contains only one car. (c) displays the imbalanced quantity distribution of categories in the VisDrone dataset.} \label{figzeros} \end{figure} Recently, cropping-based detectors have been proposed to improve performance in aerial image detection. More specifically, detectors first crop high-resolution images into several subregions, denoted as chips, and detect on them. The final result is obtained by fusing the detections on the chips and on the original images. Many researchers have found the significance of chips in aerial image detection. In \cite{ozge2019power}, the authors split images uniformly. \cite{yang2019clustered} used K-means to generate object gathering regions and trained a network to predict them. The work of \cite{li2020density} introduced object density maps to describe the object distribution and cropped connected regions in the map. The approach of \cite{zhang2019fully} predicted potentially difficult regions and detected on them. However, there are some problems in training networks with chips, as shown in figure \ref{figzeros}. First, severe scale variation exists among different chips. Second, due to the nonuniform object distribution and the shortcomings of the cropping method, some chips are object-sparse samples, which contain much background but little foreground. Third, chips are class-imbalanced in many cases. Therefore, these chips are not suitable for training detectors that can fully exploit their ability. In this paper, we introduce three augmentation methods to relieve the problems of scale variation, object sparsity, and class imbalance for aerial detectors based on the cropping idea. We propose an adaptive cropping module, which dynamically enlarges or reduces the chip size according to the average object scale, narrowing the scale variation. For sparse chips, we introduce mosaic \cite{bochkovskiy2020yolov4} to augment datasets, combining multiple sparse-sample subregions into a new image. To balance classes, we paste object masks into chips with the help of panoptic segmentation. We abbreviate our network as AMRNet after the three augmentation methods: adaptive cropping, mosaic augmentation, and mask resampling. In summary, our contributions are as follows: \vspace{-0.1cm} \begin{itemize} \item We propose a scale-adaptive cropping method, which is compatible with existing cropping methods, relieving the scale variation problem in the training stage. \item We first introduce mosaic augmentation into aerial image detection, validating its effectiveness and alleviating the object sparsity problem. \item We present the mask resampling method, pasting and adjusting masks based on local context information to relieve the class imbalance problem.
\item We achieve state-of-the-art object detection performance on the VisDrone \cite{zhu2018vision} and UAVDT \cite{du2018unmanned} datasets. \end{itemize} \vspace{-0.5cm} \section{Related work} \label{sec:format} \vspace{-0.1cm} In this section, we first review relevant aerial image detection methods, and then discuss the differences and associations among existing approaches and ours. \textbf{Subregion detection.} Many researchers have detected objects on image subregions and studied how to crop images reasonably \cite{ozge2019power,li2020density, zhang2019fully, gao2018dynamic, zhang2019dense}. For example, in \cite{ozge2019power,zhang2019dense}, images are partitioned uniformly into same-size chips for detection. The method in \cite{gao2018dynamic} proposed a dynamic zooming strategy for small objects with reinforcement learning. The work of \cite{yang2019clustered} generated object clusters by K-means and predicted these regions at inference. The method in \cite{li2020density} introduced object density maps and cropped connected regions. In \cite{zhang2019fully}, the authors trained a network to predict difficult regions. The above works reduce intra-sample scale variation by cropping images into chips, but do not consider inter-sample scale variation. \textbf{Data augmentation.} Some special data augmentation methods have been proposed for aerial image detection. The methods of \cite{zhang2019dense, chen2019rrnet} split images into uniform chips to enlarge the dataset. The approach of \cite{kisantal2019augmentation} pasted small objects randomly into images to improve object detection performance. In \cite{chen2019rrnet}, the authors took advantage of semantic segmentation to paste objects on road regions, avoiding the mismatch of semantic information. Motivated by \cite{bochkovskiy2020yolov4}, we introduce mosaic, combining subregions into a new image, to augment datasets and relieve the object sparsity problem. \textbf{Class imbalance.} Researchers have offered several solutions to this problem. In \cite{zhang2019fully}, the authors used IOU (Intersection over Union) balanced sampling and balanced L1 loss to alleviate class imbalance. The work of \cite{zhang2019dense} divided the classes into two parts and trained expert detectors separately. The approach of \cite{chen2019rrnet} pasted object ground truth boxes on road regions obtained from semantic segmentation. We propose the mask resampling method to paste masks into images. Different from \cite{chen2019rrnet}, we only paste instance pixels instead of the whole ground truth box to get a more accurate semantic match. In addition, we consider pasting strategies for object scale, illumination, and category. \begin{figure}[t] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{adaptive_cropping_last.png}} \end{minipage} \caption{Adaptive Cropping Augmentation. The red boxes represent original chips from uniform cropping. Chips are split uniformly (top path) or enlarged (bottom path) to get scale-adaptive chips (yellow boxes).} \label{figone} \vspace{0.1cm} \end{figure} \section{PROPOSED METHOD} \label{sec:pagestyle} \vspace{-0.2cm} Cropping images into chips and performing detection on them is a common method to improve performance in aerial image detection. However, some problems exist in the process of training detectors with these chips. In this section, three augmentation methods are proposed to relieve the scale variation, object sparsity, and class imbalance problems. We take uniform cropping as an example to describe our approaches.
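For concreteness, uniform cropping simply tiles the image with a fixed grid of (possibly overlapping) windows; the minimal sketch below shows this baseline, on top of which the three augmentations operate (the grid size and overlap ratio are illustrative assumptions, not the exact training configuration).
\begin{verbatim}
def uniform_crop(img_w, img_h, rows=2, cols=3, overlap=0.1):
    """Tile an image into rows x cols chips with a small overlap.
    Returns chip boxes (x1, y1, x2, y2) in pixel coordinates."""
    chip_w, chip_h = img_w/cols, img_h/rows
    pad_w, pad_h = chip_w*overlap, chip_h*overlap
    boxes = []
    for r in range(rows):
        for c in range(cols):
            x1 = max(0, c*chip_w - pad_w)
            y1 = max(0, r*chip_h - pad_h)
            x2 = min(img_w, (c + 1)*chip_w + pad_w)
            y2 = min(img_h, (r + 1)*chip_h + pad_h)
            boxes.append((int(x1), int(y1), int(x2), int(y2)))
    return boxes

# Example: a 2000 x 1500 image split into 6 chips, as on VisDrone
chips = uniform_crop(2000, 1500)
\end{verbatim}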
\vspace{-0.5cm} \subsection{Adaptive Cropping} \label{ssec:subhead} \vspace{-0.2cm} A prominent feature of aerial images is the wide range of object scales. Due to the change of shooting angle and elevation, objects exhibit a 20-fold scale variation in VisDrone \cite{zhu2018vision}. Chips inherit a similar characteristic from images, which is not conducive to network training \cite{singh2018analysis, singh2018sniper}. Therefore, we propose a scale-adaptive cropping method to relieve the inter-chip scale variation problem. As shown in figure \ref{figone}, chips from uniform cropping are fed into the scale adaptation module to reconstruct the training dataset. We denote the average scale of the objects in a chip as the chip scale. A partition or padding operation is applied according to the scale information. If the chip scale is small, we split the chip uniformly into four parts. Otherwise, we enlarge it by padding pixels from the original image. Chips generated by the partition operation repeat the process; the process stops once a chip comes from a padding operation or exceeds the maximum iteration number. The above processes change the coverage proportion between objects and chips. When all chips are resized to a fixed resolution in training, objects in different chips are transformed into a similar scale range. We limit the maximum number of partitions to avoid creating too many small chips. The detailed implementation is illustrated in Algorithm 1. For each chip, we compute the average object scale. The ideal and current scaling factors are calculated according to the expected scale parameter and the training resolution. Partition or padding is applied to narrow the difference between the two factors. \begin{algorithm}[!t] \caption{Adaptive cropping } \label{alg::conjugateGradient} \begin{algorithmic}[1] \Require list of chip boxes $B=\{b_{1},...,b_{n}\}$; dict mapping chips to the number of partition operations $I=\{b_{1}:0,...,b_{n}:0\}$; expected scale $S$; training resolution $I_{w},I_{h}$; maximum partition number $maxPart$. \Ensure list of adaptive chip boxes $C$ \State $C\leftarrow$\{\} \While{$B\neq$empty} \For{$b_{i}$ in $B$} \State $B$ $\leftarrow$ $B-b_{i}$ \State $avg_{obj}$ $\leftarrow$ getObjAvgScale($b_{i}$) \State $c_{w},c_{h}$ $\leftarrow$ getBoxSize($b_{i}$) \State $\triangleright$ Calculate ideal and current zoom factor \State $f_{cur}$, $f_{ideal}$ $\leftarrow$ min($\frac{I_{w}}{c_{w}}$,$\frac{I_{h}}{c_{h}}$),$\frac{S}{avg_{obj}}$ \If{$f_{ideal} \textgreater f_{cur}$} \State $\triangleright$ Do partition operation \State num $\leftarrow$ $I[b_{i}]$ \If{num$\geq$$maxPart$} \State $C \leftarrow C \cup b_{i}$ \Else \State $C_{i} \leftarrow$ partition($b_{i}$) \State $B \leftarrow B \cup C_{i}$ \State $I[C_{i}] \leftarrow I[b_{i}] + 1$ \EndIf \Else \State $\triangleright$Do padding operation \State $C_{i} \leftarrow$ padding($b_{i}$) \State $C \leftarrow C \cup C_{i}$ \EndIf \EndFor \EndWhile \end{algorithmic} \end{algorithm} \vspace{-0.5cm} \subsection{Mosaic Augmentation} \vspace{-0.1cm} \begin{figure}[ht] \label{ssec:subhead} \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=6.5cm]{mosaicLast.png}} \end{minipage} \caption{Mosaic augmentation. Some chips from uniform cropping are object-sparse. The subregions of sparse samples are combined into a mosaic image.} \label{figtwo} \end{figure} Some chips have little foreground information, which leads to low efficiency in network training.
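A minimal sketch of the composition step in figure \ref{figtwo} is given below: the foreground regions of several sparse chips are pasted onto a $2\times2$ canvas. The array-based layout and sizes are illustrative assumptions, and how the regions themselves are selected is described next.
\begin{verbatim}
import numpy as np

def make_mosaic(rois, chip=512):
    """Combine four H x W x 3 ROIs into one 2 x 2 mosaic image."""
    canvas = np.zeros((2*chip, 2*chip, 3), dtype=np.uint8)
    offsets = [(0, 0), (0, chip), (chip, 0), (chip, chip)]
    for roi, (oy, ox) in zip(rois, offsets):
        canvas[oy:oy + chip, ox:ox + chip] = roi[:chip, :chip]
    return canvas

# Ground-truth boxes of each ROI are shifted by the same (ox, oy) offsets.
\end{verbatim}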
About one fifth of the chips are sparse samples, which contain fewer than three objects, when we split images uniformly into six parts in VisDrone \cite{zhu2018vision}. Thus, we introduce mosaic \cite{bochkovskiy2020yolov4} to solve the problem. As shown in figure \ref{figtwo}, we crop out the regions of interest (ROIs) containing foreground from sparse samples, and combine multiple regions into a new image. To avoid intra-chip scale variation caused by too small or too large objects in an ROI, we first zoom chips in/out and then use sliding windows to choose appropriate regions where objects are in a reasonable scale range. $f_{ideal}$ in Algorithm 1 is adopted as the zoom factor. We extend the idea to all training samples to augment the dataset. For general samples, we do not rescale them and directly choose appropriate regions, because they contain more objects than sparse samples. Compared with original images, objects in mosaic images have more complicated backgrounds, which helps to detect objects in different contexts. For example, mosaic augmentation relieves the similar-background problem in UAVDT \cite{du2018unmanned}, where images have similar semantic information because they come from series of adjacent video frames. \vspace{-0.6cm} \subsection{Mask Resampling} \label{ssec:subhead} \vspace{-0.2cm} Another notable problem in aerial datasets is class imbalance. For example, the number of cars is over 30 times that of tricycles in the VisDrone \cite{zhu2018vision} dataset. In order to alleviate the class imbalance problem, mask resampling is proposed. We create a mask pool and candidate (road) paste regions by panoptic segmentation, and paste masks into chips. We also consider pasting strategies for mask category, scale, and illumination. To build the mask pool, images are fed into a COCO \cite{lin2014microsoft} pretrained panoptic segmentation network to get object instance masks. If the IOU of a mask and a ground truth box (GT) is greater than a certain threshold, the GT category is assigned to the mask. To ensure semantic correctness, we only collect road masks generated by segmentation to construct the paste regions. Paste positions are randomly sampled from the road regions. We choose an object mask whose category is compatible with the object nearest to the paste position. For example, compatible categories of van include truck and bus, because it is reasonable for these objects to appear in the same local region. The scale of the pasted object P can be calculated by a simple linear function according to the neighbouring object N. \vspace{-0.3cm} \begin{equation}\label{equ8} \begin{split} S_p &= \frac{\overline{S_{pcls}}}{\overline{S_{ncls}}} *S_n \\ \overline{S_{cls}} &= \frac{1}{m}\sum_{i =1}^{m}S_{cls}^{i} \end{split} \end{equation} \vspace{-0.3cm} where $S_p$ and $S_n$ are the scales of the pasted object and the neighbouring object, $\overline{S_{icls}}$ is the class average scale corresponding to the class of object $i$, and $\overline{S_{cls}}$ is the class average scale of class $cls$. We adjust the pasted mask illumination to be close to that of the neighbouring object in the HSV color space before pasting. \vspace{-0.2cm} \section{Experiment} \label{sec:typestyle} \vspace{-0.1cm} \begin{table}[!ht] \begin{center} \caption{Quantitative results for the VisDrone dataset.
The $\bigstar$ denotes multi-scale inference.}\label{tab4} \vspace{-0.4cm} \begin{tabular}{c|c|cccc} \hline \hline Method&Backbone&$AP$&$AP_{s}$&$AP_{m}$&$AP_{l}$\\ \hline ClusDet\cite{yang2019clustered}&ResNet50&26.7&17.6&38.9&51.4\\ ClusDet\cite{yang2019clustered}&ResNet101&26.7&17.2&39.3&54.9\\ ClusDet\cite{yang2019clustered}&ResNeXt101&28.4&19.1&40.8&54.4\\ \hline DMNet\cite{li2020density}&ResNet50&28.2&19.9&39.6&55.8\\ DMNet\cite{li2020density}&ResNet101&28.5&20.0&39.7&57.1\\ DMNet\cite{li2020density}&ResNeXt101&29.4&21.6&41&56.9\\ \hline AMRNet&ResNet50&31.7&23.0&43.4&58.1\\ AMRNet&ResNet101&31.7&22.9&43.4&59.5\\ AMRNet&ResNeXt101&\textbf{32.1}&\textbf{23.2}&\textbf{43.9}&\textbf{60.5}\\ \hline ClusDet\cite{yang2019clustered}$\bigstar$&ResNeXt101&32.4&-&-&-\\ AMRNet$\bigstar$&ResNeXt101&\textbf{36.1}&\textbf{29.0}&\textbf{45.5}&\textbf{60.9}\\ \hline \end{tabular} \end{center} \vspace{-0.8cm} \end{table} \begin{table}[!htb] \begin{center} \caption{Quantitative results for the UAVDT dataset}\label{tab5} \begin{tabular}{c|c|cccc} \hline \hline Method&Backbone&$AP$&$AP_{s}$&$AP_{m}$&$AP_{l}$\\ \hline ClusDet\cite{yang2019clustered}&ResNet50&13.7&9.1&25.1&31.2\\ DMNet\cite{li2020density}&ResNet50&14.7&9.3&26.2&35.2\\ HFEA\cite{zhang2019fully}&ResNet50&15.1&-&-&-\\ Baseline&ResNet50&15.2&9.4&26.3&\textbf{36.8}\\ Base+Mosaic&ResNet50&16.8&\textbf{10.7}&29.8&31.8\\ AMRNet&ResNet50&\textbf{18.2}&10.3&\textbf{31.3}&33.5\\ \hline \end{tabular} \end{center} \vspace{-0.2cm} \end{table} \subsection{Implementation Details} \label{ssec:subhead} We use average precision (AP) as the evaluation metric to validate our methods on two public datasets: VisDrone \cite{zhu2018vision} and UAVDT \cite{du2018unmanned}. Unless otherwise specified, we use RetinaNet \cite{lin2017focal} as the object detector and set the input resolution to 800 $\times$ 1,500. Images are uniformly cropped into 6 and 4 chips as the baseline on VisDrone \cite{zhu2018vision} and UAVDT \cite{du2018unmanned}, respectively. We use three scales, 1,000, 1,500, and 2,000, in multi-scale testing. The detector is trained for 12 and 6 epochs, respectively, with a batch size of 2. On the VisDrone \cite{zhu2018vision} dataset, the learning rate is set to 0.01 and decays by a factor of 0.1 after the 8th and 11th epochs. On the UAVDT \cite{du2018unmanned} dataset, the learning rate is set to 0.005 and decays by a factor of 0.1 after the 4th and 5th epochs. For the two datasets, the expected scale parameter in adaptive cropping is 100 and 60, respectively, with at most one partition operation. The reason we set the parameter to twice the average object scale is that, in the inference stage, chips are upscaled roughly twofold on average. The object scale in mosaic images is limited to be over 50 and 30 pixels, respectively. The number of mosaic images is 20k unless otherwise specified. In mask resampling, we paste all categories except the car class. \vspace{-0.3cm} \subsection{Quantitative Results} \label{ssec:subhead} For fair comparisons, we train the network under the same configuration as \cite{li2020density} on the two datasets. Table \ref{tab4} shows the results on VisDrone \cite{zhu2018vision}. It is noted that we surpass the previous best AP with only ResNet50 as the backbone. We get a large boost when the detector uses multi-scale testing. We think the adaptive cropping module performs well at multiple scales, so the gain in the multi-scale setting almost catches up with that in the single-scale setting. Table \ref{tab5} shows the results of different methods on UAVDT \cite{du2018unmanned}. Images are similar because they come from adjacent frames.
We sample images with a step of five frames and split them uniformly into 2 $\times$ 2 chips to reconstruct the training set. Faster RCNN \cite{ren2015faster} with FPN \cite{lin2017feature} is trained on the new dataset as the baseline. The baseline achieves a higher AP than previous methods. We conjecture that it is not necessary to train networks with all images, due to the background similarity. In addition, the dataset is augmented by uniform cropping to train a more powerful detector. Remarkably, mosaic augmentation boosts AP by 1.6 points compared with the baseline. We think mosaic augmentation, combining subregions and creating complicated images, is suitable for relieving the background-similarity problem. When applying all methods, we achieve 18.2 AP, the state-of-the-art performance. \begin{table}[!tb] \begin{center} \caption{Ablation experiments on the VisDrone dataset. AC, MA, and MR represent our three augmentation methods: adaptive cropping, mosaic augmentation, and mask resampling. 10K images are augmented in MA. SR denotes the sparse sample replacement with mosaic. MS indicates multi-scale testing. }\label{tab3} \renewcommand\tabcolsep{2.0pt} \vspace{-0.1cm} \begin{tabular}{c|cccccccccc|c} &a&b&c&d&e&f&g&h&i&j&k\\ \hline AC&&\checkmark&&&&&\checkmark&&\checkmark&&\checkmark\\ MA&&&\checkmark&&&&&\checkmark&\checkmark&\checkmark&\checkmark\\ MR&&&&\checkmark&&&&&&\checkmark&\checkmark\\ SR&&&&&\checkmark&&&\checkmark&&&\checkmark\\ MS&&&&&&\checkmark&\checkmark&&&&\\ \hline &27.0&29.5&28.8&28.5&27.4&27.6&31.2&29.1&30.6&29.0&30.8\\ \end{tabular} \end{center} \vspace{-0.2cm} \end{table} \vspace{-0.3cm} \subsection{Ablation Study} \label{ssec:subhead} \vspace{-0.1cm} We carry out ablation experiments on VisDrone \cite{zhu2018vision} without fusing original images. Table \ref{tab3} shows the ablation results. The three methods can be independently applied to detectors and steadily improve performance (columns b, c, d, e). It is worth noting that the sparse-sample replacement gains about 0.3 points even though the dataset already adds 10K mosaic images for augmentation (columns c \& h). In addition, we find that adaptive cropping performs well in multi-scale testing. Multi-scale testing increases AP by 0.6 and 1.7 points in the network without/with the AC module, respectively (columns f \& g). The reason is that detectors with the AC module focus on objects in a certain scale range, and multi-scale testing transforms objects into that scale interval. We also study the joint effect between modules. We find that mask resampling gains less, only 0.2 points, when it is combined with mosaic augmentation (columns c \& j). We hypothesize that mosaic images increase the number of rare-class objects, avoiding too few objects for rare categories, which makes the gain overlap with that of mask resampling. AC and MA are the main contributors to the AP increase, with little gain overlap (columns i \& k). \vspace{-0.2cm} \section{Conclusion} \label{sec:typestyle} \vspace{-0.1cm} In this paper, we propose three augmentation methods for aerial image detection. Adaptive cropping reduces inter-chip scale variation by adjusting the coverage proportion between objects and chips. Mosaic augmentation combines multiple image subregions to augment the dataset, relieving the object sparsity problem. Mask resampling balances the object numbers of different classes by pasting instance masks. Extensive results show that our approaches achieve state-of-the-art performance on two popular aerial image detection datasets. All proposed methods are cost-free in the inference stage and easily embedded into other detectors based on the cropping idea. \bibliographystyle{IEEEbib}
\section{Introduction} \noindent In an important {\em Letter}, Walker (1992b) has called attention to what appeared to be a fundamental problem with our knowledge of the properties of the RR Lyrae stars: the Baade-Wesselink (BW) calibration of the RR Lyrae absolute magnitude-metallicity relation (e.g., Carney et al. 1992; see also Storm et al. 1994 for a recent discussion and a comparison between field and cluster results) was shown by him to give a distance modulus for the Large Magellanic Cloud (LMC) that is substantially shorter than indicated by the LMC Cepheids (e.g., Laney \& Stobie 1994) and by the properties of the SN1987A circumstellar envelope (e.g., Crotts et al. 1995). The difference was attributed to a problem in the zero point of the $M_V({\rm RR}) - {\rm [Fe/H]}$ relation: RR Lyrae stars would thus be brighter by $\simeq 0.3\, \mbox{mag}$ than indicated by the BW method. Among the implications of this result stand out a reduction in the ages of globular clusters (GCs) and a decrease in the value of the Hubble parameter $H_0$ (van den Bergh 1995 and references therein). Independent evidence that the RR Lyrae variables should be brighter than suggested by the BW method has been presented by Saha et al. (1992), Catelan (1992), Simon \& Clement (1993), Cacciari \& Bruzzi (1993), Sandage (1993), Dorman (1993), Fernley (1994), Silbermann \& Smith (1995), etc. Castellani \& De Santis (1994) have shown that the BW luminosities cannot be reconciled with the standard models for evolution on the horizontal branch (HB), unless the helium abundance $Y$ is lower than 20\% by mass, and questioned the accuracy of BW analyses. Bono \& Stellingwerf (1994), Bono et al. (1994) and Fernley (1994) have similarly argued that some of the basic assumptions of the BW method may be in error. On the other hand, the latest analyses of statistical parallaxes of Galactic field halo stars (Layden et al. 1994) have reportedly given some support to the BW results. An explanation different from Walker's (1992b) has been proposed by van den Bergh (1995), according to whom the suggested discrepancy between the distance moduli of the LMC that are inferred through analysis of the Cepheid and RR Lyrae variables may {\em not} necessarily imply that the BW absolute magnitudes are incorrect, but rather that a ``third parameter" is acting in the LMC in such a way as to make the LMC RR Lyraes intrinsically brighter than those in the Galaxy by $\simeq 0.3\, \mbox{mag}$. Indeed, the old LMC GCs are somewhat shifted toward redder HB types in the HB morphology$-{\rm [Fe/H]}$ plane. This has often been interpreted (e.g., Walker 1992c) as evidence that the old LMC GCs are younger by a few Gyr than the Galactic globulars. We note, in passing, that the very metal-poor Galactic GCs which do not have extremely blue HB types (e.g., M15 and M68) have {\em not} generally been associated to a younger component of the Galactic halo (e.g., van den Bergh 1993; Zinn 1993), as would be required in the age interpretation of the second-parameter phenomenon. In the present {\em Letter}, we submit van den Bergh's (1995) suggestion to the critical analysis that is enabled by the extensive surveys of RR Lyrae pulsation properties in the LMC GCs by Walker (1989, 1990, 1992a,c). \section{Expected trends in second-parameter candidates} \noindent From the location of the old LMC GCs on the HB morphology-metallicity plane (see, e.g., Fig. 
1 in Catelan \& de Freitas Pacheco 1993), several possibilities emerge for the sense of variation in the known second-parameter candidates: a {\em younger age}; a {\em smaller amount of mass lost during the red giant branch (RGB) phase}; a {\em lower Y}; a {\em lower helium-core mass at the helium flash} ($M_{\rm c}$); or a {\em higher relative abundance of the $\alpha$-capture elements}. These trends are well known from studies of the evolution of HB stars (e.g., Sweigart \& Gross 1976). The trend of variation of the HB morphology with [$\alpha$/Fe] was inferred from the analysis of Salaris et al. (1993). Quantitative estimates of the required changes in these candidates are, unfortunately, difficult to obtain (Catelan \& de Freitas Pacheco 1993). \noindent -- {\em Age}: the primary effect of age variations is upon the masses attained by HB stars. As far as the absolute magnitudes of the RR Lyrae variables are concerned, age changes are essentially irrelevant. Thus, the age interpretation of the second-parameter phenomenon would not naturally produce brighter HBs in the LMC; \noindent -- {\em Mass loss on the RGB}: like age, mass loss by stellar winds from the envelopes of RGB stars acts only to reduce the expected masses on the HB phase, and has but little impact upon RR Lyrae luminosities; \noindent -- {\em Helium abundance}: a higher $Y$ in HB stars would produce brighter RR Lyrae variables in the LMC, in comparison with the Galactic ones, but also bluer HB types; \noindent -- {\em Helium-core mass}: a higher $M_{\rm c}$ in the LMC, in comparison with the Galactic values, might originate, for instance, in higher stellar rotation rates (see discussion and references in Catelan et al. 1996). This would produce brighter RR Lyrae variables, but would also lead to bluer HB morphologies; \noindent -- {\em Abundances of the $\alpha$-elements}: since an overabundance of the $\alpha$-elements may, in a first approximation, be interpreted in terms of a higher metallicity $Z$ for a given [Fe/H] ratio, it follows that a smaller [$\alpha$/Fe] ratio might account for a brighter HB in the LMC. It may be noted that existing chemical evolution models (cf. Fig. 4 in Matteucci \& Brocato 1990) suggest that, at low metallicities, [$\alpha$/Fe] may be lower in the LMC than in the Galaxy. Observational element ratios at the low metallicities that characterize LMC GCs are badly needed. However, a smaller [$\alpha$/Fe] ratio would also lead to bluer HBs. To be sure, there are possible combinations of variations in these parameters that could account for brighter RR Lyraes and redder HB types simultaneously. It is conceivable, for instance, that a higher $M_{\rm c}$ could lead to a brighter HB in the LMC, {\em provided that} the LMC GCs were {\em much} younger than their Galactic counterparts (so as to match their observed HB types). To put more stringent constraints on variations in the second-parameter candidates, analysis of the RR Lyrae pulsation properties is necessary. \section{Constraints from RR Lyrae pulsation properties} \subsection{Mean pulsation periods} \noindent Light curves have been obtained by A. Walker for several GCs of the LMC. We have compiled data for the RR Lyrae-rich objects NGC 2257 (Walker 1989), NGC 1841 (Walker 1990), Reticulum (Walker 1992a), and NGC 1466 (Walker 1992c). This homogeneous database, supplemented by a few entries from Nemec et al. (1985) for NGC 2257, will be employed in the present discussion. 
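For definiteness, the mean-period statistics collected in Table 1 below are straight averages over each cluster's variables, with the first-overtone (c-type) periods ``fundamentalized'' as $\log P = \log P_{\rm c} + 0.13$; a minimal sketch, with hypothetical example periods rather than the actual Walker data, is the following.
\begin{verbatim}
import numpy as np

def mean_periods(P_ab, P_c):
    """Mean <log P_f> and <log P_ab> for a cluster, given the RRab
    and RRc periods in days; RRc periods are fundamentalized via
    log P = log P_c + 0.13."""
    logP_f = np.concatenate([np.log10(P_ab), np.log10(P_c) + 0.13])
    return logP_f.mean(), np.log10(P_ab).mean()

# Hypothetical example periods (days), not the actual cluster data:
mean_f, mean_ab = mean_periods(P_ab=[0.55, 0.60, 0.58], P_c=[0.32, 0.35])
\end{verbatim}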
\begin{table*} \caption[]{Pulsation properties of LMC and Galactic GCs} \begin{flushleft} \begin{tabular}{lllllllll} \noalign{\smallskip} \hline \noalign{\smallskip} & Cluster & [Fe/H] & $(B-R)/(B+V+R)$ & $\langle \log P_{\rm f} \rangle $ & $\langle \log P_{\rm ab} \rangle$ & $\langle \Delta \log P(T_{\rm eff}) \rangle$ & $N_{\rm RR}$ & $N_{\rm ab}$ \\ \hline LMC & Reticulum & $-1.71$ & $-0.04$ & $-0.283$ & $-0.260$ & $+0.022$ & 31 & 22 \\ & NGC 2257 & $-1.8$ & $+0.49$ & $-0.291$ & $-0.245$ & $+0.041$ & 31 & 15 \\ & NGC 1466 & $-1.85$ & $+0.40$ & $-0.273$ & $-0.234$ & $+0.037$ & 39 & 23 \\ & NGC 1841 & $-2.11$ & $+0.72$ & $-0.209$ & $-0.172$ & $+0.072$ & 22 & 17 \\ Galaxy & M3 & $-1.66$ & $+0.08$ & $-0.276$ & $-0.259$ & $+0.025$ & 179 & 148 \\ & M15 & $-2.15$ & $+0.72$ & $-0.250$ & $-0.188$ & $+0.060$ & 67 & 29 \\ \hline \end{tabular} \end{flushleft} \end{table*} \begin{figure*} \rule{0.4pt}{14.5cm} \hfill \parbox[b]{4.25cm}{\caption{ $P - T_{\rm eff}$ plane for RR Lyrae variables in old LMC GCs. The mean period shifts obtained at fixed $T_{\rm eff}$ with respect to the lower envelope of the M3 distribution (bottom panel) are given, together with the cluster [Fe/H] ratios. The lower envelope of the M3 distribution is reproduced in each panel for clarity.}}% \label{none}% \end{figure*} Mean pulsation periods for these GCs can be found in Table 1. Both mean ``fundamentalized'' periods (obtained by scaling the RRc periods as $\log P = \log P_{\rm c} + 0.13$) and mean RRab Lyrae periods are given, together with the number of variables employed in the analysis and the HB morphology. The [Fe/H] ratios were obtained from Walker 1992c or (in the cases of Reticulum and NGC 1841) from Suntzeff et al. 1992. Mean period shifts over all ab-type RR Lyraes, obtained at fixed effective temperature with respect to the lower envelope of the M3 distribution (cf. Fig. 1), are also displayed. For comparison purposes, also given are the corresponding values for the Galactic GCs M3 and M15, for which the mean periods were drawn from Castellani \& Quarta 1987. From Catelan's (1993) synthetic HB models for $Z = 4 \times 10^{-4}$ and $\sigma_M = 0.02\, M_{\sun}$ (where $\sigma_M$ is the mass dispersion on the HB), one finds that \begin{equation} { {\rm d} \langle \log P_{\rm f} \rangle \over {\rm d} Y} \approx 1.6 \,\,\mbox{ and } \,\, { {\rm d} \langle \log L({\rm RR}) \rangle \over {\rm d} Y} \approx 1.8. \end{equation} \noindent Uncertainties in such slope values are typically estimated to be of order 10\%. In order to produce a $\Delta M_{\rm bol} = 0.3\, \mbox{mag}$, this suggests that the helium abundance in the LMC should be larger than in the Galaxy by \begin{equation} \Delta Y \approx +0.07. \end{equation} \noindent According to Eqs. (1), this would imply an increase in the mean fundamentalized periods, in comparison with the Galactic values, by \begin{equation} \Delta \langle \log P_{\rm f} \rangle \approx +0.11. \end{equation} \noindent Table 1 does seem to rule out such a large difference in the mean periods. Similar arguments apply to an increase in $M_{\rm c}$. For instance, from the Caputo et al. (1987) synthetic HB models for $Y_{\rm MS} = 0.20$ and $Z = 4 \times 10^{-4}$, one finds that \begin{equation} { {\rm d} \langle \log P_{\rm f} \rangle \over {\rm d} \Delta M_{\rm c} } \approx 2.3 \,\, \mbox{ and } \,\, { {\rm d} \langle \log L({\rm RR}) \rangle \over {\rm d} \Delta M_{\rm c}} \approx 3.1.
\end{equation} \noindent In order to produce a $\Delta M_{\rm bol} = 0.3\, \mbox{mag}$, $M_{\rm c}({\rm LMC})$ should thus be larger than $M_{\rm c}({\rm Galaxy})$ by \begin{equation} \Delta M_{\rm c} \approx +0.04 \, M_{\sun}. \end{equation} \noindent Equations (4) then show that this would imply an increase in the mean fundamentalized periods by \begin{equation} \Delta \langle \log P_{\rm f} \rangle \approx +0.09. \end{equation} \noindent This again appears to be ruled out by the data. Catelan's (1993) synthetic HB models also suggest that \begin{equation} { {\rm d} \langle \log L({\rm RR}) \rangle \over {\rm d} \log Z} \approx -0.07. \end{equation} \noindent In order to produce a $\Delta M_{\rm bol} = 0.3\, \mbox{mag}$, extrapolation of Eq. (7) toward very low metallicities suggests that the metal abundance in the LMC (for a given [Fe/H] ratio) should be smaller than in the Galaxy by \begin{equation} \Delta \log Z \approx -1.7. \end{equation} \noindent Irrespective of its impact upon RR Lyrae mean periods, such a change is clearly unrealistic. \subsection{Period shifts at fixed effective temperature} \noindent Another way to place constraints on the variations in second-parameter candidates is to analyze the relative positions of the cluster RR Lyraes on the period-effective temperature plane. Of course, the determination of temperatures for these stars is a rather problematic issue. Caputo \& De Santis (1992) have advanced a method whereby the ``mass-to-light ratio" of RRab Lyrae variables can be obtained in a reddening-independent way, from periods and blue amplitudes alone. From the period-mean density relation, in turn, this may be employed to derive RR Lyrae temperatures, and then to determine period shifts at fixed temperature with respect to the reference Galactic GC M3. Full details are given elsewhere (Catelan 1996), together with applications of the method to several Galactic globulars and samples of field stars. The $\log P - \log T_{\rm eff}$ diagrams thus obtained for the studied LMC GCs are displayed in Fig. 1. The M3 distribution (from Catelan 1996) is reproduced in the bottom panel. The period shifts were measured with respect to the lower envelope of the M3 distribution (cf. Catelan 1996), which is reproduced in all panels for the sake of clarity. From the definition of period shift, it follows that \begin{equation} \Delta \log P (T_{\rm eff}) = 0.84\, \Delta \log L (T_{\rm eff}) - 0.68\, \Delta \log M (T_{\rm eff}). \end{equation} \noindent Assuming, as a first approximation, the RR Lyrae masses at a given temperature in Galactic and LMC globulars to be the same, one finds that a shift in the RR Lyrae magnitudes by $0.3\, \mbox{mag}$ should actually imply \begin{equation} \delta \Delta \log P (T_{\rm eff}) \approx +0.10. \end{equation} \noindent Both Fig. 1 and Table 1 clearly show that such a shift is not allowed by the observational data. A null LMC -- Galaxy period shift could be produced if the RR Lyrae masses were higher, in the LMC, by $\delta \Delta \log M (T_{\rm eff}) \approx +0.15$ [cf. Eq. (9)]. For a mean mass $\langle M \rangle ^{\rm RR}_{\rm Galaxy} \simeq 0.73\, M_{\sun}$ (as suggested by the Catelan 1993 synthetic HB models for $Y_{\rm MS} = 0.20$, $Z = 4 \times 10^{-4}$, $\sigma_{M} = 0.02 \, M_{\sun}$ and not-too-blue HB types), this would actually demand RR Lyrae masses as high as $\langle M \rangle ^{\rm RR}_{\rm LMC} \simeq 1.03\, M_{\sun}$ in the LMC -- which is probably unrealistic.
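The arithmetic chain above is compact enough to verify directly. The following short Python script (illustrative only; the slopes are those quoted in Eqs.~(1), (4), (7) and (9), and the $0.73\, M_{\sun}$ mean mass is from the text) reproduces the quoted sensitivities:
\begin{verbatim}
# Second-parameter arithmetic; slope values as quoted in the text.
dM_bol = 0.3                  # required HB brightening, mag
dlogL  = dM_bol / 2.5         # = 0.12 dex, since M_bol ~ -2.5 log L

dY        = dlogL / 1.8       # Eq. (2):  ~ +0.07
dlogPf_Y  = 1.6 * dY          # Eq. (3):  ~ +0.11

dMc       = dlogL / 3.1       # Eq. (5):  ~ +0.04 Msun
dlogPf_Mc = 2.3 * dMc         # Eq. (6):  ~ +0.09

dlogZ = dlogL / (-0.07)       # Eq. (8):  ~ -1.7

dlogP_shift = 0.84 * dlogL    # Eq. (10): ~ +0.10 at fixed mass
dlogM  = 0.84 * dlogL / 0.68  # mass shift nulling Eq. (9): ~ +0.15
M_LMC  = 0.73 * 10**dlogM     # ~ 1.03 Msun -- "probably unrealistic"
\end{verbatim}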
\section{Conclusions} The present analysis of the observational data for RR Lyrae variables in RR Lyrae-rich LMC GCs suggests that the discrepancy in the distance moduli of the LMC that are inferred from Cepheid and RR Lyrae variables cannot be entirely ascribed to variations in second-parameter candidates. This does {\em not} mean that such variations are not present, but rather that they would not be sufficient to reconcile the discrepant distance moduli. A satisfactory explanation of the problem has yet to be found. \acknowledgements The author acknowledges critical readings of the manuscript by D. A. VandenBerg, H. J. Rocha-Pinto, and J. E. Horvath. Financial support by FAPESP is also acknowledged (grant 92/2747-8).
\section*{Introduction} Gamma-Pareto convolutions (GPC), the convolution of a gamma distribution with some type of Pareto distribution, are increasingly used for modelling diverse random processes like traffic patterns \cite{nadarajah2007convolution}, flood rates, fatigue life of aluminium, confused flour beetle populations \cite{alzaatreh2012gamma}, and extreme rainfall events \cite{hanum2015modeling}. Although there are multiple possible GPC models and different nomenclatures used to describe them, a natural classification would arise from Pareto distribution classification, types I through IV, and the Lomax distribution, a type II subtype, which is the classification scheme of reference \cite{kotz2004continuous} and the Mathematica computer language \cite{Mathematica}.\footnote{Wolfram Research, Inc., (2021) Mathematica, Version 12.3, Champaign, IL \\\indent https://reference.wolfram.com/language/ref/ParetoDistribution.html} Convolution was first introduced to pharmacokinetics in 1933 by Gehlen, who used the convolution of two exponential distributions, \begin{equation} \text{EDC}(b,\beta ;t)=b\, e^{-b\,x}* \beta e^{-\beta\,x}\,(t)\\ = \begin{cases} \begin{array}{ll} b\, \beta \frac{ e^{-\beta t}-e^{-b\, t}} {b-\beta }&b\neq \beta\\ b^2 t\, e^{-b\, t}&b=\beta \end{array} \Bigg\} & t\geq 0\\ \;\;0\hspace{9em}\}&t<0 \end{cases}\;, \end{equation} \noindent to describe plasma concentration-time data; the same equation had originally been developed in 1910 by Bateman to model radioactive decay \cite{bateman1910solution,gladtke1988history,gehlen1933wirkungsstarke}. Much later, in 2006, the Bateman equation was generalised as an exact gamma-gamma convolution (GDC) by Di Salvo \cite{di2006exact}. Ten years later, this was then applied to 90 min continuous recordings of radioactivity in human thyroid glands following injection of $^{99m}$Tc-MIBI \cite{Wesolowski2016GDC}.\footnote{740 MBq technetium-99m labeled hexakis-methoxy-isobutyl-isonitrile.} In 1919, Widmark identified integration of a monoexponential as a model for constant infusion \cite{widmark1919studies}. As shown subsequently by Wesolowski \textit{et al.} \cite{Wesolowski2016PLoS,Wesolowski_2020}, such integration from zero to $t$ to find a constant infusion model can be applied not just to exponential functions, but equally well to any area under the curve (\textit{AUC}) scaled density-function (pdf)\footnote{\label{note1}We retain the acronym pdf without a probability;\textit{ p}, but use $f(t)$ preferentially. Concentration models are the product of area-under-the-curve of concentration and density functions whose total area-under-the-curve is 1 (dimensionless dose fraction). This balances the classical mechanical units of Mass, Length, and Time, as follows, $C(t)=\textit{AUC}\, \times\,f(t) \;;\;\;\;\left[ \frac{M}{L^{3}}=\frac{M\;T}{L^{3}}\;\times \,\frac{1}{T}\right]\,.$} model. Recently, the disposition of metformin was described precisely using the type I GPC model, which, as it is asymptotically a Pareto function, has a power function tail \cite{Wesolowski_2020}. Using direct comparison rather than classification, power function tails were shown to be always heavier than exponential tails; see the Appendix Subsection entitled \textit{Relative tail heaviness} of reference \cite{Wesolowski_2020}. A power function tail, in turn, implies an underlying fractal structure, where fractal in this context signifies a scale invariant model of vascular arborisation \cite{west1999fourth}.
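As a concrete illustration, the exponential-exponential convolution of Eq.~(1) is straightforward to implement. A minimal Python sketch (function and variable names are ours, not from the cited works) that also handles the removable $b=\beta$ case is:
\begin{verbatim}
import numpy as np

def edc(b, beta, t):
    # Bateman-type curve: b*exp(-b x) convolved with beta*exp(-beta x),
    # evaluated at t; zero for t < 0, per Eq. (1).
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    p = t >= 0
    if np.isclose(b, beta):
        out[p] = b**2 * t[p] * np.exp(-b * t[p])
    else:
        out[p] = b * beta * (np.exp(-beta * t[p]) - np.exp(-b * t[p])) / (b - beta)
    return out
\end{verbatim}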
The GPC computer algorithm used in 2019 had long run times and was not accurate beyond 96 h \cite{Wesolowski_2020}. These problems were corrected in order to make predictions for multiple dosing over longer times \cite{Tucker2020,Tucker2020a}. Since computer implementation of new functions is highly specialised, not easily arrived at by induction, and yet indispensable for any practical application, documentation of a more practical type I GPC algorithm may facilitate its more widespread implementation. Accordingly, we now present a series acceleration computer implementation of a more generally applicable GPC type I function, with markedly shorter run times. \section*{Background} \subsection*{The gamma-Pareto convolution distribution function family} A classification for gamma-Pareto convolutions (GPC) is proposed that arises from the types of Pareto distributions \cite{kotz2004continuous}. These are types I through IV plus the Lomax distribution, a subtype of II. The Pareto type I distribution is \begin{equation}\label{eq:PD} \textnormal{PD}(t; \alpha, \beta)= \dfrac{\alpha}{t} \left(\dfrac{\beta}{t}\right) ^{\alpha } \theta(t-\beta)\;, \end{equation} \noindent where $\alpha$ is the shape parameter, $\beta$ is a scale parameter and $\theta(\cdot)$ is the unit step function such that $\theta(t-\beta)$ is the unit step function time-delayed by $\beta$, and is used to make a product that is non-zero only when $t> \beta$.\footnote{The unit step function, $\theta(x)$, is zero for $x<0$ and 1 for $x\geq 0$, such that $\theta(x)$ is continuous everywhere except at $x= 0$. When $x=t-\beta$ and $\beta>0$, then $\theta(t-\beta)$ is a unit step function shifted to later time (i.e., to the right) by $\beta$ units in the new coordinate system; $t$. The unit step function is faster for numerical computations than the Heaviside theta function, which latter is sometimes also symbolised as $\theta(x)$. The Heaviside theta is more mathematically useful when it is continuous everywhere such that its derivative and Laplace transform are defined.} A type II Pareto distribution can be written as \begin{equation}\text{PD}_{\text{II}}(t;\alpha,\beta,\mu)=\frac{\alpha }{\beta }\left(1+\frac{t-\mu }{\beta }\right)^{-\alpha -1}\theta(t-\mu)\;.\end{equation} \noindent Setting $\mu=0$, this becomes the Lomax distribution; $\text{PD}_{\text{Lomax}}(t;\alpha,\beta)=\frac{\alpha }{\beta }\big(1+\frac{t }{\beta }\big)^{-\alpha -1}\theta(t),$ which was used to derive a Lomax gamma-Pareto distribution \cite{nadarajah2007convolution}. The relevance of this is that the GPC type I and Lomax GPC derivations are similar. As yet, the type II (not Lomax) through type IV gamma-Pareto convolutions have not been published. These convolutions are likely to be infinite sums and may require series acceleration to be of practical use. By substitution and reduction of the number of parameters, there are closed form GPC-like expressions, types II through IV, that are different distributions \cite{alzaatreh2012gamma}. As a full set of solutions for the entire GPC function family has not been characterised, it is not known what additional applications there could be for the GPC family of functions. Unlike the Lomax GPC, the GPC type I does not start at $t=0$, but at $t=\beta$. For pharmacokinetic modelling, $\beta>0$ is a measure of the circulation time between injection of an intravenous bolus of drug ($t=0$), and its arrival at a peripheral venous sampling site ($t=\beta$).
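For reference, the Pareto type I density of Eq.~\eqref{eq:PD} and its Lomax subtype are equally simple to code; a minimal Python sketch (names ours) is:
\begin{verbatim}
import numpy as np

def pareto_I(t, alpha, beta):
    # Eq. (2): (alpha/t) (beta/t)^alpha, non-zero only for t > beta
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m = t > beta
    out[m] = (alpha / t[m]) * (beta / t[m])**alpha
    return out

def lomax(t, alpha, beta):
    # Pareto type II with mu = 0
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m = t >= 0
    out[m] = (alpha / beta) * (1 + t[m] / beta)**(-alpha - 1)
    return out
\end{verbatim}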
The four-parameter gamma Pareto (type I) convolution (GPC) density function was developed to model the disposition of metformin in dogs, which exhibited an unexpectedly heavy tail poorly described by an exponential decay \cite{Wesolowski_2020}. This heavy tail implies a prolonged buildup of the body burden of the drug \cite{tucker1981metformin,Tucker2020} that may require dose tapering on long-term use, especially in patients with renal impairment \cite{Tucker2020a}. \subsection*{The Gamma-Pareto type I convolution and related functions} \textbf{GPC type I:} To form a GPC type I model, the type I Pareto distribution, Eq.~\eqref{eq:PD}, is convolved with a gamma distribution, \begin{equation}\label{eq:GD} \text{GD}(t; a,b) = \,\dfrac{1}{t}\;\dfrac{e^{-b \, t}(b \, t)^{\,a} }{\Gamma (a)}\theta(t)\;, \end{equation} \noindent where $a$ is a dimensionless shape parameter, $b$, a rate per unit time, is the reciprocal of the scale parameter, and $\Gamma(\cdot)$ is the gamma function.\footnote{The gamma function, or generalised factorial, is $\Gamma(z)=\int_0^{\infty } \frac{t^{z-1}}{e^t} \, dt;\, \Re(z) >0$} This yields the GPC function, \begin{equation}\label{eq:GPC} \text{GPC}(t)=\theta (t-\beta)\;\frac{b^a\, \alpha\, \beta ^{\alpha } }{\Gamma(a)}t^{a-\alpha -1}\sum _{n=0}^{\infty } \frac{(-b\, t)^n }{n!}B_{1-\frac{\beta }{t}}\left(a+n,-\alpha \right)\;, \end{equation} \noindent where $B_z(\cdot,\cdot)$ is the incomplete beta function.\footnote{The incomplete beta function is $B_z(a,b)=\int_0^z t^{a-1} (1-t)^{b-1} \, dt;\,\Re(a)>0\land \Re(b)>0\land | z| <1$} This is a density function (a pdf, or more simply an $f$, with units per time; $t^{-1}$). Equation \eqref{eq:GPC} is from convolution following Maclaurin series expansion of $e^{-b\,t}$, i.e., it is analytic. An analytic function has any number of sequential multiple integrals and derivatives, as illustrated in the following equations. Compared to their prior expression \cite{Wesolowski_2020}, the equations that follow have been put in simpler terms. \newline \newline \noindent\textbf{GPC type I integral:} Equation \eqref{eq:CDF} is the cumulative density function (CDF) of the GPC, symbolised by $F$, the integral of the $f(t)$ density; $F(t)=\int_0^t f(\tau ) \, d\tau$, \begin{equation}\label{eq:CDF} \text{GPC}_{F}(t) =\theta (t-\beta)\;\frac{b^a \alpha\, \beta ^{\alpha }}{\Gamma(a)}t^{a-\alpha } \sum _{n=0}^{\infty} \frac{(-b\, t)^n}{(a+n) n!} B_{1-\frac{\beta }{t}}\left(1+a+n,-\alpha\right)\;. \end{equation} \noindent This equation, because it is a CDF, expresses the dimensionless fraction of a unit drug dose eliminated from the body as a function of time, and was used to calculate a prolonged retention of metformin in dogs and to explain its incomplete urinary recovery at 72 h following intravenous injection in humans \cite{Wesolowski_2020,tucker1981metformin,Tucker2020}. \newline \newline \textbf{GPC type I double integral:} Equation \eqref{eq:SCD} is the double integral of the density function, $f$, which is also the single integral of $F$, the CDF, and is sometimes called a "super-cumulative" distribution \cite{Avdis_2017}. It is symbolised by $\mathcal{F}$, i.e., $\mathcal{F}(t)=\int_0^t F(\tau )\, d\tau=\int_0^t \int_0^\tau f(x) \,d x\, d\tau $.
The GPC$_{\mathcal{F}}$ in least terms is \begin{equation}\label{eq:SCD} \text{GPC}_{\mathcal{F}}(t)=\theta (t-\beta)\;\frac{b^a \alpha\, \beta ^{\alpha }}{\Gamma(a)}t^{a-\alpha +1} \sum _{n=0}^{\infty } \frac{(-b\, t)^n\text{ }}{(a+n)(1+a+n) n!}B_{1-\frac{\beta }{t}}\left(2+a+n,-\alpha \right)\;. \end{equation} \noindent This equation (units $t$) was used to construct an intravenous bolus multidose loading regimen that maintains the same mean amount of metformin in the body during successive dose intervals \cite{Wesolowski_2020} and to predict metformin buildup during constant multidosing in humans both with normal renal function and with renal insufficiency \cite{Tucker2020a}. A further use of this equation is to predict the cumulative distribution function following a period of constant infusion given only its bolus intravenous-concentration fit function. \newline \newline \noindent\textbf{GPC type I derivative:} Equation \eqref{eq:dgpc} is the derivative of the GPC density, GPC$'$, or in general an $f'$, \begin{equation}\label{eq:dgpc} \text{GPC}'(t)=\theta (t-\beta)\;\frac{b^a \alpha\, \beta ^{\alpha } }{\Gamma(a)}t^{a-\alpha -2}\sum _{n=0}^{\infty } \frac{(a+n-1) (-b \,t)^n }{n!}B_{1-\frac{\beta}{t}}\left(a+n-1,-\alpha \right)\;. \end{equation} \noindent This equation (units $t^{-2}$) is useful for finding the peaks of the GPC function by searching for when it equals zero, and for calculating disposition half-life from its general definition, \[t_{1/2};f(t) \myeq -\ln(2)\mfrac{f(t)}{f'(t)}\;,\] which is Eq.~(6) of reference \cite{Wesolowski_2020}. Note that there is a pattern in the sequential integrals and derivatives that illustrates the analyticity of the GPC function. The integrals and derivatives above follow directly from integration or differentiation of the GPC formula, for which the following identity from integration by parts\footnote{The parts are $U(x)=\frac{1}{\text{A}}\big(\frac{1}{1-x}\big)^{-\text{A}} (1-x)^\text{B}$, and $V(x)=\big(\frac{x}{1-x}\big)^\text{A}$. The identity is listed elsewhere: Wolfram Research Inc. (2021), Champaign, IL. http://functions.wolfram.com/06.19.17.0001.01.} $$B_z(\text{A}+1,\text{B})=\frac{\text{A}}{\text{A}+\text{B}}B_z(\text{A},\text{B})-\frac{z^\text{A} (1-z)^\text{B}}{\text{A}+\text{B}}\;\;,$$ is useful for simplifying the results. \section*{Methods, algorithms for GPC type I series acceleration and their computation} \subsection*{Data sources and regression methods}\label{data} The source data for regression analysis and subsequent algorithm testing consists of seven intravenous bolus metformin studies performed in healthy mixed-breed dogs \cite{johnston2017pharmacokinetics}. The 19 to 22 samples per case drawn between 20 min and 72 h postinjection are listed as open data in \textit{Supplementary material 1 (SLSX 49kb)} in \cite{Wesolowski_2020}.\footnote{https://link.springer.com/article/10.1007/s10928-019-09666-z\#Sec220} The regression target was the so-called $1/C^2$ weighted ordinary least squares (OLS) method, implemented as minimisation of the proportional norm, which is also the relative root mean square (rrms) error, as per the \textit{Concentration data and fitting it} Appendix Subsection of \cite{Wesolowski_2020}.\footnote{https://link.springer.com/article/10.1007/s10928-019-09666-z\#appendices} The loss function chosen to be minimised agreed with the error type of the measurement system's assay calibration curve.
Both the metformin assay (5.2\% rrms) \cite{michel2015}, and the GPC residuals (8.6\% rrms) exhibited proportional error. The reuse of assay loss functions for regression loss functions is systemically consistent and appears in these references \cite{Wesolowski_2020, Wesolowski2016GDC}. The regression method used was Nelder-Mead \textit{Constrained Global Numerical Minimisation} as implemented in Mathematica, a global search technique \cite{Mathematica}.\footnote{https://reference.wolfram.com/language/tutorial/ConstrainedOptimizationGlobalNumerical.html\#252245038} For 20 significant figure results for all parameters, the Mathematica routine NMinimize was used with the options: PrecisionGoal $\to$ 30, \mbox{AccuracyGoal $\to$ 30,} \mbox{WorkingPrecision $\to$ 65,} \mbox{MaxIterations $\to$ 20010,} Method $\to$ \{"NelderMead", "PostProcess" $\to$ False\}. Post processing is disallowed because it launches a constrained convex gradient solution refinement protocol, the interior point method, which does not converge. The use of parameter starting value ranges close to the solution helps speed up convergence. Note that regression can start with 65 significant figure accuracy but finish with less than half of that for some parameter values due to error propagation from the fit function itself and/or the regression process. In order to calculate the confidence intervals (CI) of the parameters, model-based bootstrap \cite{bollen1992bootstrapping} was performed, as follows. Care was taken to verify the normality and homoscedasticity of the fit residuals---see \cite{Wesolowski_2020}---as suggested by \cite{zhang2016bootstrapping}. Those conditions allow for the residuals to be randomly sampled with replacement, then added to the model at the sample-times to create synthetic data having the same properties as the original data, but which have altered regression parameter solutions. The bootstrap parameter values so obtained can provide more information than gradient method parameter CV's, as the latter only provide single case-wise estimates, which are not as statistically useful as case-wise distributed parameter information \cite{Wesolowski_2020}. Table \ref{params} shows both case-wise and population-wise coefficients of variation from an early version of a GPC algorithm. The table was amalgamated from Tables 1, 3, and 12 of \cite{Wesolowski_2020}, representing 24 h of 8-core parallel processing of 42 time-sample serum curves. There is thus an obvious need for a faster algorithm for regression analysis.
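The model-based bootstrap loop just described can be sketched in Python as follows; \verb|fit_model| is a hypothetical placeholder for the NMinimize-based regression, not an actual interface:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def model_based_bootstrap(t, y, fit_model, n_boot=40):
    # fit_model(t, y) -> (params, yhat): fitted parameters and model
    # predictions at the sample times.  Residuals (verified ~normal and
    # homoscedastic) are resampled with replacement and added back to
    # the fitted curve to make synthetic data sets, each then refit.
    params0, yhat = fit_model(t, y)
    resid = y - yhat
    boot = []
    for _ in range(n_boot):
        y_star = yhat + rng.choice(resid, size=resid.size, replace=True)
        p_star, _ = fit_model(t, y_star)
        boot.append(p_star)
    return params0, np.array(boot)
\end{verbatim}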
\begin{table}[ht] \centering \captionsetup{justification=justified,margin=0cm} \caption {Shown are parameters from gamma densities (GD), Pareto densities (PD) and both from Gamma-Pareto convolution (GPC) fitting of concentration data for 7 dogs with model-based bootstrap root mean square case-wise (Case\%, $n=5$) and population-wise (Pop.\%, $n=35$) coefficients of variation, and fit error.} \label{params} \begin{tabularx}{\textwidth}{lccccccccccccc} \hline \small{Functions}\!\!&\multicolumn{4}{c}{GD} &\multicolumn{4}{c}{PD} &\multicolumn{5}{c}{GPC}\\ \hhline{~----~~~~-----} \small{Parameters}\!\!& $a$ && $b$ && $\alpha $ && $\beta $ && \emph{AUC} && $\mathit{CL}$ &&Fit error\\ \hhline{-~~~~----~~~~~} Units$^\text{ a}$ & none &\%&$\frac{1}{\text{h}}$&\%& none &\%&s &\%& $\frac{\text{mg}\cdot \text{h}}{\text{L}}$ &\%& \hspace*{-.4em} $\frac{\text{ml}}{\text{min}\cdot \text{kg}}$ &\% &\%\\[.2em] \Xhline{2\arrayrulewidth} Dog 1 & 0.3493 &8.09& 0.7318 &4.84& 0.2644 &5.13& 25 &---& 31.16 &6.22& 9.8 &6.42 &8.7\\ Dog 2 & 0.8112 &10.5& 0.9993 &6.89& 0.1365 &4.90& 25 &---& 28.18 &1.09& 11.5 &1.09&6.3 \\ Dog 3 & 0.6689 &8.67& 0.9107 &5.95& 0.2010 &3.81& 25 &---& 12.15 &3.77& 26.7 &3.72&5.9\\ Dog 4 & 0.6092 &22.3& 0.8062 &15.9& 0.1726 &9.80& 25 &---& 16.73 &5.65& 19.4 &5.89&13.8\\ Dog 5 & 0.6435 &20.6& 1.1035 &11.4& 0.1199 &6.11& 25 &---& 26.21 &5.37& 12.4 &5.14&9.5\\ Dog 6 & 0.5194 &7.42& 0.6137 &4.93& 0.1929 &5.85& 30 &---& 28.43 &2.15& 11.4 &2.10&6.1\\ Dog 7 & 0.7629 &17.7& 1.0518 &20.3& 0.1571 &5.82& 30 &---& 22.10 &2.31& 14.7 &2.34&10.0\\ Case\% & &14.8& &11.5& &6.17& & --- & &4.22& &4.26&8.2\\ Pop.\% && 29.7 && 25.0 && 25.9 && --- && 28.8 && 39.5 &7.5$^{\text{ b}}$\\ \hline \end{tabularx} \begin{tabularx}{1\textwidth}{X} $^\text{a }$Units row: \emph{None} means \emph{dimensionless}. As $\beta$ was constrained to be within 25 to 30 s, its variability for the 5 realisations per case is not meaningful.\\ $^\text{b }$Geometric mean (GM) was used to calculate group error of fit because the 35 model-based bootstrap fit errors were log-normally distributed, for which the GM was 7.5\%, not significantly different from the original data GM error of 8.2\%. The original data values are case-wise listed for fit error, and for the parameter values. \end{tabularx} \end{table} \subsection*{GPC type I primary definition: The short-\textit{t} algorithm} \noindent The primary definition of a gamma-Pareto type I convolution, Eq.~\eqref{eq:GPC}, is \begin{equation}\label{eq:GPC2} \begin{split} \textnormal{GPC}\arraycolsep=1.2pt\def\arraystretch{.7} \left(\begin{array}{cc} a&b\\ \alpha&\beta \end{array} \Big|\,t\right)&=\textnormal{GD}( a,b;x)\ast \textnormal{PD}(\alpha , \beta;x) \;(t)\\ &=\left[\dfrac{(b \, x)^{\,a}\,e^{-b \, x} }{x\,\Gamma (a)}\theta(x)\right]* \left[\dfrac{\alpha}{x} \left(\dfrac{\beta}{x}\right) ^{\alpha } \theta(x-\beta)\right]\;(t)\\ &=\theta (t-\beta)\; t^{a-\alpha-1} \frac{\alpha\, b^a\beta^\alpha}{\Gamma (a)}\sum _{n=0}^{\infty } \frac{(-b\,t)^n}{n!} B_{1-\frac{\beta}{t}}(a+n,-\alpha) \end{split}\;\;. \end{equation} \noindent This contains alternating terms in the summation such that the sum is rapidly convergent for $t$ not much greater than its lower limit, $\beta$. However, for sufficiently large values of $t$, the individual terms of the summation both alternate in sign and become extremely large in \textit{magnitude} (i.e., absolute value) before absolute series convergence.
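In Python with the arbitrary-precision library mpmath, a direct (unaccelerated) rendering of Eq.~\eqref{eq:GPC2} might read as follows; the stopping rule is an illustrative heuristic of ours, not the authors' Mathematica implementation:
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 65  # target significant figures, as in the text

def gpc_short_t(t, a, b, alpha, beta):
    # Alternating series of the primary definition; converges quickly
    # for t near beta, but for large b*t its terms first grow hugely.
    t, a, b, alpha, beta = map(mp.mpf, (t, a, b, alpha, beta))
    if t <= beta:
        return mp.mpf(0)
    z = 1 - beta / t
    total, n = mp.mpf(0), 0
    while True:
        term = (-b * t)**n / mp.factorial(n) * mp.betainc(a + n, -alpha, 0, z)
        total += term
        # stop once past the (b t)^n / n! peak and below target precision
        if n > 2 * b * t and abs(term) < mp.mpf(10)**(-mp.mp.dps):
            break
        n += 1
    return alpha * b**a * beta**alpha / mp.gamma(a) * t**(a - alpha - 1) * total
\end{verbatim}
\noindent As written, this sketch does not yet pad the working precision against the cancellation of large alternating terms; that refinement is described in the combined algorithm below.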
For absolute convergence of an alternating series, the infinite sum of the absolute values is bounded above, which permits rearrangement of the order of summation. This, and the ratio test \cite{laugwitz1994riemann} for it, are shown in the \nameref{short} Appendix Subsection. Thus, the order of infinite summation can be changed to obtain shorter run times when $t\gg\beta$, and the algorithm is accelerated through an algebraic rewrite of Eq.~\eqref{eq:GPC2} as Eq.~\eqref{eq:accelgpc} below. Alternating infinite series with large magnitude terms occurring before absolute convergence are common; for example, the infinite-series primary definition of $\sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\frac{x^9}{9!}-\cdots$ has that same property for larger magnitude $x$-values. Acceleration for the sine function could include converting the $x$-values to be principal sine values ($-\frac{\pi}{2}$ to $\frac{\pi}{2}$), and adjusting the output accordingly.\footnote{$\sin(12)$ executes to 65 decimal places in 19 microseconds in the Mathematica language on an 2.3 GHz 8-Core Intel Core i9 processor. Current acceleration algorithms for routine functions are many generations beyond what is outlined here.} For the GPC$(t)$ function a similar result, i.e., adjusting the algorithmic behaviour to be accelerated for long-$t$ values, can be obtained as follows. \subsection*{GPC type I secondary definition: The long-\textit{t} algorithm} \newtheorem{1em}{Theorem} \begin{1em}\label{longT}The long-$t$ algorithm is \begin{equation}\label{eq:accelgpc} \begin{split} \textnormal{GPC}\arraycolsep=1.2pt\def\arraystretch{.7} \left(\begin{array}{cc} a&b\\ \alpha&\beta \end{array} \Big|\,t\right)=&-\theta (t-\beta) \frac{ \alpha b^a }{\Gamma (a)}t^{a-1}\sum _{k=1}^{\infty }\left(\mfrac{\beta }{t}\right)^k \frac{(1-a)_k}{k! (k-\alpha )} \, _1F_1(a,a-k;-b t)\\ &+\theta (t-\beta )\left[ \frac{b^a }{\Gamma (a)} e^{-b t} t^{a-1}-\pi \csc (\pi \alpha )\frac{b^a \beta ^{\alpha } }{\Gamma (\alpha )} t^{a-\alpha -1} \, _1\tilde{F}_1(a,a-\alpha ;-b t)\right] \end{split}\;\;\;. \end{equation}\end{1em} \noindent \begin{proof} This is shown by substitution of the identities,\footnote{$\,$http://functions.wolfram.com/06.19.17.0008.01 and http://functions.wolfram.com/06.18.02.0001.01} $B_z(A,B)=B(A,B)-B_{1-z}(B,A)$, and $B(A,B)=\frac{\Gamma (A) \Gamma (B)}{\Gamma (A+B)}$, into the incomplete beta function of Eq.~\eqref{eq:GPC2} above, which yields \begin{equation}\label{eq:3} B_{1-\frac{\beta }{t}}(a+n,-\alpha )=\frac{\Gamma (-\alpha ) \Gamma (a+n)}{\Gamma (a+n-\alpha )}-B_{\frac{\beta }{t}}(-\alpha ,a+n)\;. \end{equation} \noindent Substituting this into the right hand side of Eq.~\eqref{eq:GPC2} yields, \begin{equation}\label{eq:4} \theta (t-\beta ) \frac{\alpha\, b^a \beta ^{\alpha } }{\Gamma (a)}t^{a-\alpha -1} \sum _{n=0}^{\infty } \left[\frac{(-b t)^n} {n!}\frac{(\Gamma (-\alpha ) \Gamma (a+n))} {\Gamma (a+n-\alpha )}-\frac{(-b\, t)^n}{n!}B_{\frac{\beta }{t}}(-\alpha ,a+n)\right]\;\;, \end{equation} \noindent the left hand summand of which simplifies to a GPC asymptote for long times, $t$, $$ \theta (t-\beta )\alpha\, b^a \beta^\alpha \Gamma (-\alpha) t^{a-\alpha-1} \, _1\tilde{F}_1(a,a-\alpha;-b t)\;, $$ \noindent where $ \, _1\tilde{F}_1(\cdot,\cdot ;z)$ is the regularised confluent hypergeometric function.\footnote{where $\, _1\tilde{F}_1(a;b;z)=\, _1F_1(a;b;z) /\Gamma (b)$, where $\, _1F_1(a;b;z)=\sum _{k=0}^{\infty } z^k (a)_k/[k!
(b)_k]$ is the unregularised version, and where $ (a)_k=\Gamma (a+k)/\Gamma (a)$ is the Pochhammer symbol, also called the rising factorial.} The above formula, as\footnote{http://functions.wolfram.com/06.05.16.0001.01 and http://functions.wolfram.com/06.05.16.0002.01} $-\pi \csc (\pi \alpha )=\Gamma (-\alpha ) \Gamma (\alpha +1)=\alpha \Gamma (-\alpha ) \Gamma (\alpha )$, can be written alternatively as \begin{equation}\label{eq:asy}-\theta (t-\beta ) \pi \csc (\pi\, \alpha )\frac{b^a \beta ^{\alpha } }{\Gamma (\alpha )} t^{a-\alpha -1} \, _1\tilde{F}_1(a,a-\alpha ;-b t)\;, \end{equation} \noindent which obviates having to use a $\Re[\Gamma(-\alpha)]$ computer command to truncate a zero magnitude imaginary machine number carry, e.g., $\Re(x+0\times i)=x$, such that Eq.~\eqref{eq:GPC2} can be rewritten as \begin{equation}\label{eq:proved} \begin{split} \textnormal{GPC}\arraycolsep=1.2pt\def\arraystretch{.7} \left(\begin{array}{cc} a&b\\ \alpha&\beta \end{array} \Big|\,t\right) =&-\theta (t-\beta) \frac{\alpha\, b^a \beta^\alpha}{\Gamma (a)}t^{a-\alpha-1} \sum _{n=0}^{\infty } \frac{(- b\,t)^{n} } {n!} B_{\frac{\beta}{t}}(-\alpha,a+n)\\ &-\theta (t-\beta ) \pi \csc (\pi\, \alpha )\frac{b^a \beta ^{\alpha } }{\Gamma (\alpha )} t^{a-\alpha -1} \, _1\tilde{F}_1(a,a-\alpha ;-b\, t) \end{split}\;\;\;, \end{equation} \noindent where $\alpha\neq0,1,2,3,\dots$, which is Eq.~(25) of the first type I GPC publication \cite{Wesolowski_2020}. Note that not only is the summation of the above absolutely convergent, but as the second line above is an asymptote for $t\to \infty$ of the GPC function \cite{Wesolowski_2020}, the summation converges to zero as $t\to \infty$ relatively more rapidly than the asymptote. The summation terms, $$-\frac{\alpha\, b^a \beta^\alpha}{\Gamma (a)}t^{a-\alpha-1} \sum _{n=0}^{\infty } \frac{(- b\,t)^{n} } {n!} B_{\frac{\beta}{t}}(-\alpha,a+n)\;,$$ \noindent are rearranged for acceleration at long times using the infinite series definition of the incomplete beta function,\footnote{$\,$http://functions.wolfram.com/06.19.06.0002.01} \begin{equation}\label{eq:betaidentity} B_{\frac{\beta }{t}}(-\alpha ,a+n)=\left(\frac{\beta }{t}\right)^{-\alpha } \sum _{k=0}^{\infty } \frac{\left(\frac{\beta }{t}\right)^k (1-a-n)_k}{k! (k-\alpha )}\text{ for }\left| \frac{\beta }{t}\right| <1\text{ and } \alpha\neq0,1,2,3,\dots\;, \end{equation} \noindent by substituting it into the summation, and simplifying to yield, \begin{equation}\label{eq:re1} -\frac{\alpha\, b^a }{\Gamma (a)}t^{a -1}\sum _{n=0}^{\infty } \frac{(-b\, t)^n }{n!}\sum _{k=0}^{\infty } \frac{\left(\frac{\beta }{t}\right)^k (1-a-n)_k}{k! (k-\alpha )}\;. \end{equation} \noindent Given absolute convergence (\nameref{short} Appendix Subsection), the order of infinite summation can be changed with impunity by distributing the outer sum over the inner sum, and factoring, as follows, \begin{equation*}-\frac{\alpha\, b^a }{\Gamma (a)}t^{a -1}\sum _{k=0}^{\infty } \sum _{n=0}^{\infty } \frac{(-b\, t)^n }{n!}\frac{\left(\frac{\beta }{t}\right)^k (1-a-n)_k}{k! (k-\alpha )}\;,\end{equation*} \begin{equation}\label{eq:re2}-\frac{ \alpha\, b^a }{\Gamma (a)}t^{a-1}\sum _{k=0}^{\infty } \frac{\left(\frac{\beta }{t}\right)^k}{k!
(k-\alpha )}\sum _{n=0}^{\infty } \frac{(-b\, t)^n (1-a-n)_k}{n!}\;.\end{equation} \noindent Fortunately, the inner sum in the above formula simplifies to a closed form, allowing it to be rewritten as \begin{equation}\label{eq:few} -\frac{ \alpha\, b^a }{\Gamma (a)}t^{a-1}\sum _{k=0}^{\infty } \frac{\left(\frac{\beta }{t}\right)^k}{k! (k-\alpha )}(1-a)_k \, _1F_1(a,a-k;-b\, t)\;. \end{equation} \noindent The $k=0$ term of that sum simplifies to be the gamma distribution function part of the GPC convolution. Splitting off that term and adjusting the lower summation index from $k=0$ to $k=1$ yields, \begin{equation}\label{eq:few2} \frac{b^a }{\Gamma (a)} e^{-b\, t} t^{a-1} -\frac{ \alpha\, b^a }{\Gamma (a)}t^{a-1}\sum _{k=1}^{\infty } \frac{\left(\frac{\beta }{t}\right)^k}{k! (k-\alpha )}(1-a)_k \, _1F_1(a,a-k;-b\, t)\;. \end{equation} Next, the quickly convergent sum term, Eq.~\eqref{eq:few2}, is added to the gamma distribution plus asymptotic formula Eq.~\eqref{eq:asy} to create a series accelerated algorithm rewrite of Eq.~\eqref{eq:GPC} for long $t$-values, \[\begin{split} \textnormal{GPC}\arraycolsep=1.2pt\def\arraystretch{.7} \left(\begin{array}{cc} a&b\\ \alpha&\beta \end{array} \Big|\,t\right)=&-\theta (t-\beta) \frac{ \alpha\, b^a }{\Gamma (a)}t^{a-1}\sum _{k=1}^{\infty }\left(\mfrac{\beta }{t}\right)^k \frac{(1-a)_k}{k! (k-\alpha )} \, _1F_1(a,a-k;-b t)\\ &+\theta (t-\beta )\left[ \frac{b^a }{\Gamma (a)} e^{-b\, t} t^{a-1}-\pi \csc (\pi \alpha )\frac{b^a \beta ^{\alpha } }{\Gamma (\alpha )} t^{a-\alpha -1} \, _1\tilde{F}_1(a,a-\alpha ;-b t)\right] \end{split}\;\;\;.\] \noindent This is identically Eq.~\eqref{eq:accelgpc}, which completes the proof of the long-$t$ theorem. \end{proof} The second line of the above equation is an asymptote of the GPC function. The above equation's first line, when written as a list of terms to be summed, has all negative elements when $k>\alpha$, which was the case for metformin \cite{Wesolowski_2020}. If $k<\alpha$ for the first few $k$, then the simplified summation terms are initially positive until $k>\alpha$, but in any case the magnitude of those terms is strictly monotonically decreasing, such that increasing precision to sum those terms is unnecessary. The confluent hypergeometric functions in those terms and their effects on convergence are presented in detail in the \nameref{long} Appendix Subsection, which shows that the absolute value of the ratio of the $(k+1)$th to $k$th terms is approximately $\frac{\beta}{k\,t}$, where the $k$ in the denominator ensures that the absolute values of the simplified terms of the summand for the above formula are monotonically decreasing, and that each $(k+1)^{\text{st}}$ term is many times closer to zero than the $k^{\text{th}}$ term. It is therefore unnecessary to test for convergence using the sum to infinity of all the remainder terms; in practice, it is sufficient to test the absolute value of the last term and to stop the summation when that magnitude is less than the desired precision (e.g., $<10^{-65}$).
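In the same mpmath setting, a sketch of Eq.~\eqref{eq:accelgpc} (again with an illustrative last-term stopping rule, not the authors' code) shows how few moving parts the long-$t$ algorithm has:
\begin{verbatim}
import mpmath as mp

def gpc_long_t(t, a, b, alpha, beta, digits=65):
    # Gamma part plus asymptote, then the series whose terms shrink by
    # roughly beta/(k t) per step; valid for t > beta, intended t >= 4 beta.
    t, a, b, alpha, beta = map(mp.mpf, (t, a, b, alpha, beta))
    if t <= beta:
        return mp.mpf(0)
    pre = b**a / mp.gamma(a)
    total = pre * mp.exp(-b * t) * t**(a - 1) \
        - mp.pi * mp.csc(mp.pi * alpha) * (b**a * beta**alpha / mp.gamma(alpha)) \
        * t**(a - alpha - 1) * mp.hyp1f1(a, a - alpha, -b * t) / mp.gamma(a - alpha)
    k = 1
    while True:
        term = -alpha * pre * t**(a - 1) * (beta / t)**k \
            * mp.rf(1 - a, k) / (mp.factorial(k) * (k - alpha)) \
            * mp.hyp1f1(a, a - k, -b * t)
        total += term
        if abs(term) < mp.mpf(10)**(-digits):  # last-term magnitude test
            break
        k += 1
    return total
\end{verbatim}
\noindent The division by $\Gamma(a-\alpha)$ converts mpmath's unregularised $\, _1F_1$ into the regularised $\, _1\tilde{F}_1$ of Eq.~\eqref{eq:accelgpc}.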
\subsection*{Other long-\textit{t} functions; the integrals and derivative} \textbf{GPC type I long-\textit{t} integral:} The derivation of a similarly accelerated series for $t\geq4\,\beta$ of the CDF of GPC, i.e., its $0\text{ to } t$ integral, GPC$_F$, follows from its primary definition, Eq.~\eqref{eq:CDF}, using the same procedure as Eqs.~\eqref{eq:GPC2} to \eqref{eq:accelgpc}, leading to, \begin{equation}\label{eq:fastgpc} \begin{split} \textnormal{GPC}_F\arraycolsep=1.2pt\def\arraystretch{.7} \left(\begin{array}{cc} a&b\\ \alpha&\beta \end{array} \Big|\,t\right)=&-\theta (t-\beta) \frac{ \alpha\, b^a }{\Gamma (1+a)}t^{a}\sum _{k=1}^{\infty }\left(\mfrac{\beta }{t}\right)^k \frac{(-a)_k}{k! (k-\alpha )} \, _1F_1(a,a-k+1;-b\, t)\\ &+\theta (t-\beta ) \left[1-Q(a,b\,t)-\pi \csc (\pi\, \alpha )\frac{ b^a \beta ^{\alpha } }{\Gamma (\alpha )} t^{a-\alpha}\, _1\tilde{F}_1(a,a-\alpha +1;-b\, t)\right] \end{split}\;\;\;, \end{equation} \noindent where $Q(a,b\,t)=\frac{\Gamma(a,b\,t)}{\Gamma(a)}$ is the regularised upper incomplete gamma function, and is the complementary cumulative density function (CCDF$=1-$CDF) of the gamma distribution.\footnote{CCDF is sometimes loosely referred to as a survival function, $S(t)$.} Note that GPC$_F$ is a CDF, such that the upper limit of Eq.~\eqref{eq:fastgpc} as $t$ increases is 1, or 100\% of the initial dose eliminated from the body. \newline \newline \textbf{GPC type I long-\textit{t} double integral:} Similarly, the super cumulative distribution, i.e., the integral from $\tau=0\text{ to }t$ of the CDF, is \begin{equation} \begin{split} \textnormal{GPC}_\mathcal{F}&\arraycolsep=1.2pt\def\arraystretch{.7} \left(\begin{array}{cc} a&b\\ \alpha&\beta \end{array} \Big|\,t\right)=-\theta (t-\beta)\frac{\alpha\, b^a t^{a+1} }{\Gamma (a+1)}\sum _{k=2}^{\infty}\left(\frac{\beta }{t}\right)^k \frac{(-a)_k }{k! (k-\alpha )(a-k+1)} \, _1F_1(a,a-k+2;-b \,t)\\ &+\theta (t-\beta ) \bigg\{\frac{t\, e^{-b \,t} (b \,t)^a}{\Gamma (a+1)}-\frac{\alpha\, \beta }{\alpha -1}[1-Q(a,b \,t)]+\left(t-\frac{a}{b}\right) [1-Q(a+1,b \,t)]\\ &\hspace{5em}-\pi\, \csc (\pi\, \alpha )\frac{b^a \beta^\alpha }{\Gamma (\alpha )}\,t^{a-\alpha+1 }\, _1\tilde{F}_1(a,a-\alpha +2;-b \,t)\bigg\} \end{split}\;\;\;. \end{equation} \noindent Note that the sum term is now indexed from $k=2$, for which each simplified summation element has a negative value when $k>\alpha$, and a multiplied out positive first term when $\alpha<2$. \newline \newline \textbf{GPC type I long-\textit{t} derivative:} The GPC derivative's algorithm for $t>4\beta$, i.e., long-$t$, is \begin{equation} \begin{split} \textnormal{GPC}\,'&\arraycolsep=1.2pt\def\arraystretch{.7} \left(\begin{array}{cc} a&b\\ \alpha&\beta \end{array} \Big|\,t\right)=\theta (t-\beta)b^a t^{a-2}\Bigg\{\frac{ a-b\,t-1}{\Gamma (a)}e^{-b\,t}\\ &+ \frac{\pi \alpha \csc (\pi\, \alpha ) \left(\frac{\beta }{t}\right)^{\alpha } }{\Gamma (\alpha +1)}\left[(\alpha +1) \, _1\tilde{F}_1(a;a-\alpha ;-b \,t)-a \, _1\tilde{F}_1(a+1;a-\alpha ;-b\, t)\right]\\ &+\frac{\alpha }{\Gamma (a)} \sum _{k=1}^{\infty } \frac{(1-a)_k \left(\frac{\beta }{t}\right)^k }{k! (k-\alpha )}\left[b\, t \, _1F_1(a;a-k;-b\, t)-(a-k-1) \, _1F_1(a-1;a-k-1;-b\, t)\right] \Bigg\} \end{split}\;. \end{equation} \subsection*{The combined short- and long-\textit{t} algorithm for GPC series acceleration} There are now two algorithms, an algorithm that converges quickly only for short $t$-values, and another that converges quickly only when $t$-values are long.
This section describes how the algorithms are combined to produce a new accelerated algorithm for any value of $t$. A full set of functions for the derivative and integrals of the GPC algorithm follows the same pattern as the \nameref{alg} Appendix Subsection. The two algorithms are combined by choosing $t=4\,\beta$ as the floor (least) value for use of the long-$t$ algorithm, which makes the next term at worst approximately 1/4 of the current term. Given a next term fraction of $\frac{\beta}{k\,t}$ times the current term, the $t=4\,\beta$ floor value is not critical; the trick is to avoid second to first term ratios that approach 1 as $t\to\beta$, for which the short-$t$ algorithm has fewer terms and converges faster. See the \nameref{choose} Appendix Subsection for further information. \begin{figure}[ht] \centering \begin{tikzpicture}[node distance=1.5cm] \node (start) [beginend, xshift=2cm] {Start}; \node (in1) [io, right of=start, xshift=1cm] {Input $t;a,b,\alpha,\beta$}; \node (dec1) [decision, right of=in1, xshift=1.4cm] {$t\leq\beta$ ?}; \node (dec2) [decision, right of=dec1, xshift=1.8cm] {$t<4\,\beta$ ?}; \node (pro2b) [process, below of=dec1, yshift=-.3cm] {GPC$(t)=0$}; \node (pre1) [draw,rectangle split, rectangle split horizontal,rectangle split parts=3,minimum height=1cm, right of =dec2, xshift=1.8cm] {\nodepart{two}\shortstack{Long-$t$\\algorithm}}; \node (pre2) [draw,rectangle split, rectangle split horizontal,rectangle split parts=3,minimum height=1cm, below of =dec2,yshift=-.3cm] {\nodepart{two}\shortstack{Short-$t$\\algorithm}}; \node (out1) [io, below of=pre2] {Output GPC$(t)$}; \node (stop) [beginend, below of=out1] {Stop}; \draw [arrow] (start) -- (in1); \draw [arrow] (in1) -- (dec1); \draw [arrow] (dec1) -- node[anchor=south] {no} (dec2); \draw [arrow] (dec1) -- node[anchor=east] {yes} (pro2b); \draw [arrow] (dec2) -- node[anchor=south] {no} (pre1); \draw [arrow] (dec2) -- node[anchor=east] {yes} (pre2); \draw [arrow] (pre2) -- (out1); \draw [arrow] (pre1.south) .. controls +(down:2.8cm) .. (out1.east); \draw [arrow] (pro2b.south) .. controls +(down:1.cm) .. (out1.west); \draw [arrow] (out1) -- (stop); \end{tikzpicture} \caption{Standard flow chart for the \nameref{alg} Appendix Subsection. The predefined short- and long-$t$ routines are also described in the text.} \label{tikzchart} \end{figure} The program uses so-called infinite magnitude numbers such that numbers like $\pm10^{\pm100000}$ can be used without overflow or underflow (code: \$MaxExtraPrecision = $\infty$). However, there is another concern: precision. Machine precision was 53 bits, or approximately 16 significant figures. When $10^{-100}$ and 1 are added, one has to have a precision of 100 significant figures to avoid truncation. For the short-$t$ algorithm the extended precision needed is precalculated using machine precision of large numbers, which are stored as simplified terms, and are searched to find the largest magnitude number (code: Ordering[storage,$-1$][[1]]$-1$). It is then that number as a rounded base 10 exponent (code: Round[Log10[Abs[outmax]]]) plus 65 significant figures that is used as the required precision of the computation. The terms of the summand are then recalculated to that high precision, then summed, such that the result has approximately 65 significant figures remaining even though the calculation itself may have needed a thousand or more significant figures to yield that result.
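Combining the two branches per the flow chart, with the two-pass precision padding just described, might be sketched as follows (heuristics ours; \verb|gpc_short_t| and \verb|gpc_long_t| are the sketches given earlier, assumed to be in scope):
\begin{verbatim}
import mpmath as mp

def gpc(t, a, b, alpha, beta, digits=65):
    # assumes gpc_short_t / gpc_long_t from the sketches above are in scope
    if t <= beta:
        return mp.mpf(0)
    if t >= 4 * beta:                      # long-t terms only shrink
        with mp.workdps(digits + 5):
            return gpc_long_t(t, a, b, alpha, beta, digits)
    # short-t branch, pass 1: find the largest |summand| cheaply
    with mp.workdps(16):
        big = max(abs((-mp.mpf(b) * t)**n / mp.factorial(n)
                      * mp.betainc(a + n, -alpha, 0, 1 - mp.mpf(beta) / t))
                  for n in range(int(2 * b * t) + 10))
        pad = max(0, int(mp.ceil(mp.log(big + 1, 10))))
    # pass 2: padded arithmetic so cancellation still leaves ~digits figures
    with mp.workdps(digits + pad):
        return gpc_short_t(t, a, b, alpha, beta)
\end{verbatim}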
The same approach could be used to calculate $\sin(x)$ from its infinite series definition. As mentioned above, in practice that is not used, and instead the equivalent principal sine values of $x$ are computed. For the GPC$(t)$ computation, one can invert the range-related extra precision problem by reordering the series to make it increasingly less demanding to calculate long-$t$ values by direct application of Eq.~\eqref{eq:accelgpc}, and that is precisely what the long-$t$ GPC type I algorithm does. The value $t=4\beta$ is used to transition between shorter $t$-values for use by the short-$t$ algorithm, and longer $t$-values for use with the long-$t$ algorithm. As mentioned, that time of transition between long and short algorithms is not critical and is more formally presented in the \nameref{choose} Appendix Subsection. \section*{Results} This Results section shows examples of GPC algorithm run times and diagnostics, of how the algorithm can and should be used, including extended same-dose multidosing, and a subsection illustrating confidence interval (CI) and coefficient of variation (CV) diagnostic quality assurance. \subsection*{Algorithm run time analysis} The GPC combined short- and long-$t$ algorithm was defined in terms of how to calculate it efficiently, as above. Implementation of the combined short and long time algorithm using Mathematica 12.3 without parallel processing on a 2.3 GHz 8-Core Intel i9 processor allows long $t$-value execution times of around 1.2 milliseconds with typical 63 to 67 decimal place accuracy. (The full range of run times is approximately from 42 to 1.2 milliseconds for $t$-values ranging from 30 s to 1/2 year.) This contrasts with the short-$t$ implementation of GPC Eq.~\eqref{eq:GPC2}, which, as $t$ increases, needs more terms and higher precision to maintain a given precision of the final result, with a processing time that progressively becomes intractably long. Figure \ref{Fig-2} shows the relative performance of the algorithms in \begin{figure}[H] \centering \includegraphics[scale=.325]{Fig-2.png} \caption {Log-log plots comparing the performance of the short time (red connected open circles), long time (green connected open triangles), and combined short or long time (blue connected open diamonds) algorithms for the GPC model of metformin disposition in dog 1 of reference \cite{Wesolowski_2020}. Panel \textbf{a} shows that the long- or short-$t$ algorithm is more than one million times faster than the short-$t$ algorithm when the time after injection is very long, e.g., for predicting a serum concentration at one half year (4396 h), and more than 20 times faster than the long-$t$ algorithm for $t=30$ s. Panel \textbf{b} shows the number of summed terms $(n)$ for each method. Note that for long $t$-values, the combined algorithm used the long-$t$ method and only calculates one term, whereas the short-$t$ algorithm would have used many terms. Panel \textbf{c} shows the precision needed to accommodate the largest of the terms for each algorithm. As $t$ increases, the short-$t$ function uses an increasing number of significant figures, but for the long-$t$ or combined algorithms that number increases only slightly to preserve accuracy for lower concentrations at longer times. Panel \textbf{d} shows the largest term magnitude as powers of 10 for the short- and long-$t$ algorithms.
For long times, the short-$t$ algorithm's alternating-sign intermediate terms reach quite large magnitudes, while for the long-$t$ algorithm the largest magnitude term collapses to vanishingly small values.} \label{Fig-2} \end{figure} \noindent these respects using the GPC parameters from fitting metformin data for dog 1 \cite{Wesolowski_2020}. This dog showed the median regression error, 8.7\%, of the seven studied. Despite having the fastest elimination at 72 h, the concentration level for that dog was predicted to be $2 \times 10^{-7}$ of peak at one year, a small number but much larger than could be produced assuming a terminal exponential tail. For the short-$t$ algorithm the run-time to calculate concentration at one-half year following injection was 1809 s, versus 1.2 milliseconds for the new algorithm. This difference is because the short-$t$ algorithm used at long times had 8883 terms to sum, and the call to \textsf{gpcshort} was used twice: once at machine precision to find the maximum absolute value term $(1.0796*10^{1392})$ of all of the summand terms in order to calculate that 1457 place precision was required to obtain 65 place precision in the output, and once again to do the 1457 place summation arithmetic. For the combined (new) algorithm this is not needed as for short times the short-$t$ algorithm does not have large oscillating terms, and the long-$t$ algorithm has monotonically decreasing term magnitude both for each sequentially summed term, and as $t$ increases, for each first term magnitude. For example, the first (and only) term of the long-$t$ algorithm's summand at one-half year was negligible $(-1.851*10^{-1403})$. These effects are illustrated in Figure \ref{Fig-3}. \begin{figure}[H] \centering \includegraphics[scale=.32]{Fig-3.png} \caption {Individual terms of the sums for the short-$t$ and long-$t$ algorithms for the GPC model of metformin disposition in dog 1 of reference \cite{Wesolowski_2020}. These terms are stored in a list variable called, somewhat unimaginatively, \textit{storage}. Note that $f(t=4\beta\, \text{h})$ is an explicit replacement rule meaning that \textit{an $f(t)$ is evaluated $f(4\beta)$, where $t=4\beta$ h}. Panels \textbf{a} and \textbf{b} show the values of the short-$t$ algorithm's summand terms. Panel \textbf{a} shows the values for the short-$t$ algorithm at the upper limit of $t$ of its usage in the combined, new, algorithm at $t=4\beta$ (100 s in this case). The blue dots are positive values, and the red dots are negative values of the summand of Eq.~\eqref{eq:GPC}. Panel \textbf{b} shows what happens when the short-$t$ algorithm is used at 12 h. That is, the oscillatory terms would have intermediate values that grow in magnitude before they converge. While this poses little problem at 12 h, this time is not used in the new, combined algorithm, and in the extreme the oscillatory intermediate terms of the summand grow to very large magnitude as $t$-values increase. Thus, before the summand is calculated for long $t$-values when using the short-$t$ algorithm, preliminary calculation of the required extended precision is necessary, which prolongs execution time markedly. Panel \textbf{c} shows the region of values for the long-$t$ algorithm at 12 h. Note that there are fewer terms than for the short-$t$ algorithm at that same time (panel \textbf{b}), that all terms are negative (red), and that they decrease by one or more orders of magnitude between successive terms.
Panel \textbf{d} shows that by 100 h for the long-$t$ algorithm, there are fewer terms than at 12 h, and that their magnitude is very small even for the largest magnitude (first) term.} \label{Fig-3} \end{figure} \noindent For our test case example, the two algorithms, short-$t$ and long-$t$, agreed to within 63 to 67 decimal places. In practice, the short-$t$ algorithm is used for short times and the long-$t$ algorithm is used for long times. It makes little difference what cut-point time between short- and long-$t$ algorithms is used, and the time $4\beta$ (here around 100--120 s) was chosen as a division point between algorithms short enough to ensure that extra precision padding for the short-$t$ algorithm would be unnecessary. \subsection*{Regression processing elapsed times and extended multidosing} For evaluating the 72 h data for seven dogs, the new, combined short- and long-$t$ algorithm had average curve-fitting run times of approximately 1:15 to 3:00 (min:s). The prior program version, with hardware and software accelerations for the short-$t$ algorithm and without sufficiently extended precision (despite using at least 65 place arithmetic), had run times in the approximate range of 34 to 35 min, but with occasional errors in parameter values of $\pm2\times10^{-14}$. With proper precision extension the error dropped below $10^{-20}$ for all 5 parameters and 7 cases, but the run time increased to 50 min, using a partly accelerated short-$t$ algorithm (Eq.~\eqref{eq:proved}) and 8-core hardware acceleration. The current combined short- and long-$t$ algorithm does not use those additional accelerations. Forty model-based bootstrap cases generated for the first dog's data---see next Subsection---took 49:45 (min:s), or 1:15 per case. That is a lot faster than the 33:51 per case it took to generate 35 bootstrap models using the old software (19:44:55). Overall, the run time is very approximately 27 times faster than before, but is variable depending on the problem being solved, the computer hardware used, and the other software running on the computer at that time. For example, Figure \ref{multi_14-days}a, with a current run time of 7.1 s, could not be computed at all using the earlier software version. \begin{figure}[H] \centering \includegraphics[scale=.35]{Fig4.png} \caption {Multidosing of the GPC function for dog 1 with one 18.248 mg/kg body weight dose every 24 h. Panel \textbf{a} shows the predicted concentration curve. Note that the peak concentrations did not increase much following 14 doses (0.089\%), but that the trough values increased rather more substantially (2.48 times). Panel \textbf{b}, showing the number of doses retained in the body, first shows a 1 dose peak increasing to a 1.97 dose peak for the 14$^\text{th}$ dose, whereas the trough, initially at 0.117 doses, increased 8.85-fold to finish at a 1.03 dose trough.} \label{multi_14-days} \end{figure} \noindent Notice that if we wish to glean information during metformin multidosing with plasma or serum sampling, the best time to do so is just prior to the next scheduled dose, as the trough concentrations change with each dose interval, whereas the change in peak concentration over time is very small. However, because the tissue dosage\footnote{For a single dose, body drug mass is $M(t) = \text{Dose } [1-\text{GPC}_F(t)]$} accumulates, the amount of drug in the body (Figure \ref{multi_14-days}b) cannot be predicted from serum (or plasma) concentration alone.
Note that approximately one entire dose has accumulated in tissue by 14 days despite most of the cumulative dosage having been eliminated over that time. That is, during the first dose interval, the mean drug mass remaining in the body was 0.175 doses, and during the 14$^\text{th}$ dose interval the mean drug mass remaining in the body was 1.118 doses, where 12.88 dose masses were eliminated. \subsection*{Which are better, confidence intervals or coefficients of variation?}With reference to Table \ref{CIQ}, confidence intervals (CI) of the mean were extracted from model-based bootstrap with 40 cases generated for the first dog's data. For calculating CI's of the mean, the Student's-$t$ method was used (Verified assumptions: Central Limit Theorem, $n>30$, light-tailed distributions). However, as a result of extensive testing, the degrees of freedom were set at $n$ rather than the more typical $n-1$, as it was found that for smaller values of $n$, physically impossible results were obtained, whereas even for $n=2$, when $n$ was used rather than $n-1$, the results were accurate. For $n=40$ it made very little difference whether $n-1$ or $n$ was used. Also shown are CI's of the model-based bootstrap (A.K.A., parametric bootstrap) results calculated directly from the $n=40$ data using the nonparametric quantile (A.K.A., percentile) method of Weibull \cite{gurland1971simple}.\footnote{This uses the Weibull method for extracting confidence intervals, which in Microsoft Excel (2007) would format for the lower tail as PERCENTILE.EXC(A1:A40,0.025) and from Mathematica 12.3 \cite{Mathematica} as Quantile[data, 0.025, $\{\{0, 1\}, \{0, 1\}\}$], https://mathworld.wolfram.com/Quantile.html} Note that the Pareto rate parameter, $\beta$, was not presented. Since many (38 of 40) of the results were at the constraint boundaries of 25 to 30 s, one already knows what the confidence interval largely is: the constraint values themselves. Another situation entirely exists for coefficients of variation (CV). Note in the table that when $n=5$, as during the prior study, the values so obtained were too small. It is theoretically possible to use bootstrap (in our case that would be bootstrap of model-based bootstrap) to obtain confidence interval quantiles for the median CV, and although median values of CV's have shown sufficient robustness to construct confidence intervals for $n$ sufficiently large \cite{brody2002significance}, the correction for $n$-small is problematic as per Table \ref{CIQ} and the \nameref{Discussion} Section that follows. \begin{table}[ht] \centering \captionsetup{justification=justified,margin=0cm} \caption{Example parameter results and quality assurance are shown for dog 1. The relative root mean square error of fit (rrms \%) and R$^2$ values are acceptable.
Confidence intervals (CI) are shown for the parameter values and the mean parameter values.} \label{CIQ} \vspace{-.5em} \setlength\tabcolsep{2.7pt} \begin{tabularx}{.8\columnwidth}{cccccccc} \hline \vspace{-0.2em}&Primary&95\% CI&Mean&95\% CI&SD&CV\%&Prior\\ &result&bootstrap&&of mean&&&CV\%\\ \Xhline{2\arrayrulewidth} $n$&1&40&40&40&40&40&5\\ rrms \%&8.71&4.72 to 9.88&7.52&7.16 to 7.89&1.15&15.3&---\\ R$^2$&0.99872&0.99773 to 0.99950&0.99872&0.99860 to 0.99884&0.00038&0.039&---\\ $a$&0.349&0.198 to 0.516&0.350&0.324 to 0.376&0.0802&22.9&8.09\\ $b$&0.732&0.558 to 0.889&0.735&0.708 to 0.761&0.0821&11.2&4.84\\ $\alpha$&0.264&0.235 to 0.322&0.268&0.261 to 0.275&0.021&7.85&5.13\\ \textit{AUC}&31.2&26.8 to 38.0&31.3&30.4 to 32.2&2.77&8.84&6.22\\ \textit{CL}&9.76&8.01 to 11.3&9.78&$\;\,$9.52 to 10.05&0.829&8.47&6.42\\ \hline \end{tabularx} \end{table} \section*{Discussion}\label{Discussion} Wise \cite{Wise1985} first proposed that power functions or gamma distributions should be used in pharmacokinetic modelling as superior alternatives to sums of exponential terms. This has been reinforced more recently, for example by Macheras \cite{Dokoumetzidis2010}. While convolution models and fractal-consistent models have been shown to be superior models in some cases and find occasional use \cite{garrett1994bateman,Wesolowski2016GDC,wanasundara2016,Wesolowski2016PLoS,Wesolowski_2020}, compartmental modelling software is widely available and is used by default. For example, compared to biexponential clearance evaluation of 412 human studies using a glomerular filtration rate agent, adaptively regularised gamma distribution (Tk-GV method \cite{wesolowski2010tikhonov,wesolowski2014method}) testing was able to reduce sampling from 8 to 4 h and from nine to four samples for a more precise and more accurate, yet more easily tolerated and simplified clearance test \cite{wanasundara2016}. Despite this, few institutions have implemented the Tk-GV method at present. In the case of metformin, a highly polar ionised base, the extensive, obligatory active transport of the drug into tissue produces a rate of expansion of the apparent volume of distribution having the same units as renal clearance, yielding the Pareto (power function) tail. This heavy tail, and Figure \ref{multi_14-days}, help to explain why metformin control of plasma glucose levels had delayed onset, e.g., following 4 weeks of oral dosing \cite{buse2016primary}, and provides hints concerning the lack of a direct correlation between drug effect and blood metformin concentrations \cite{stepensky2002pharmacokinetic}. Other basic drugs whose active transport dominates their disposition may show similar behaviour. The long tail in the disposition of amiodarone may be a reflection of its very high lipid solubility rather than, or in association with, active tissue uptake. Weiss \cite{Weiss1999} described its kinetics after a 10 min intravenous infusion with an \textit{s}-space Laplace transform convolution of a monoexponential cumulative distribution with an inverse Gaussian distribution and a Pareto type I density, which lacked a real or \textit{t}-space inverse transform such that the modelling information had to be extracted numerically.
A real space $f(t)$ model convolution of time-limited infusion effects of a GPC type I distribution is simple to construct and would be the same as Weiss's model in the one essential aspect that matters: testing of the amiodarone power function tail hypothesis, for which a GPC derived model would have the advantage of being more transparently inspectable. Similarly, Claret \textit{et al.} \cite{Claret2001} used finite time difference power functions to investigate cyclosporin kinetics for which GPC and related model testing may be appropriate. We were able to use Nelder-Mead global search regression model-based bootstrap to provide more and better information on parameter variability than would be available from a gradient matrix. Some readers would prefer to use the Levenberg-Marquardt algorithm convex gradient regression method, so that the gradients can be used to estimate case-wise coefficients of variation. The logarithm of sums of exponential terms is always convex. The GPC-metformin loss function is nonconvex, as shown by failure of the interior point method to improve on solutions as reported in the \nameref{data} Subsection. Constrained nonconvex gradient methods are comparatively rarely implemented; there appears to be no such implementation in Mathematica at present. Corrections of standard deviation (SD) for small numbers ($n<30$) using bootstrap of model-based bootstrap and $\chi^2$ were used as mentioned elsewhere \cite{friedman2009elements}, and led to using $n$ rather than $n-1$ for Student's-$t$ degrees of freedom. Whereas variance is unbiased, when the square root of variance is taken, the result, the standard deviation, becomes biased. Arising from $\chi^2$, a standard deviation from only two samples is on average only 79.8\% ($\sqrt{\frac{2}{\pi }}$) of the population standard deviation \cite{gurland1971simple}.\footnote{\label{note1}Given only two samples, the population mean is not located midway between them, however, the midpoint (mean) is used to estimate the population mean in the standard deviation formula. The correction formula multiplier for an unbiased estimator ($\hat{\sigma}$) of population standard deviation ($\sigma$) from sample standard deviation ($s$) is $\hat{\sigma}=c_n s$, where $c_n=\sqrt{\frac{n-1}{2}} \Gamma \left(\frac{n-1}{2}\right)\Gamma \left(\frac{n}{2}\right)^{-1}$ \cite{gurland1971simple}.} Gradient methods lack \textit{pre hoc} testing of the implicit assumption of residual normality and do not \textit{post hoc} provide any parameter distribution information. From the trace of the gradient matrix, one obtains a standard deviation with degrees of freedom that are $n-p-1$ ($n$-samples, $p$ parameters) \cite{friedman2009elements}. When $n-p-1$ is small, the corrections for standard deviation are large. Overall, the ratio between gradient-based error propagation results and that from bootstrap is not uncommonly a factor of two larger or smaller \cite{green1987standard}.
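The small-$n$ bias just described is easy to check numerically. The following sketch (Python; our own illustration, not part of the original analysis) evaluates the Gurland and Tripathi correction factor $c_n$ from footnote \ref{note1}:
\begin{verbatim}
# Illustrative sketch of the small-n standard deviation bias:
# sigma_hat = c_n * s, with c_n = sqrt((n-1)/2) Gamma((n-1)/2) / Gamma(n/2).
from math import sqrt, gamma, pi

def c_n(n: int) -> float:
    """Unbiasing multiplier for the sample standard deviation s."""
    return sqrt((n - 1) / 2) * gamma((n - 1) / 2) / gamma(n / 2)

for n in (2, 5, 10, 30, 40):
    print(f"n = {n:2d}: E[s]/sigma = {1 / c_n(n):.4f}, c_n = {c_n(n):.4f}")

# n = 2 reproduces the 79.8% (= sqrt(2/pi)) figure quoted in the text,
assert abs(1 / c_n(2) - sqrt(2 / pi)) < 1e-12
# while by n = 40 the bias is well under 1%, consistent with n versus n-1
# making little difference there.
\end{verbatim}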
Moreover, average fit errors >10\% using any loss function, for assay methods with errors <10\%, may suggest that the algorithm/data combination is suspect \cite{burger2010limited,Wesolowski2016GDC,zhang2016bootstrapping,Wesolowski_2020}, and for the metformin dog data that is the case for two- and three-compartment models, but not for the GPC model, which was the only one to fit the data better than 10\% (average 8.6\% rrms with assay error of 5.2\% rrms), as well as being the only model to exhibit normality and homoscedasticity of residuals \cite{Wesolowski_2020}. When the fit error is >10\%, one should, at a minimum, test residuals for homoscedasticity and normality, and if these are not present, a better fit model should be sought for its own sake, and bootstrapping becomes problematic \cite{zhang2016bootstrapping}. The use of coefficients of variation is sometimes problematic. Because $\text{CV}=\text{SD}/\text{mean}$, even if we have lots of data, if by chance in a particular case, especially for small $n$, some of the multiply generated mean values approach zero, erratically high CV-values are injected into the distribution of values. It is for that reason, numerical instability, that the more data one has, the worse the mean CV-value can be, with the solution being to first calculate many CV values, and then take their median value \cite{brody2002significance}. Even though the mean value may not be useful, the median may be, and confidence intervals for CV could be established using bootstrap quantiles, but not by using the gradient matrix approach because correction for $n$-small is problematic. That is, for mean values that can be rewritten as being proportional and having an established maximum range, e.g., Likert scale minus 1 variables, correcting CV underestimation for small values of $n$ is possible. However, if, as is the case here, there is no theoretical maximum CV, one needs to invent a correction based upon the observed confidence intervals of the mean \cite{smithson1982relative}, such that CI-values are unavoidable for determining the meaning of the preponderance of CV results. Finally, comparisons for significant differences between parameters for one subject versus another are easy to construct using CI, but more difficult to obtain for CV. Thus, CV-values cited without explicit quality assurance should be regarded as qualitative results. \subsection*{Limitations} A major deficiency of the first article that applied and compared the gamma-Pareto type I convolution (GPC) model to other models \cite{Wesolowski_2020} was the lack of an algorithm that could be used for independent investigation and/or for application to other drugs. The accelerated algorithm presented herein is the first publication of code for a gamma-Pareto type I convolution (GPC). As such, the algorithm was kept in a simple form without using all possible acceleration tools or stopping conditions. It could be optimised for even shorter run-times using vector addition of subsets of the summands, by using Eq.~\eqref{eq:proved} to reduce summand magnitudes for the short-$t$ algorithm and/or combining partial sums of the summands for the short- or long-$t$ algorithms, by eliminating diagnostic parameters such as run-time calculations, by compiling it, and by multiple other means. However, that would be at the expense of clarity and/or simplicity of presentation. It is complicated to compute the values of functions like $\sin(x)$ efficiently.
For example, an answer with precalculated exact precision can be quickly generated for $\sin x$ using the CORDIC procedure, which is optimised for binary register operations at the machine level \cite{Volder1959}. At a much higher and slower level, compared to the GPC$(t)$ short-$t$ algorithm, the $\sin(t)$ function's series expansion has even larger magnitude terms for long $t$-values. In its current form, the combined short- and long-$t$ GPC algorithm is so much faster than the previously published run times using the seven dogs' 72 h data, and so much more generally valid, that it is now a practical algorithm. The current implementation is no longer limited as to how long $t$ is, and the propagated error of up to $2\times10^{-14}$ for parameter values obtained from regression of 72 h data has been reduced to $<10^{-20}$. That error demonstrates the major extent to which errors from 65 decimal place precision can propagate during processing of tens of thousands of calculations, especially during regression, which typically, by default, halves the number of significant figures---see the \nameref{data} Subsection. This does not affect any of the parameter values listed in Table \ref{params}, but the ability to quickly calculate a larger number of model-based bootstrap results would improve the parameter CI estimates. Another consideration is how to obtain exactly $n$ significant figures of precision when $n$ are requested. Currently, for 65 significant figures requested, a result precise to several significant figures greater or lesser than 65 is returned, and the algorithm is written only for 65 significant figure precision. Generalising this to obtain an arbitrary requested precision for a GPC functional height awaits the next several generations of algorithmic refinement. \section*{Conclusions} The new GPC type I algorithm consists of two parts, one for very early times, and another for late times. At times less than approximately $4\,\beta$, i.e., 100-120 s for the metformin data, the short-$t$ algorithm is actually faster than the long-$t$ algorithm. For early data, the short-$t$ algorithm has alternating sign terms of monotonically decreasing magnitude. However, when used at long times, the short-$t$ GPC algorithm required precalculation of the precision needed for later summation, which represents an improvement over the algorithm previously used \cite{Wesolowski_2020}. In the newly-proposed, combined short- and long-$t$ algorithm this precalculation is unnecessary because of the long-$t$ algorithm usage for all but the shortest $t$-values, resulting in markedly accelerated convergence, and the new ability to predict concentration at any time, no matter how long. \section*{Acknowledgements} The authors thank Kunal Khadke at Wolfram Research for assistance with precision and Mathematica block structures, and William J. Jusko of the University of Buffalo for his generous advice concerning the intellectual content. \section*{Appendix} This section provides information concerning convergence of the short- and long-$t$ algorithms, when they should be used, and how to encode them in the Mathematica \cite{Mathematica} language. \subsection*{Short-\textit{t} GPC convergence}\label{short} The short-$t$ algorithm is an alternating series sum. For alternating series one can distinguish two types of convergence.
These are conditional convergence, in which the value of the infinite sum depends on the order in which summation is performed, as shown by the Riemann rearrangement theorem, and absolute convergence, for which any order, or permutation, of the summation process yields the same, unique, sum. Convergence is defined as conditional when an alternating series converges but the series of its absolute values does not \cite{agana2015classical}. For example, the alternating harmonic series $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}$ has an absolute value ratio of next term to current term of $\frac{n}{n+1}$, whose limit as $n\to\infty$ is 1. That means that as $n$ increases, the next term approaches the same size as the $n$th term, such that the absolute sum of terms is not bounded above, and the order of addition of the original series determines what the total sum is, making changes in order of summation yield different, i.e., ambiguous, results. If a limiting term ratio is less than 1, for example $\frac{1}{2}$, the series is absolutely convergent, e.g., the limiting infinite sum ratio of $\frac{1}{2}$ for some eventual term is, in binary arithmetic, $0.111111\dots_2\to1_2=1$. It is fair to call series, whose limiting absolute value term ratio is 0, eventually-rapidly convergent. In the case of the short-$t$ algorithm the infinite sum of its absolute values is, for sufficiently large values of $t$, a very large number. However, for any real valued time, $t$, no matter how large, there is a real number, $M$, that is greater than the magnitude of the infinite sum of absolute values of Eq.~\eqref{eq:GPC2}. That $M$ can be fantastically large, but never infinite, makes it difficult to use Eq.~\eqref{eq:GPC2} without precision that is explicitly extended for the purpose of accurately forming the infinite sum for certain values of the parameters and long times, but it does not make convergence conditional. \newtheorem{2em}{Lemma} \begin{2em}\label{Lemma-1}In the case of the short-t GPC algorithm, convergence is absolute and eventually rapid, such that the Riemann rearrangement theorem prohibition for resequencing infinite sums does not apply.\end{2em} \begin{proof}To show absolute convergence of the short-$t$ GPC type I algorithm, we construct the absolute value of the ratio of the $(n+1)$th to $n$th term. First, we take the infinite series definition of the incomplete beta function, \footnote{http://functions.wolfram.com/06.19.06.0002.01}\[B_z(A,B)=z^A \sum _{j=0}^{\infty } \frac{z^j (1-B)_j}{j! (A+j)}=z^A\left[\frac{1}{A}-\frac{B - 1}{A + 1} z +\frac{(B-1) (B-2) }{2! (A+2)}z^2-\dots\right];\;\;| z| <1\land \neg (-A\in \mathbb{Z}\land -A\geq 0)\;.\]Although this is an alternating sign series with a restricted range of convergence, we term by term, \textit{without permutation}, substitute into it the incomplete beta function parameters of Eq.~\eqref{eq:GPC2}'s $n$th and $(n+1)$th terms; $B_{1-\frac{\beta}{t}}(a+n,-\alpha),$ and $B_{1-\frac{\beta}{t}}(a+n+1,-\alpha),$ and substitute that into the absolute value of the $(n+1)$th to $n$th term ratio of the summand of Eq.~\eqref{eq:GPC2}, and simplify to yield,\[\frac{b\, (t-\beta )}{n+1}\;\frac{\mfrac{1}{a+n+1}+\mfrac{(\alpha +1) }{a+n+2}\left(1-\mfrac{\beta }{t}\right)+\mfrac{(\alpha +1) (\alpha +2) }{2! (a+n+3)}\left(1-\mfrac{\beta }{t}\right)^2+\dots}{\mfrac{1}{a+n}+\mfrac{(\alpha +1) }{a+n+1}\left(1-\mfrac{\beta }{t}\right)+\mfrac{(\alpha +1) (\alpha +2)}{2!
(a+n+2)} \left(1-\mfrac{\beta }{t}\right)^2+\dots}<\frac{b\, (t-\beta )}{n+1}\;\;.\]As $\infty>t>\beta$, neither the infinite series numerator nor denominator is alternating; thus the series is absolutely convergent, as $\mfrac{b\, (t-\beta )}{n+1}$ is an asymptote of, and upper bound for, the ratio of consecutive absolute value terms as $n\to\infty$. If $b\, (t-\beta )>n+1$, as occurs for example for long $t$-values, we would expect the magnitude of the terms of the summands to increase while $n$ is small enough, but as $n$ increases, eventually $b\, (t-\beta )\ll n+1$, and the $(n+1)$th relative term magnitude can be made as asymptotically close to zero as desired, making the series convergent by the ratio test \cite{laugwitz1994riemann}. \end{proof}Thus, the magnitude of alternating terms is eventually monotonically decreasing such that the absolute error of summation from truncating at an $n$th term for $n$ sufficiently large is less than the magnitude of the $(n+1)$th term by the alternating series remainder theorem. Moreover, the first term of the summand is some definite positive real number proportional to 1. Setting the first term to be 1, we conclude that the sum of the absolute value of the summands of Eq.~\eqref{eq:GPC2} is proportional to a number bounded above by \[M\propto\sum _{k=0}^{\infty } \frac{[b (t-\beta )]^k}{k!}=e^{b (t-\beta )}\;\;,\]such that the sum of absolute values of summands of Eq.~\eqref{eq:GPC2} is bounded above by some positive constant value times an exponential function of $t$, and Eq.~\eqref{eq:GPC2} is absolutely convergent. \subsection*{Long-\textit{t} GPC algorithm convergence rapidity}\label{long} This subsection examines the rapidity of convergence of the long-$t$ GPC algorithm. In Lemma \ref{Lemma-1} directly above, it was shown that the short-$t$ algorithm is absolutely convergent. Therefore, its infinite series rewrite as the long-$t$ Theorem \ref{longT}, Eq.~\eqref{eq:accelgpc}, is also convergent, but how many summation terms are needed for convergence and which parameters determine this convergence can be clarified using the substituted definition of the confluent hypergeometric series\footnote{http://functions.wolfram.com/07.20.06.0002.01} as follows. \[\, _1F_1(a;a-k;-b\, t)=\sum _{j=0}^{\infty } \frac{(a)_j (-b\, t)^j}{j! (a-k)_j}=1-\frac{a\, b\, t}{a-k}+\frac{a (a+1) (b\, t)^2}{2! (a-k) (a-k+1)}-\dots\;\;. \] Note that in the limit as $k\to\infty$ the above equation is asymptotically ($\sim$) 1. Next, the ratio of the $(k+1)$th to $k$th term is asymptotic to $\mfrac{\beta }{k\, t}$ for $k$ sufficiently large, \[\frac{\beta}{t}\; \frac{\alpha-k }{(k+1)( \alpha-k-1)}\;\frac{1-\mfrac{a }{a-k-1}b\, t+\mfrac{a (a+1) }{2 (a-k-1) (a-k)}(b\,t)^2-\cdots}{1-\mfrac{a}{a-k}b\, t+\mfrac{a (a+1) }{2 (a-k) (a-k+1)}(b\,t)^2-\cdots}\sim\frac{\beta }{k\, t}\;\;.\] \noindent For that reason, for longer $t$-values, one can expect faster convergence of the long-$t$ algorithm with fewer terms summed. \subsection*{Choosing when to use the short-\textit{t} and long-\textit{t} algorithms}\label{choose} As above, the absolute value of the ratio of the next term to the current term for the short-$t$ algorithm is bounded above by $\mfrac{b\, (t-\beta )}{n+1}$. For the long-$t$ algorithm, the ratio of the $(k+1)$th to $k$th term approaches $\mfrac{\beta }{k\, t}$ for $k$ sufficiently large. Note that these move in opposite directions, that is, as $t$-values increase, $\mfrac{b\, (t-\beta )}{n+1}$ increases and $\mfrac{\beta }{k\, t}$ decreases.
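To make the opposing cost regimes concrete, the following sketch (Python; our own illustration, treating the two quoted bounds as if they were the actual term ratios, with parameter values only approximating dog 1) counts how many terms must pass before each ratio falls below $1/2$, i.e., before the tail of either series is at least geometrically convergent:
\begin{verbatim}
# Illustrative only: b in 1/h, beta in h (25 s), approximating dog 1.
b, beta = 0.732, 25 / 3600.0

def short_t_terms(t, ratio=0.5):
    # smallest n with b(t - beta)/(n + 1) < ratio; grows linearly with t
    n = 0
    while b * (t - beta) / (n + 1) >= ratio:
        n += 1
    return n

def long_t_terms(t, ratio=0.5):
    # smallest k with beta/(k t) < ratio; shrinks toward 1 as t grows
    k = 1
    while beta / (k * t) >= ratio:
        k += 1
    return k

for t in (2 * beta, 4 * beta, 1.0, 24.0, 336.0):  # 336 h = 14 days
    print(f"t = {t:8.4f} h  short-t: {short_t_terms(t):4d}  long-t: {long_t_terms(t)}")
\end{verbatim}
These counts are not the full number of terms needed for 65-digit precision, only the point at which each bound begins to guarantee geometric decay, but they show the crossover clearly.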
It is not critical exactly at what $t$ value one elects to use the short- and long-$t$ algorithms, as the major cost in computational time and number of terms needed occurs at the extreme values of $t$, but in an opposite direction for each algorithm. Figure \ref{shortlong3d} shows the tradeoff for dog 1 of the metformin series between numbers of terms for summation, time following bolus injection, and the magnitude of the natural logarithm of $\text{GPC}(t)$, where GPC$(t)=\frac{C(t)}{\textit{AUC}}$. Selecting $t=4\beta$ as a cut point for switching between algorithms means that the short-$t$ algorithm absolute sum of terms is bounded above, from substitution into $e^{b (t-\beta )}$, by $e^{3b\,\beta}$ times the first term's value, not a large number, and the long-$t$ algorithm has an approximate maximum $(k+1)$th to $k$th term ratio of $\frac{1}{4k}$ for the shortest $t$-value used, which can still be made as small as desired for $k$ sufficiently large. \begin{figure}[H] \centering \includegraphics[scale=.55]{shortlong3d.png} \caption {The tradeoff between the time elapsed following bolus intravenous injection ($x$-axis), the number of terms to be summed for calculating the GPC function's value ($y$-axis), and the natural logarithm of the GPC function at that time ($z$-axis). Panel \textbf{a} shows short-$t$ algorithm performance and panel \textbf{b} shows the long-$t$ algorithm performance. Note that only for very early elapsed times does the short-$t$ algorithm have fewer terms than the long-$t$ algorithm.} \label{shortlong3d} \end{figure} \subsection*{Mathematica source code of the GPC type I accelerated algorithm}\label{alg} \textcolor{teal}{ \textsf{(*.....................The gpc$[t]$ function; a gamma-Pareto type I convolution fast calculation algorithm.............................\\ ...........................................................Copyright Carl A. Wesolowski, 2021................................................................\\ To fit disposition data, minimise the loss function using AUC $\times$ gpc$[t]$ as the model, which returns AUC as a value. To use gpc$[t]$ enter the constrained > 0 coefficients a, b, $\alpha$, and $\beta$ to 65 decimal place accuracy. a and $\alpha$ are dimensionless, b is a rate and $\beta$ is a time. For example, the metformin dog 1 GPC type I parameters using b (h$^{-1}$) and $\beta$ (h) are*)}}\\ \textsf{a = N[Rationalize[0.34931003807815571524792421542558602868248355919027496611955665616], 65];\\ b = N[Rationalize[0.73182479199387479660419087183394451163091958778927254273673996698], 65];\\ $\alpha$ = N[Rationalize[0.26437129139517680335740710070693267536710608361890151476103695922], 65];\\ $\beta$ = N[Rationalize[0.0069444444444444444444444444444444444444444444444444444444444444444], 67];}\\ \textcolor{teal}{\textsf{(*Note the explicit input precision as 65 digits, which must be specified as such for the algorithm to function properly.
During regression analysis the parameter values above would change to minimise a loss function; the ones above were used as realistic values for the algorithm execution timing trial of the Results Section.*)}} {\setstretch{1.3}\\ \noindent \$\textsf{MaxExtraPrecision = }$\infty;$\\ \$\textsf{MinPrecision = 0;}\textcolor{teal}{\textsf{ (*This machine precision is adjusted in the Block commands below to $\$$MinPrecision = desirprec*)}}\\ \\ \textcolor{teal}{\textsf{(*.......................................Section for short times, $\beta<t<4\beta$, where $4\beta\approx $ 100 to 120 s.........................................*)}}\\ $\mathsf{gpcshort[\textcolor{blue}{\emph{t}}\_?NumericQ]:=Quiet\Big[Unevaluated\Big[k=0;storage=.;}$\textcolor{teal}{\textsf{ (*storage is an array for later summation*)}}\\ \indent $\mathsf{conscale= \frac{\alpha b^a \beta ^{\alpha }}{Gamma [a]}\textcolor{blue}{\emph{t}}^{\;a-\alpha -1};}$\textcolor{teal}{\textsf{ (*This is the constant multiplier of the sum and is calculated only once*)}}\\ \indent $\mathsf{target= \frac{10^{-65}}{ conscale};}$\\ \indent $\mathsf{storage=-conscale\;First@Last@Reap@While\left[Abs\left[Sow\big[ \frac{(-b \textcolor{blue}{\emph{t}})^k }{k!}Beta[1-\beta/\textcolor{blue}{\emph{t}}, a+k,-\alpha ]\big]\right]\right.}$\\ \indent \indent $\mathsf{ >target,k+\!+\Big];}$\\ \indent $\mathsf{nn= Ordering[storage,-1]\,[[1]]-1;}$ \textcolor{teal}{\textsf{ (*Finds nn, the index of the largest magnitude term in the storage array*)}}\\ \indent $\mathsf{outmax=conscale \frac{(-b \textcolor{blue}{\emph{t}})^{nn}}{nn!} Beta[1-\beta/\textcolor{blue}{\emph{t}},a+nn,-\alpha];}$\\ \indent $\mathsf{lastn=Length[storage];\; xprec=Round[Log10[Abs[outmax]]];}$\\ \indent $\mathsf{If[xprec === Indeterminate, xprec = 0];}$ \textcolor{teal}{\textsf{ (*Exact times, e.g., $t=e^{1/1000}$ for $n=1$ may need this correction.*)}}\\ \indent $\mathsf{desirprec = 65 + Abs[xprec]\Big]\Big];}$\\ \\ \textcolor{teal}{\textsf{(*...............................................$t\geq 4\beta$ section with asymptotes for long values of t...............................................*)}}\\ $\mathsf{gpcsetup[\textcolor{blue}{\emph{t}}\_?NumericQ]:=Quiet\Big[Unevaluated\Big[k=1;storage=.;}$\\ \indent $\mathsf{conscale = \frac{b^a \textcolor{blue}{\emph{t}}^{-1 + a}\alpha }{Gamma[a]};}$\textcolor{teal}{\textsf{ (*This constant multiplier is a different one than that above*)}}\\ \indent $\mathsf{target= \frac{10^{-65}}{conscale};}$\\ \indent $\mathsf{storage=-conscale\; First@Last@Reap@While\big[Abs\big[Sow[}$\\ \indent \indent $\mathsf{ \frac{(\beta/t)^k}{(k-\alpha) k!}Hypergeometric1F1[a,a-k,-b \,\textcolor{blue}{\emph{t}}\,]\; Pochhammer[1-a,k]]]>target,k+\!\!\,+];}$\\ \indent $\mathsf{nn = Ordering[Abs[storage], -1]\,[[1]];}$\\ \indent $\mathsf{outmax = -conscale \frac{(\beta/\textcolor{blue}{\emph{t}})^{nn}}{(nn - \alpha) nn!} Hypergeometric1F1[a, a - nn, -b\, \textcolor{blue}{\emph{t}}\,] \;Pochhammer[1 - a, nn];}$\\ \indent $\mathsf{lastn = Length[storage];}$\\ \indent $\mathsf{asympt[\textcolor{blue}{\emph{z}}\_?NumericQ] := \frac{b^a e^{-b \,\textcolor{blue}{\emph{z}}} \textcolor{blue}{\emph{z}}^{-1 + a}}{Gamma[a] }- \frac{\pi b^a \beta^\alpha \alpha Csc[\pi\,\alpha]}{Gamma[1 + \alpha]}\textcolor{blue}{\emph{z}}^{-1 + a - \alpha}Hypergeometric1F1Regularized[a, a - \alpha, -b \,\textcolor{blue}{\emph{z}}\,];}$\\ \indent $\mathsf{SetAttributes[asympt, Listable]; xprec = Round[Log10[Abs[outmax + asympt[\textcolor{blue}{\emph{t}}\,]\,]]];}$\\ \indent $\mathsf{desirprec = 65 +
Abs[xprec]\Big]\Big];}$\\ \\ \noindent \textcolor{teal}{\textsf{(*..........................................GPC combined short and long time function call (use this one)...................................*)}}\\ $\mathsf{gpc[\textcolor{blue}{\emph{t}}\_?NumericQ] := Quiet\Big[If\Big[\textcolor{blue}{\emph{t}} \leq \beta, 0, If\big[\textcolor{blue}{\emph{t}} < 4 \beta, }$\\ \indent $\mathsf{Block\big[\{{\textcolor{blue}{\$MinPrecision}}= desirprec, {\textcolor{blue}{\$MaxExtraPrecision}} = \infty\},gpcshort[\textcolor{blue}{\emph{t}}\,]; \sum_{j=1}^{j=lastn}storage[[\,j\,]]\big],}$\\ \indent $\mathsf{ Block[\{{\textcolor{blue}{\$MinPrecision}} = desirprec, {\textcolor{blue}{\$MaxExtraPrecision}} = \infty \}, gpcsetup[\textcolor{blue}{\emph{t}}\,]; asympt[\textcolor{blue}{\emph{t}}\,]+\sum_{j=1}^{j=lastn}storage[[\,j\,]] \,]\big]\Big]\Big]; }$\\ \\ \noindent \textcolor{teal}{\textsf{(*..........Using the code above, the short time algorithm is............*)}} \\ $\mathsf{gpcslow[\textcolor{blue}{\emph{t}}\,\_?NumericQ] := Quiet\big[If[\textcolor{blue}{\emph{t}}\, \leq \beta, 0, Block[\{{\textcolor{blue}{\$MinPrecision}} = 0, {\textcolor{blue}{\$MaxExtraPrecision}} = \infty\}, gpcshort[\textcolor{blue}{\emph{t}}\,]];}$\\ \indent $\mathsf{Block[\{{\textcolor{blue}{\$MinPrecision}} = desirprec, {\textcolor{blue}{\$MaxExtraPrecision}} = \infty\}, gpcshort[\textcolor{blue}{\emph{t}}\,]]; \sum_{j=1}^{j=lastn}storage[[\,j\,]]\,]\big];}$\\ \\ \noindent \textcolor{teal}{\textsf{(*...........Using the code above, the long time algorithm is.............*)}} \\ $\mathsf{gpclong[\textcolor{blue}{\emph{t}}\,\_?NumericQ] := Quiet\Big[If[\textcolor{blue}{\emph{t}}\, \leq \beta, 0, }$\\ \indent $\mathsf{Block\big[\{{\textcolor{blue}{\$MinPrecision}} = desirprec, {\textcolor{blue}{\$MaxExtraPrecision}} = \infty\}, gpcsetup[\textcolor{blue}{\emph{t}}\,]; asympt[\textcolor{blue}{\emph{t}}\,]+\sum_{j=1}^{j=lastn}storage[[\,j\,]]\big]\Big]\Big];}$\\ \par} \end{onehalfspacing} \begin{small} \bibliographystyle{myvancouver2}
\section{Introduction} Many real-world complex systems, which we model as networks, display disjoint vulnerability to failure or attack. These vulnerabilities make networks far less robust than they seem. It is generally assumed that redundant connections through multiple paths improve robustness \cite{stelling-cell2004,carmi-pnas2007} but if a given vulnerability affects a large set of nodes, this may not be the case. For example, if one node cannot communicate with another without routing the information through routers under a given entity's control, secure communication is compromised. Similarly, in an economic network, if a firm has redundant suppliers but each supply chain includes nodes belonging to a given company, then there is an absence of competition--even if in principle there are multiple competing companies working in that sector. Similar considerations hold for nodes in a spatial network that are located near one another because transportation and economic assets in the same city will be affected by the same weather events or disasters \cite{neumayer-milcom2008,agarwal-infocom2011,berezin-scireps2015}. Disjoint vulnerabilities also appear in biological networks. Depending on the type of nutrients available, different metabolic pathways are enabled \cite{Schuster2000Metabolic,Feil2012epigenetics}. In this case, the metabolic network is disjointly vulnerable to the absence of a certain type of nutrient. Robust functionality can be guaranteed only if there are paths connecting source and target metabolite even when each distinct nutrient is removed. Gene regulatory networks exhibit similar multipath responses to environmental conditions \cite{pal-nature2006,white-cell2013}. In all of these cases, connectivity alone gives a poor picture of the network's robustness and security. However, since susceptibility to one vulnerability often precludes susceptibility to another vulnerability, we can partition the network into disjoint subsets by vulnerability. The disjoint nature of the vulnerabilities allows for robust connectivity to be established, provided the network remains connected when each subset is removed. Here we present a new framework for analyzing disjointly vulnerable complex networks and show the conditions for which--even if every node is vulnerable--robust connectivity can be maintained. We model disjoint vulnerability by assigning every node in the network exactly one color, representing exactly that vulnerability. The color may represent ownership, geographical location, reliance on a critical material or some other vulnerability. Similar to polychromatic percolation \cite{zallen-prb1977,wierman-banach1989}, we consider the components formed by nodes of different colors separately. We then develop a ``color-avoiding percolation'' theory which allows us to determine the connectivity of the network when each color (i.e., the set of all nodes of a given color) is removed. The set of nodes that are mutually connectible under the removal of \textit{any} color comprises the color-avoiding giant component. The existence of this component indicates whether or not the disjoint vulnerabilities can be avoided.
\section{Color avoiding percolation} \begin{figure*}[htb] \begin{center} \begin{minipage}{0.3\textwidth}\centering (a)\\ \includegraphics[width=0.9\textwidth]{graph_sr1.pdf} \end{minipage} \begin{minipage}{0.3\textwidth}\centering (b1)\\ \includegraphics[width=0.9\textwidth]{graph_avoid0.pdf} \end{minipage} \begin{minipage}{0.3\textwidth}\centering (b2)\\ \includegraphics[width=0.9\textwidth]{graph_avoid1.pdf} \end{minipage}\\ \begin{minipage}{0.3\textwidth}\centering (c)\\ \includegraphics[width=0.9\textwidth]{pairs_S_color.pdf} \end{minipage} \begin{minipage}{0.3\textwidth}\centering (b3)\\ \includegraphics[width=0.9\textwidth]{graph_avoid2.pdf} \end{minipage} \begin{minipage}{0.3\textwidth}\centering (b4)\\ \includegraphics[width=0.9\textwidth]{graph_Gcolor.pdf} \end{minipage}\\ \caption{\textbf{Illustration of color-avoiding connectivity.} \textbf{(a)} In this network the sender S and the receiver R are color-avoiding connected (CAC), as the green path avoids black and white nodes, and the purple path avoids blue nodes. \textbf{(b)} \textbf{Finding $\mathcal{L}_{\rm color}$, the largest color-avoiding connected component.} \textbf{(b1)} The largest components without white ($\mathcal{L}_{\bar 1}$), \textbf{(b2)} without blue ($\mathcal{L}_{\bar 2}$) and \textbf{(b3)} without black nodes ($\mathcal{L}_{\bar 3}$) are highlighted in red, in each frame. \textbf{(b4)} Considering all of the nodes which are either in the largest components without each color or connected to them, we arrive at the largest CAC component, $\mathcal{L}_{\rm color}$, the red nodes. Every pair of nodes in $\mathcal{L}_{\rm color}$ is CAC. Note that some nodes are not color-avoiding connected but are necessary to form the color avoiding components. \textbf{(c)} Estimation of the fraction of color-avoiding connected pairs $p_{\rm pair}$ for quenched graphs with different values $S_{\rm color}$. Red squares show Poisson graphs with $N=10^5$ nodes, average degrees ${\bar k}=1.6,\,1.7,\,1.9,\,4.0$ and $C=3$ colors, the green circle shows the AS network with colors representing the countries to which the AS are assigned \cite{peixoto2014hierarchical, CaidaData}. The black line indicates the case where $S_{\rm color}$ accounts for all of the color-avoiding connected nodes. Deviations are only visible for the smallest value shown, with $S_{\rm color}=570$. $p_{\rm pair}$ was approximated with samples of up to $5\times 10^5$ pairs; error bars are smaller than the symbols where not visible. } \label{fig:avoidable_colors} \end{center} \end{figure*} On a non-colored network, if node or link failures occur with a given probability, percolation theory can be used to determine overall connectivity \cite{cohen-book2010,newman-book2010}. Percolation on complex networks has a rich history \cite{boccaletti-physicsreports2006,caldarelli-sfbook2007,cohen-book2010,newman-book2010,achlioptas-science2009}. It has been used to study the resilience of the internet \cite{cohen-2000resilience,cohen-prl2001}, its susceptibility to virus spreading \cite{pastorsatorras-prl2001} and even in probabilistic routing algorithms~\cite{sasson2003probabilistic}. It has also been used to understand word-of-mouth processes in social networks~\cite{goldenberg-physa2000,solomon-physa2000}, and the robustness of many biological networks including neural networks \cite{breskin-prl2006}, metabolic networks \cite{smart-pnas2008} and mitochondrial networks \cite{aon-pnas2004}.
Here we develop a new framework based on percolation theory but not reducible to any previous percolation problems. In this framework, connectivity corresponds to the ability to avoid disjoint vulnerabilities via multiple paths. We begin with an undirected unweighted network $G$ with $N$ nodes and adjacency matrix $A_{ij}$. Every vertex $i$ is assigned a color $c_i\in\{1,2,\dots,C\}$, where $C$ denotes the total number of colors. Faced with the possible vulnerability or insecurity of all nodes of a single color, we seek a set of paths between two nodes such that \textit{no color} is required for \textit{all paths}. In non-colored graphs, a single path provides connectivity and in $k$-core percolation any $k$ paths are sufficient \cite{dorogovtsev-prl2006,goltsev-pre2006}. We now define a pair of nodes as ``color-avoiding connected'' (CAC) if, for every color $c$, there exists a path connecting this pair and \textit{avoiding} all nodes of color $c$. We assume that the source and target themselves are secure, and their colors are not included in the calculation of color-avoiding connectivity. The paths are not necessarily unique: often one path can avoid multiple colors (see Figure~\ref{fig:avoidable_colors}a). However, if $C$ paths cannot avoid all $C$ colors, then the source and target require one of the colors to be connected and adding more paths will not help. Since avoiding disjoint vulnerabilities through multiple paths is a feasible strategy only if a giant CAC component exists, we do not address optimal path problems but rather focus on the properties of CAC components. Formally, we define a ``color-avoiding connected component'' as a maximal set of nodes, where every node pair in the set is color-avoiding connected. Several examples of CAC components are shown in Fig. \ref{fig:avoidable_colors}b and Supp. Fig. 1. Note that there are nodes which are not themselves part of the CAC component but are necessary for the color-avoiding connectivity of nodes which are in the component. This occurs, for example, when all of the neighbors which lead from a node to the CAC component are of the same color. In such a case, the node itself is not CAC to the system as a whole because it must pass through nodes of a certain color before it can reach elsewhere. However, in general, this node will still be necessary to form paths which avoid other colors. The fact that non-CAC nodes may be needed to create overall system color-avoiding connectivity is one indication that a new kind of percolation theory is needed to uncover this hidden structure. By studying the largest CAC component, we obtain a clear quantitative measure of the feasibility of multiple paths to avoid disjoint vulnerabilities and information on where those paths should be routed. Furthermore, this gives us a way to measure the effect of changes in network topology, link density and color distribution. To find the largest set of color-avoiding connected nodes in any network with any color distribution, we propose the following algorithm. First, for every color $c$, we delete all nodes with color $c$ and find the largest component in the remaining graph, $\mathcal{L}_{\bar c}$. Next, we define $\mathcal{L}_{\rm color}$ as the set of nodes which, for every color $c$, are either (a) in $\mathcal{L}_{\bar c}$ or (b) have at least one link to it. Condition (b) represents the assumption that the color of the source and target are not included in the calculation.
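For concreteness, a minimal sketch of this algorithm in Python with networkx follows; the function and variable names are ours, not the paper's, and ties between equally large components are broken arbitrarily:
\begin{verbatim}
import random
import networkx as nx

def largest_cac_component(G, color):
    """L_color: nodes that, for every color c, are in the largest
    component of G minus color-c nodes (a), or adjacent to it (b)."""
    candidates = set(G.nodes())
    for c in set(color.values()):
        H = G.subgraph([n for n in G if color[n] != c])
        comps = list(nx.connected_components(H))
        L_bar_c = max(comps, key=len) if comps else set()
        # condition (a): in L_bar_c; condition (b): has a link into it
        ok = L_bar_c | {n for n in G if any(m in L_bar_c for m in G[n])}
        candidates &= ok
    return candidates

# Usage on an Erdos-Renyi graph with C = 3 uniformly random colors:
G = nx.gnp_random_graph(10_000, 4.0 / 10_000, seed=1)
rng = random.Random(0)
color = {n: rng.randrange(3) for n in G}
print(len(largest_cac_component(G, color)) / G.number_of_nodes())
\end{verbatim}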
If we only used condition (a), the calculation of $\mathcal{L}_{\rm color}$ from $\{\mathcal{L}_{\bar c}\}$ would be equivalent to the calculation of the mutual giant component in interdependent \cite{buldyrev-nature2010} or multiplex networks \cite{baxter-prl2012,boccaletti-physicsreports2014} and the result would always be an empty set because every node has some color $c'$ and is therefore not a member of $\mathcal{L}_{\bar c'}$. In Figure~\ref{fig:avoidable_colors}b, we illustrate this method and further technical details are discussed in Supp. Sec. 1.A. It is possible that $\mathcal{L}_{\rm color}$ does not represent the overall color-avoiding connectivity of the system due to smaller components. However, if $\mathcal{L}_{\rm color}$ scales with system size and the smaller color-avoiding connected components do not, then in the limit of large systems the overall color-avoiding connectivity is determined by $\mathcal{L}_{\rm color}$ just as the overall connectivity is determined by the size of the giant component in non-colored graphs. With $S_{\rm color}$ defined as the fraction of the total nodes which are in $\mathcal{L}_{\rm color}$ and $p_{\rm pair}$ defined as the total fraction of color-avoiding connected pairs among all node pairs, we can test if $\mathcal{L}_{\rm color}$ accounts for the bulk of color-avoiding connectivity. In Figure~\ref{fig:avoidable_colors}c we see that color-avoiding connectivity is indeed dominated by $\mathcal{L}_{\rm color}$ for random and real-world networks. When $S_{\rm color}$ is small, non-giant clusters and the trivial color-avoiding connectivity which accompanies individual links lead to deviations between $p_{\rm pair}$ and $S_{\rm color}$ but these deviations rapidly disappear as the system size increases. This validates the treatment of $\mathcal{L}_{\rm color}$ as a proxy for color-avoiding connectivity. We proceed to develop analytical results based on percolation theory for random networks. \section{Analytic theory for random networks} \begin{figure*}[htb] \begin{center} \begin{minipage}{0.32\textwidth} (a)\\ \includegraphics[width=0.95\textwidth]{S_color_poisson.pdf} \end{minipage} \begin{minipage}{0.32\textwidth} (b)\\ \includegraphics[width=0.95\textwidth]{S_color_sf_vark.pdf} \end{minipage} \begin{minipage}{0.32\textwidth} (c)\\ \includegraphics[width=0.95\textwidth]{S_scaling_3.png} \end{minipage} \caption{\textbf{Size of the giant color avoiding component $S_{\rm color}$ in random networks with uniformly distributed colors.} Dependence of $S_{\rm color}$ on average degree $\bar k $ \textbf{(a)} for Erd\H{o}s-R\'{e}nyi networks and \textbf{(b)} scale-free networks with different numbers of colors. Error bars are shown but barely visible for networks of size $N=10^6$. The blue lines show the corresponding analytical results. For comparison, we include the giant component size of standard percolation $S$ (black solid) and the limiting case of a system with an infinite number of colors, $S_{{\rm color},\infty}$ (black dashed). As mentioned in the text, $S_{{\rm color},\infty}$ is the same as the giant component in 2-core percolation. \textbf{(c)}: Critical exponent and finite size scaling for Erd\H{o}s-R\'{e}nyi networks with $C=3$. Note that in the critical region the theory and simulations show a slope of almost exactly 3 as predicted by Eq. \ref{eq:critical_params}. Finite size scaling is shown with the results of $>150$ realizations per size plotted individually and averaged.
} \label{fig:analytics} \end{center} \end{figure*} For the analytical treatment we use the annealed approximation of networks of size $N$ described through the configuration model \cite{newman-book2010}, in which a degree distribution $p(k)$ is a conserved quantity from which an ensemble of network realizations is drawn. For a more comprehensive treatment see the supplementary information. Every node $i$ is assigned a color $c_i\in \{1,2,\dots,C\}$. The analytic framework presented here assumes that the colors are distributed uniformly at random. Hence, the color sequence $\{c_i\}$ has probability $\prod_i r_{c_i}$ with the color frequencies $r_c$. We calculate $S_{\rm color}$ in the limit of $N\to \infty$ as the probability that a single node belongs to $\mathcal{L}_{\rm color}$. Because $\mathcal{L}_{\rm color}$ is a subset of the regular giant component by construction, we begin by obtaining the solution for standard percolation on random graphs \cite{erd-1959random,newman-2001random,newman-book2010}. The size of the giant component in a non-colored random graph is $S = 1 - g_0(u)$ where $g_0(z)=\sum p_k z^k$ is the generating function of the probability distribution $p_k$. $u$ is the probability that a node is not connected to the giant component over one particular link and is computed as the solution of $u=g_1(u)$, where $g_1(z)=g_0'(z)/g_0'(1)$ is the generating function of excess degree \cite{newman-book2010}. Second, we let $\kappa_c$ be the number of a randomly chosen node's neighbors of color $c$ which are connected to the giant component of standard percolation. Considering $\kappa_c$ for all colors, we obtain the vector ${\vec \kappa}=(\kappa_1,\dots,\kappa_C)$ with $k'=\sum_c \kappa_c$ being the total number of links to the normal giant component. Third, the conditional probability $P_{\vec \kappa}$ that the links suffice to connect to $\mathcal{L}_{\rm color}$, given that they belong to distribution $\vec{\kappa}$ and that they already belong to the normal giant component, is: \begin{align} \label{eq:pveckappa} P_{\vec \kappa} &= \prod_{c=1}^{C}\left(1 - U_{\bar c}^{k' - \kappa_c}\right),\\ U_{\bar c} &= 1 - \frac{1-u_{\bar c}}{(1-u)(1-r_c)},\label{eq:U_c} \end{align} in which $U_{\bar c}$ denotes the conditional probability that a link fails to connect to $\mathcal{L}_{\bar c}$ given that it does connect to the normal giant component via a node having a color $c'\neq c$. We define $U_{\bar c}=1$ if $u=1$. The probability $u_{\bar c}$ that a single link does not connect to a giant $\mathcal{L}_{\bar c}$ is calculated with $u_{\bar c} = r_c + (1-r_c) g_1(u_{\bar c})$ (site percolation with a surviving fraction of nodes of $1-r_c$ \cite{newman-book2010}). Combining these terms, we obtain a formula for $S_{\rm color}$: \begin{equation} S_{\rm color} = \sum_{k=0}^{\infty}p_k \sum_{k'=0}^{k} B_{k,k'} \sum_{\kappa_1,\dots, \kappa_C=0}^{k'} M_{k',\vec \kappa} P_{\vec \kappa},\label{eq:s_color} \end{equation} where the binomial factor $B_{k,k'}$ (Supp. Eq. S7) accounts for the probability that out of $k$ links $k'$ links connect to the normal giant component. The multinomial factor $M_{k',\vec \kappa}$ (Supp. Eq. S8) gives the multinomial probability of having the color distribution ${\vec \kappa}$ among the neighbors belonging to the normal giant component. To obtain a closed-form solution for $S_{\rm color}$, we now assume that every color occurs with equal probability: $r_c = 1/C$. With $U_{\bar 1}=U_{\bar c}$ being identical for all colors we have (Supp. Eq.
S20): \begin{align} S_{{\rm color},C} &= \sum_{j=0}^C (-1)^j {C \choose j} \times\nonumber \\ &\, \times g_0\left\{u+(1-u)\left[\frac{j}{C}U_{\bar 1}^{j-1} + \frac{C-j}{C}U_{\bar 1}^{j}\right]\right\}. \end{align} We now discuss the limiting cases $C=2$ and $C\to \infty$. The result for two colors can be simplified to (Supp. Eq. S17) \begin{align} S_{{\rm color},2} &=1-2 g_0(u_{\bar 1})+g_0(2 u_{\bar 1}-1) \end{align} which directly depends on $u_{\bar 1}$ only. As the number of colors tends to infinity, standard percolation \textit{is not} recovered; $S_{\rm color}$ remains smaller than the relative size of the giant component $S$, and in fact $S_{{\rm color},\infty}$ is identical to the giant component in $k$-core percolation with $k=2$ \cite{dorogovtsev-prl2006,goltsev-pre2006}. The reason that $S_{{\rm color},\infty}$ is equivalent to 2-core percolation is that--even if every node is a different color--if a node were connected via only one link, it would not be able to avoid the color of its sole neighbor. We demonstrate this directly by deriving an asymptotic form for $S_{\rm color}$ as $C\rightarrow\infty$ (Supp. Eq. S23): \begin{equation}\label{eq:scolinf} S_{{\rm color},\infty} = S - \left. (1-u)\frac{dg_0(z)}{dz}\right| _{z=u} \end{equation} which is the same result as in 2-core percolation. In Fig. \ref{fig:analytics}a we see that $S_{{\rm color},C}$ comes close to $S_{{\rm color},\infty}$ even for $C=10$, indicating that even moderate color diversity approaches the infinite-color case. We now discuss graphs with broad degree distributions with $p_k\sim k^{-\alpha}$ ($k>0$) and generating functions $g_0(z)={\rm Li}_{\alpha}(z)/\zeta(\alpha)$ and $g_1(z)={\rm Li}_{\alpha-1}(z)/[z\zeta(\alpha-1)]$, with ${\rm Li}_{\alpha}(z)$ the polylogarithm function. In Figure~\ref{fig:analytics}b we see results for $C=2$ and $C=10$ depending on the average degree $\bar k = \zeta(\alpha-1)/\zeta(\alpha)$ \cite{newman-book2010}. The limiting cases are diverging $\bar k $ for $\alpha=2$ and $\bar k =1$ for $\alpha\to\infty$. We see that $\bar k _{\rm crit}$ is not strongly affected by the number of colors but that the size of the giant CAC component is substantially smaller than in the case of Erd\H{o}s-R\'{e}nyi networks (see Figure \ref{fig:analytics}a-b). The critical connectivity can be calculated using Cohen's criterion for site percolation~\cite{cohen-2000resilience}. With the fraction $1-r_c$ of nodes surviving random removal, we obtain $1-r_c=1-1/C=\bar k /(\left<k^2\right>-\bar k )$. Since $\left<k^2\right> = \zeta(\alpha-2)/\zeta(\alpha)$, we have $\zeta(\alpha-2)/\zeta(\alpha-1)=1+C/(C-1)$. Accordingly $\bar k \approx1.254$ for two colors, and it converges to $\bar k \approx1.195$ for $C\to \infty$. We find that Erd\H{o}s-R\'{e}nyi~networks are more color-avoiding connected than scale-free networks of equal average degree, the opposite of the results for resilience to random failures \cite{albert2000error,cohen-prl2001,newman-book2010}. This follows from the difference in the 2-core envelopes; compare Figs. \ref{fig:analytics}a and \ref{fig:analytics}b.
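For readers wishing to reproduce the Erd\H{o}s-R\'{e}nyi curves in Fig.~\ref{fig:analytics}a, the following numeric sketch (Python; our own illustration, not the authors' code) solves the self-consistency equations by fixed-point iteration and evaluates the closed form above with $g_0(z)=g_1(z)=e^{\bar k (z-1)}$ and uniform colors $r_c=1/C$:
\begin{verbatim}
from math import exp, comb

def fixed_point(f, x0=0.0, tol=1e-12, iters=100_000):
    x = x0
    for _ in range(iters):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def s_color_er(kbar, C):
    g0 = lambda z: exp(kbar * (z - 1.0))     # g0 = g1 for Poisson degrees
    u = fixed_point(g0)                      # u = g1(u), nontrivial root
    r = 1.0 / C
    ubar = fixed_point(lambda x: r + (1 - r) * g0(x))   # u_bar_c
    if u >= 1.0 - 1e-9:
        return 0.0                           # no giant component at all
    U = 1.0 - (1.0 - ubar) / ((1.0 - u) * (1.0 - r))    # Eq. (2)
    total = 0.0
    for j in range(C + 1):
        bracket = 1.0 if j == 0 else (j / C) * U**(j - 1) + ((C - j) / C) * U**j
        total += (-1)**j * comb(C, j) * g0(u + (1.0 - u) * bracket)
    return total

for kbar in (1.2, 1.6, 2.0, 3.0, 5.0):       # k_crit = C/(C-1) = 1.5 for C = 3
    print(kbar, round(s_color_er(kbar, C=3), 4))
\end{verbatim}
Below $\bar k_{\rm crit}=3/2$ the alternating sum cancels to zero, while above it a giant color-avoiding component appears.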
\begin{figure*} \centering \begin{minipage}{0.37\textwidth}\centering \includegraphics[width=\textwidth,trim=30 15 20 75,clip=true]{AS_Spain.pdf} (a)\\ \end{minipage} \begin{minipage}{0.37\textwidth}\centering \includegraphics[width=\textwidth,trim=30 15 20 75,clip=true]{AS_Lcolor_Spain.pdf} (b)\\ \end{minipage} \begin{minipage}{0.23\textwidth}\centering \includegraphics[width=\textwidth]{AS_top20.pdf} (c)\\ \end{minipage} \caption{\textbf{Color-avoiding connectivity of the AS-level internet.} \textbf{(a)} Here we show the routers of the AS-level internet in the Iberian peninsula as a disjointly vulnerable network with the colors determined by the country to which the router is registered. \textbf{(b)} Using these colors, we calculate the largest color-avoiding connected component. The green nodes are members of this set while the red are not. This means that these routers can take advantage of multiple paths to maintain security, as described in the main text. \textbf{(c)} This shows the number of CAC routers (nodes in $L_{\rm color}$) compared to total number of routers for the top 20 countries worldwide, in terms of total number of AS routers registered to that country. Data for the US has been truncated for visibility; the total number of AS routers is 17690. We use a symmetrized version of the network of~\cite{peixoto2014hierarchical} which was generated using data from the CAIDA project~\cite{CaidaData} up to December 2013. } \label{fig:AS_data} \end{figure*} \section{Critical phenomena} We now turn to the critical behavior of $S_{\rm color}$ in Erd\H{o}s-R\'{e}nyi-graphs with $C$ uniformly distributed colors. Similar to standard percolation, we find that the size of the largest color-avoiding connected component $S_{\rm color}$ undergoes a phase transition at a specific $\bar k _{\rm crit}$, which is now determined by the number of colors (see Figure~\ref{fig:analytics}). For $\bar k < \bar k _{\rm crit}$, color-avoiding connectivity is confined to clusters of finite size (zero in the limit of large $N$) and for $\bar k > \bar k _{\rm crit}$ there is a largest color-avoiding connected component $S_{\rm color}$ which scales with system size. We find that the value of $\bar k _{\rm crit}$ decreases as $C$ increases and approaches the standard percolation threshold as $C\rightarrow\infty$. Since color-avoiding connectivity requires that the giant component not be destroyed after the removal of any single color, we require that $\bar k _{crit}^{ER} = \bar k _{crit} \frac{C-1}{C}$ where $\bar k _{crit}^{ER}=1$ is the percolation threshold for ER graphs and $\frac{C-1}{C}$ is the fraction of links remaining after the removal of $1/C$ nodes. Therefore $\bar k _{\rm crit} = C/(C-1)$. To discuss the scaling and critical exponents, we return to the definition of $P_{\vec\kappa}$, Eq. \ref{eq:pveckappa}. We consider the region close to but above $\bar k _{\rm crit}$ by defining $\varepsilon\equiv 1-U_{\bar 1}\approx C (\bar k -\bar k _{\rm crit})$ which holds as long as $(\bar k -\bar k _{\rm crit})\ll 1/C$ (Supp. Eq. S27). We analyze the behavior of $P_{\vec \kappa}$ for small $\varepsilon$ by expanding $(1-(U_{\bar 1})^{k'-\kappa_c})\approx (k'-\kappa_c)\varepsilon$. Plugging this approximation into Eqs. \ref{eq:pveckappa} and \ref{eq:s_color} we obtain: \begin{align} \label{eq:scaling_relation}S_{\rm color} &\propto ({\bar k}-{\bar k}_{\rm crit})^{\beta}\\ \label{eq:critical_params}\beta&=C,\quad {\bar k}_{\rm crit} = C/(C-1).
\end{align} We confirm the value of $\bar k _{\rm crit}$ and the scaling of $S_{\rm color}$ numerically in Figure \ref{fig:analytics}c for $C=3$ colors. As $C\rightarrow\infty$, we need to resolve the seeming contradiction of a divergent critical exponent $\beta=C$ and convergence towards $S_{{\rm color},\infty}$ as it appears in Eq. \ref{eq:scolinf}. For ER networks we show (Supp. Eq. S31) that $S_{\rm color,\infty}\propto ({\bar k}-1)^2$ for $\bar k $ near 1, implying $\beta=2$. The reason that we do not observe $\beta\to \infty$ as described in Eq. \ref{eq:critical_params} is that the approximation used to obtain Eq. \ref{eq:scaling_relation} is only valid in a critical region defined as $(\bar k -\bar k _{\rm crit})\ll 1/C$. As $C\rightarrow\infty$, $S_{\rm color}$ increases with the high exponent $\beta=C$. However, the shrinking critical region overpowers the diverging critical exponent: $S_{\rm color}$ takes on unobservably small values inside the critical region and crosses over to $\beta=2$ scaling outside it. \section{Applications} One immediate application of our framework is to secure communication in a network with no trusted nodes. Assuming $C$ router owners, each of whom eavesdrops on its routers' traffic, we can securely communicate if messages are split with a \textit{secret sharing} protocol \cite{blakley1899safeguarding,shamir1979share,dolev-acm1993} and transmitted along multiple color-avoiding paths. The nodes which can take advantage of this method are exactly the elements of the largest CAC component. To study the hidden CAC structure of the internet, we use a symmetrized version of the AS-level internet prepared by \cite{peixoto2014hierarchical} which was generated using data from the CAIDA project~\cite{CaidaData} up to December 2013. We then color every router according to the country to which the router is registered, reflecting the assumption that every country is eavesdropping on its traffic but that no countries share information (Fig. \ref{fig:AS_data}a). Using the algorithm for finding the largest CAC component, we can determine which nodes are color-avoiding connectable and which are not (Fig. \ref{fig:AS_data}b). We find that overall $26228$ out of $49743$ ($\approx52.73\%$) of the routers are in the largest CAC component and that this accounts for the vast majority of CAC connected nodes (Fig. \ref{fig:avoidable_colors}c). However, we also find that these results vary greatly from country to country. For instance, only $25\%$ of the routers registered to the United States are in the largest CAC component compared to $89\%$ of routers registered to Russia (Fig. \ref{fig:AS_data}). This is partially due to the density of routers in the US, which is much higher than in Russia, and indicates that US eavesdroppers have far greater capacity to intercept communication than their Russian counterparts. In economic trade networks, it is common that a single firm controls many others \cite{vitali-plosone2011} but each firm is controlled by only one owner. The vulnerability to correlated failures or malicious activities can undermine the overall system robustness if they are sufficient to disrupt the global color-avoiding connectivity. We thus add color-avoiding connectivity to the concerns regarding systemic risk and government regulation of mergers and acquisitions \cite{battiston-sreps2012,tessone-jstatphys2013}. In epidemiology, many diseases spread via different strains, and individuals may become immune after recovery \cite{masuda-jtheoretbio2006}.
Coloring nodes by strain, color-avoiding percolation can be used to evaluate the population's susceptibility to a multi-strain infection. \section{Discussion} We have presented here the first systematic study of disjoint vulnerabilities in complex networks and a way to maintain network robustness by utilizing multiple paths. We have shown that even a small diversity of colors can enable color-avoiding connectivity to a large fraction of nodes in a random network but that in real-world networks, uneven distribution of vulnerabilities can undermine this effect. The framework and metrics uncover a hidden structure that underlies any complex network with nodes that can be partitioned by their susceptibility to an external threat and can be used to devise new network design principles and protocols for improving robustness through redundancy. \section*{Author Contributions} All authors contributed to the idea, discussion of results and writing of the paper. S.K. and M.D. performed simulations and S.K. developed the analytical treatment. \begin{acknowledgments} We acknowledge the MULTIPLEX (No. 317532) EU project. M.D. thanks Alan Danziger for first suggesting router software versions as a percolation problem. We also express gratitude to Shlomo Havlin, Damir Vuki\v{c}evi\'c, Marko Popovi\'c, Hrvoje \v{S}tefan\v{c}i\'c and Damir Koran\v{c}i\'{c} for helpful comments in the preparation of this manuscript. \end{acknowledgments} \onecolumngrid \section*{List of variables} {\centering \begin{tabular}{ c c } \hline \multicolumn{2}{ c } {Networks}\\ \hline $N$ & Number of nodes \\ $\bar k$ & Average degree \\ $k_i$ & Degree of node $i$ \\ $p_k$ & Degree distribution \\ $\alpha$ & Exponent of scale free degree distribution \\ $g_0$ & Generating function of degree \\ $g_1$ & Generating function of excess degree \\ \hline \multicolumn{2}{ c } {Colors}\\ \hline $C$ & Number of colors \\ $c\in 1,2,\dots C$ & A color \\ $r_c$ & Color distribution \\ \hline \multicolumn{2}{ c } {Standard percolation ingredients}\\ \hline ${\mathcal L}$ & Set of nodes in the largest component (color blind) \\ $u$ & Prob.\ of not being connected to giant comp.\ over a link \\ $S$ & Size of giant component \\ ${\mathcal L}_{\bar c}$ & Set of nodes in the largest component, after nodes of color c deleted\\ $u_{\bar c}$ & Prob.\ of not being connected to giant ${\mathcal L}_{\bar c}$ over a link \\ $S_{\bar c}$ & Size of giant ${\mathcal L}_{\bar c}$ \\ \hline \multicolumn{2}{ c } {Percolation over color avoiding paths}\\ \hline ${\mathcal L}_{\rm color}$ & Candidate set of nodes for the largest avoidable colors component\\ $S_{\rm color}$ & Size of giant ${\mathcal L}_{\rm color}$ \\ $B_{k,k'}$ & Prob.\ that out of $k$ links $k'$ connect to giant component \\ $M_{k',\vec \kappa}$ & Prob.\ that out of $k'$ links $\kappa_1$ connect to color 1 etc. \\ $P_{\vec \kappa}$ & Success probability having neighbors of colors acc.
to $\vec \kappa$ \\ $U_{\bar c}$ & Prob.\ that a link fails connecting to ${\mathcal L}_{\bar c}$, given that it connects to ${\mathcal L}$ via a node not having color $c$\\ $S_{{\rm color},\infty}$ & Size of the set of all nodes being connected to giant component over two links or more \\ \hline \hline $\beta$ & Critical exponent \\ ${\bar k}_{\rm crit}$ & Critical value of average degree \\ \end{tabular} } \pagebreak \section{Size of giant avoidable colors component in the configuration model} We can find analytical results for $S_{\rm color}$ for random graph ensembles with randomly distributed colors in the limit of infinite graphs. These results can be used to estimate the situation in finite quenched networks, provide a general understanding including phase transitions, and can guide our understanding of real world networks. We use the generalized configuration model graph ensemble with $N$ nodes, where each degree sequence $\{k_i\}$ occurs with probability $\prod_i p_{k_i}$, with the degree distribution $p_{k}$. Additionally, we assign to every node $i$ a color $c_i\in 1,2,\dots,C$. The color sequence $\{c_i\}$ has probability $\prod_i r_{c_i}$ with the color distribution $r_c$. For a graph $G_N$ out of the graph ensemble, $\mathcal{L}_{\rm color}$ has a certain size $N_{\rm color}(G_N)$. For the whole graph ensemble, we have to use the average value. By considering only giant contributions growing with network size, we have \begin{align} S_{\rm color} &= \lim_{N\to \infty}\sum_{G_N} P(G_N) \frac{N_{\rm color}(G_N)}{N}, \tag{S\theequation}\stepcounter{equation} \end{align} where $P(G_N)=\prod_i p_{k_i} \omega \prod_i r_{c_i}$ is the probability to have the graph $G_N$ of size $N$, including $\omega$, the probability of the connection scheme of $G_N$ as a matching of half edges. \subsection*{On the construction and maximality of $\mathcal{L}_{\rm color}$} By construction, every node pair in $\mathcal{L}_{\rm color}$ is color-avoiding connected. Furthermore, if for every color $c$, $\mathcal{L}_{\rm color}$ includes at least one node out of $\mathcal{L}_{\bar c}$, it is maximal and therefore it is a color-avoiding connected component. To prove this, assume it was not maximal. Then a node can be added which is (a) connected to every node in $\mathcal{L}_{\rm color}$ and (b) excluded from $\mathcal{L}_{\rm color}$; it does not connect to $\mathcal{L}_{\bar c'}$ for some color $c'$. Consequently, it cannot connect to the nodes in $\mathcal{L}_{\rm color}$ which belong to $\mathcal{L}_{\bar c'}$, which contradicts (a). \begin{figure} \centering \begin{minipage}{0.18\columnwidth}\centering (a)\\ \includegraphics[width=1.0\columnwidth]{graph_1.pdf} \end{minipage} \begin{minipage}{0.18\columnwidth}\centering (b)\\ \includegraphics[width=1.0\columnwidth]{graph_3.pdf} \end{minipage} \begin{minipage}{0.18\columnwidth}\centering (c)\\ \includegraphics[width=1.0\columnwidth]{graph_2.pdf} \end{minipage} \begin{minipage}{0.18\columnwidth}\centering (d)\\ \includegraphics[width=1.0\columnwidth]{graph_4.pdf} \end{minipage} \begin{minipage}{0.18\columnwidth}\centering (e)\\ \includegraphics[width=1.0\columnwidth]{graph_5.pdf} \end{minipage} \caption{ Color avoiding components may overlap, as shown in \textbf{(b)} and \textbf{(c)}. Color avoiding components can assume diverse forms.
In a chain \textbf{(a)}, paths between nodes of one color exist and can be reached by connections between nodes of different colors. In \textbf{(b)}, the black node serves as an alternative path provider for the blue nodes. The graph \textbf{(d)} does not need any connection among nodes of the same color, but there is a massive overhead of nodes and connections to achieve color-avoiding connectivity of the blue nodes. A clique is a color avoiding component \textbf{(e)}.} \label{fig:my_label} \end{figure} \subsection{Question and connection to percolation theory} For calculating $S_{\rm color}$ in the random graph ensemble, we will follow ideas of Erd\H{o}s and R\'{e}nyi~\cite{erd-1959random} and Newman~\cite{newman-2001random}. For calculating the size of the giant component, they used probabilities of connections for a single node in the graph ensemble. As we have to extend the method to a gradual procedure with conditional probabilities, it is useful to introduce the original method in detail with a shifted viewpoint. \begin{figure}[htb] \begin{minipage}[b]{0.18\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_gc_all1.pdf} \end{center} \end{minipage} \begin{minipage}[b]{0.05\linewidth} \begin{center} {\large $\xrightarrow{u}$}\\ \vspace{15mm} \end{center} \end{minipage} \begin{minipage}[b]{0.18\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_gc_no1.pdf} \end{center} \end{minipage} \begin{minipage}[b]{0.05\linewidth} \ \end{minipage} \begin{minipage}[b]{0.18\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_gc_allk.pdf} \end{center} \end{minipage} \begin{minipage}[b]{0.07\linewidth} \begin{center} {\large $\xrightarrow{1-u^k}$}\\ \vspace{15mm} \end{center} \end{minipage} \begin{minipage}[b]{0.18\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_gc_gck.pdf} \end{center} \end{minipage} \caption{We base our theory on the method to calculate the size of normal giant components, as illustrated in this figure. Using a self consistency equation, the probability $u$ can be calculated. This is the probability that a node is not connected to the giant component over a single link (see on the left). On the right, we illustrate the probability for a node with $k$ links to have at least one link connecting to the giant component. $u^k$ is the probability that all links fail.} \label{fig:gc} \end{figure} Let us denote by $\mathcal{L}$ the set of all nodes belonging to the largest component. In figure~\ref{fig:gc} on the outer left, a possible situation is illustrated. The largest component contains a large part of the network, and the remaining nodes belong to smaller components. We have to calculate the size $S$ of the giant component, meaning the average relative size of $\mathcal{L}$ in the network ensemble in the limit of infinite network size. For this we can define the average probability $u$ that a node fails to connect to $\mathcal{L}$ over one particular link. This is illustrated in the left part of the figure. Again, the thermodynamic limit $N\to \infty$ is implied. With the definition of $u$ at hand, we can calculate $S$ in two steps: First, using a self consistency equation, $u$ is calculated.
The probability $u$ is identical to the probability that the neighbor does not connect to the giant component over any of the remaining links, \begin{align} u &= g_1(u),\quad g_1(z)=\sum_k q_k z^k.\label{eq:u} \tag{S\theequation}\stepcounter{equation} \end{align} In this equation, $g_1$ is the generating function of the excess degree $q_k=(k+1)p_{k+1}/\bar{k}$. For important degree distributions such as Poisson or scale-free, the equation for $u$ can only be solved numerically. The second step is an averaging over nodes with various degrees $k$. The probability to connect to the giant component over any of $k$ links is $(1-u^k)$, meaning that not all links fail at the same time. This is illustrated in the figure on the right. As a node which connects to the giant component belongs to it, \begin{align} S &= \sum_{k=0}^{\infty}p_k (1-u^k) = 1-g_0(u),\quad g_0(z)=\sum_k p_k z^k.\label{eq:S} \tag{S\theequation}\stepcounter{equation} \end{align} \begin{figure}[htb] \begin{minipage}[b]{0.245\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_k_all.pdf} \end{center} \end{minipage} \begin{minipage}[b]{0.1\linewidth} \begin{center} {\Large $\xrightarrow{?}$}\\ \vspace{20mm} \end{center} \end{minipage} \begin{minipage}[b]{0.6\linewidth} \begin{center} \includegraphics[height=0.4\columnwidth]{sets_k_gc_no_2.pdf} \hspace{-1mm} \includegraphics[trim=100 0 0 0,clip,height=0.4\columnwidth]{sets_k_gc_no_3.pdf} \hspace{-1mm} \includegraphics[trim=100 0 0 0,clip,height=0.4\columnwidth]{sets_k_gc_no_1.pdf} \end{center} \end{minipage} \caption{We have to calculate the probability that a node with $k$ links is, for every color $c$, connected to the giant component $\mathcal{L}_{\bar c}$ obtained after deleting color $c$. All connections over at least one link have to exist at the same time. We illustrate this question with the three colors red ($c=$r), green ($c=$g) and blue ($c=$b). If a link connects to $\mathcal{L}_{\bar{\rm g}}$, it certainly does not connect to $\mathcal{L}_{\bar c}$ for one of the other colors. This kind of dependence forces us to use a stepwise calculation with conditional probabilities.} \label{fig:question} \end{figure} In analogy to the procedure described above, we will calculate $S_{\rm color}$ as the probability that a randomly chosen node belongs to $\mathcal{L}_{\rm color}$. This has to be evaluated in the graph ensemble of infinite size. As we will perform an averaging over nodes with various degrees $k$, the following question has to be answered: What is the probability that a node with $k$ links connects to a giant $\mathcal{L}_{\bar c}$ for all colors $c$ at the same time? This is illustrated in figure~\ref{fig:question}. On the left, the situation for a graph with colors on the nodes is illustrated. Nodes of all colors might be in the largest component. After deleting all nodes of one color $c$, the remaining largest component $\mathcal{L}_{\bar c}$ might still contain a large part of all nodes in $\mathcal{L}$. The condition for the node belonging to $\mathcal{L}_{\rm color}$ is illustrated on the right of the figure. We will use the same two steps to attack this problem as described for calculating the giant component above. First, we provide some single link probabilities which can be used as primitives for the further calculations. Second, we combine the single link probabilities to calculate $S_{\rm color}$.
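As a concrete numerical illustration (a minimal sketch assuming a Poisson degree distribution; this is not code accompanying this work, and all names and parameter values are our own choices), the two steps can be carried out with a simple fixed-point iteration:
\begin{verbatim}
import numpy as np

# Minimal sketch: solve u = g1(u) by fixed-point iteration for a Poisson
# degree distribution, where g0(z) = g1(z) = exp(kbar*(z - 1)), and then
# evaluate the giant component size S = 1 - g0(u).

def giant_component_size(kbar, tol=1e-12, max_iter=100000):
    g = lambda z: np.exp(kbar * (z - 1.0))  # g0 = g1 for Poisson graphs
    u = 0.5   # start inside (0, 1); u = 1 is always a (trivial) fixed point
    for _ in range(max_iter):
        u_new = g(u)
        if abs(u_new - u) < tol:
            break
        u = u_new
    return 1.0 - g(u)   # S = 1 - g0(u)

print(giant_component_size(2.0))   # ~0.797; for Poisson graphs S = 1 - u
\end{verbatim}
For $\bar k \leq 1$ the iteration converges to the trivial fixed point $u=1$, so that $S=0$, reproducing the familiar percolation threshold of Poisson (Erd\H{o}s-R\'{e}nyi) graphs.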
\subsection{Single link probabilities} \begin{figure}[htb] \begin{center} \includegraphics[width=0.2\columnwidth]{sets_1_all.pdf}\qquad \includegraphics[width=0.2\columnwidth]{sets_1_all.pdf}\qquad \includegraphics[width=0.2\columnwidth]{sets_1_all.pdf}\qquad \includegraphics[width=0.2\columnwidth]{sets_1_no_2_gc.pdf}\\ \vspace{1mm} { \hspace{4mm} $\downarrow u$\hspace{36mm}$\downarrow r_{\rm g}$ \hspace{36mm}$\downarrow u_{\bar{\rm g}}$ \hspace{36mm}$\downarrow U_{\bar{\rm g}}$}\\ \vspace{1mm} \includegraphics[width=0.2\columnwidth]{sets_1_no_gc.pdf}\qquad \includegraphics[width=0.2\columnwidth]{sets_1_c2.pdf}\qquad \includegraphics[width=0.2\columnwidth]{sets_1_gc_no_gc_no_2.pdf}\qquad \includegraphics[width=0.2\columnwidth]{sets_1_no_2_gc_no.pdf} \end{center} \caption{Probabilities for a single link to connect to different parts of the network. We use these probabilities as primitives to calculate the probability for many links. While $u$, $r_c$ and $u_{\bar c}$ can be calculated with standard methods previously developed for the configuration model, the conditional probability $U_{\bar c}$ can be calculated as a combination of the others.} \label{fig:primitives} \end{figure} We already gave equation~\ref{eq:u} for calculating the probability $u$. In the case of colors on the nodes, as illustrated in figure~\ref{fig:primitives} on the left, the colors can simply be ignored. We further need the probability to connect to a node of color $c$, which is simply $r_c$. This is illustrated in the second column of the figure. We further introduce $u_{\bar c}$, the probability that a single link does not connect to a giant $\mathcal{L}_{\bar c}$. See the third column of the figure for an illustration. This can be calculated using percolation theory for random attack by solving \begin{align} u_{\bar c} &= r_c + (1-r_c) g_1(u_{\bar c}).\label{eq:u_c} \tag{S\theequation}\stepcounter{equation} \end{align} \begin{figure}[htb] \begin{minipage}[b]{0.2\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_1_all.pdf}\\ \vspace{42mm} \end{center} \end{minipage} \begin{minipage}[b]{0.15\linewidth} \begin{center} $(1-u)(1-r_{\rm g})$\\ {\large $\rightarrow$\\} \vspace{20mm} $1-u_{\bar{\rm g}}$\\ {\large $\searrow$\\} \vspace{27mm} \end{center} \end{minipage} \begin{minipage}[b]{0.2\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_1_no_2_gc.pdf}\\ \vspace{1mm} {\large $\downarrow$} $1-U_{\bar{\rm g}}$\\ \vspace{1mm} \includegraphics[width=0.99\columnwidth]{sets_1_gc_no_2.pdf}\\ \end{center} \end{minipage} \caption{This figure illustrates the calculation of $U_{\bar{\rm g}}$ using the equality $(1-u)(1-r_{\rm g})(1-U_{\bar{\rm g}})=1-u_{\bar{\rm g}}$. For that, we have assumed independence of the qualities of the link under consideration, in particular of the color of the node it connects to and of whether it connects to the giant component. } \label{fig:U_c} \end{figure} Unfortunately, $u_{\bar c}$ cannot be used directly for calculating $S_{\rm color}$. For a given link, the probabilities $u_{\bar c}$ for different colors are mutually dependent. The most obvious argument is that always $\Pi_c (1-u_{\bar c})=0$, as a link must miss at least one of the $\mathcal{L}_{\bar c}$. Instead, we will use the conditional probability $U_{\bar c}$, as illustrated in the outer right column of the figure. The precondition is that a link connects to the giant component and the node it connects to does not have color $c$. $U_{\bar c}$ is the probability that such a link connects to $\mathcal{L}_{\bar c}$.
For calculating it, we use the primitives introduced so far, as illustrated in figure~\ref{fig:U_c}. Assuming independence of the probabilities $(1-u)$ for connecting to the giant component and $(1-r_c)$ for not connecting to a node of color $c$, the precondition of $U_{\bar c}$ can be constructed. In this way, we can construct $(1-u_{\bar c})$ using the probability we are searching for: $(1-u_{\bar c}) = (1-u)(1-r_c)(1-U_{\bar c})$. With this we find \begin{align} U_{\bar c} &= 1 - \frac{1-u_{\bar c}}{(1-u)(1-r_c)}.\label{eq:U_c} \tag{S\theequation}\stepcounter{equation} \end{align} If $(1-u)(1-r_c)=0$, the precondition holds for an empty set of nodes. In this case we define $U_{\bar c}=1$. Notice that the additional information of the explicit color, instead of only stating that the color is not $c$, does not alter the results, as a further restriction of the colors would affect the numerator and denominator identically and therefore would cancel out. \subsection{Averaging over link distributions} \begin{figure}[htb] \begin{minipage}[b]{0.22\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_k_all.pdf} \end{center} \end{minipage} \begin{minipage}[b]{0.1\linewidth} \ \end{minipage} \begin{minipage}[b]{0.54\linewidth} \begin{center} \includegraphics[height=0.4\columnwidth]{sets_k_gc_no_2.pdf} \hspace{-1mm} \includegraphics[trim=100 0 0 0,clip,height=0.4\columnwidth]{sets_k_gc_no_3.pdf} \hspace{-1mm} \includegraphics[trim=100 0 0 0,clip,height=0.4\columnwidth]{sets_k_gc_no_1.pdf} \end{center} \end{minipage}\\ \vspace{2mm} {\large \hspace{-30mm} $\downarrow B_{k,k'}$\hspace{90mm}$\uparrow P_{\vec \kappa}$ }\\ \vspace{2mm} \begin{minipage}[b]{0.22\linewidth} \begin{center} \includegraphics[width=0.99\columnwidth]{sets_k_gc.pdf} \end{center} \end{minipage} \begin{minipage}[b]{0.1\linewidth} \begin{center} {\Large $\xrightarrow{M_{k',\vec{\kappa}}}$}\\ \vspace{20mm} \end{center} \end{minipage} \begin{minipage}[b]{0.54\linewidth} \begin{center} \includegraphics[height=0.4\columnwidth]{sets_k_no_2_gc.pdf} \hspace{-1mm} \includegraphics[trim=100 0 0 0,clip,height=0.4\columnwidth]{sets_k_no_3_gc.pdf} \hspace{-1mm} \includegraphics[trim=100 0 0 0,clip,height=0.4\columnwidth]{sets_k_no_1_gc.pdf} \end{center} \end{minipage} \caption{For calculating the probability of a node with $k$ links to belong to $\mathcal{L}_{\rm color}$, we have to average over the different link constellations which this node might exhibit. First, $B_{k,k'}$ is the probability that out of the $k$ links $k'$ connect to the giant component. It is calculated using $u$ (compare figure~\ref{fig:primitives} on the left). Second, $M_{k',\vec{\kappa}}$ gives the probability for a certain color distribution among the links. It is calculated using $r_{\rm g}$ etc. (compare figure~\ref{fig:primitives}, second from left). We assume that this second step is independent of the first step, which is confirmed by the final results. Third, $P_{\vec \kappa}$ gives the joint probability that for this color distribution $\mathcal{L}_{\bar{\rm r}}$, $\mathcal{L}_{\bar{\rm b}}$ and $\mathcal{L}_{\bar{\rm g}}$ are connected to at the same time. This is calculated using $U_{\bar{\rm r}}$ etc. (compare figure~\ref{fig:primitives} on the right).} \label{fig:stepwise} \end{figure} As in equation~\ref{eq:S} for the giant component, we want to get an analytical result for $S_{\rm color}$ by averaging over possible link constellations of a randomly chosen node.
Let us give the whole result and then explain it step by step afterwards: \begin{align} S_{\rm color} &= \sum_{k=0}^{\infty}p_k \sum_{k'=0}^{k} B_{k,k'} \sum_{\kappa_1,\dots, \kappa_C=0}^{k'} M_{k',\vec \kappa} P_{\vec \kappa},\label{eq:S_color}\tag{S\theequation}\stepcounter{equation}\\ B_{k,k'} &={k \choose k'}(1-u)^{k'}u^{k-k'},\label{eq:B}\tag{S\theequation}\stepcounter{equation}\\ M_{k',\vec \kappa} &=\frac{k'!}{\kappa_1! \times \dots \times \kappa_C!} \, (r_1)^{\kappa_1} \times \dots \times (r_C)^{\kappa_C}\, \delta_{k',\kappa_1+\dots + \kappa_C},\tag{S\theequation}\stepcounter{equation}\\ P_{\vec \kappa} &= \prod_{c=1}^C [1-(U_{\bar c})^{k'- \kappa_c }].\label{eq:p_success}\tag{S\theequation}\stepcounter{equation} \end{align} The formulas include the single link probabilities $r_c$, $u$ (equation~\eqref{eq:u}) and $U_{\bar c}$ (equation~\eqref{eq:U_c}~with~\eqref{eq:u_c}). An illustration of the procedure can be seen in figure~\ref{fig:stepwise}. $B_{k,k'}$ is the binomial probability that out of the $k$ links $k'$ links connect to the giant component. $M_{k',\vec{\kappa}}$ gives the multinomial probability for a certain color distribution among the $k'$ links connecting to the giant component. We assume that this second step is independent of the first step, which is confirmed by the final results. The numbers $\kappa_c$ count the links which connect to a node of color $c$ in the giant component. Finally, $P_{\vec \kappa}$ gives the joint probability that for the color distribution given by $\vec \kappa$ all giant $\mathcal{L}_{\bar c}$ are connected to at the same time. There is at least one link connecting to ${\mathcal L}_{\bar c}$ with probability $1-(U_{\bar c})^{k' - \kappa_c }$. The success probabilities for different colors have to be multiplied, as all ${\mathcal L}_{\bar c}$ have to be reached at the same time. We tested numerically that e.g. $U_{\bar{\rm 1}}$ and $U_{\bar{\rm 2}}$ are independent for a link connecting to the giant component and to a node of a third color. \section{Examination of $S_{\rm color}$} \subsection{Closed form solutions} We now calculate closed form solutions for $S_{\rm color}$ for special cases. This is done to demonstrate how the extensive summations over $k'$, $k$ and $\vec \kappa$ can be performed analytically. In cases where this is not possible, a sampling of values of $\vec \kappa$ has to be performed. The results can be tested against the analytically tractable situations and by comparing with numerical results. The closed form solutions presented here were used to calculate analytical results for the main article as well. For evaluating equation~\ref{eq:S_color} with two colors, we first rewrite \begin{align} \sigma_{k'} &\equiv \sum_{\kappa_1, \kappa_2=0}^{k'} M_{k',\vec \kappa} P_{\vec \kappa}\tag{S\theequation}\stepcounter{equation}\\ &= \sum_{\kappa_1=0}^{k'} {k' \choose \kappa_1}\, (r_1)^{\kappa_1} (r_2)^{k'-\kappa_1}\, [1-(U_{\bar 1})^{k'-\kappa_1 }] [1-(U_{\bar 2})^{\kappa_1 }]\tag{S\theequation}\stepcounter{equation}\\ &= \sum_{\kappa_1=0}^{k'} {k' \choose \kappa_1}\, \left[ (r_1)^{\kappa_1} (r_2)^{k'-\kappa_1}\, - (r_1 U_{\bar 2})^{\kappa_1} (r_2)^{k'-\kappa_1}\,- (r_1)^{\kappa_1} (r_2 U_{\bar 1})^{k'-\kappa_1}\,+ (r_1 U_{\bar 2})^{\kappa_1} (r_2 U_{\bar 1})^{k'-\kappa_1}\right]\tag{S\theequation}\stepcounter{equation}\\ &= 1 - (r_1+r_2 U_{\bar 1})^{k'} - (r_2+r_1 U_{\bar 2})^{k'} + (r_1 U_{\bar 2} + r_2 U_{\bar 1})^{k'}.\tag{S\theequation}\stepcounter{equation} \end{align} In the last step, the binomial formula was used backward.
We can use this procedure once more, and with equation~\ref{eq:U_c} and $r_1+r_2=1$ we find \begin{align} S_{\rm color} &= \sum_k p_k \sum_{k'=0}^{k} B_{k,k'} \sigma_{k'} \tag{S\theequation}\stepcounter{equation}\\ &= \sum_k p_k \sum_{k'=0}^{k} {k \choose k'}u^{k-k'}\, \left[ (1-u)^{k'} - ((1-u)(r_1+r_2U_{\bar 1}))^{k'} - \dots \right]\tag{S\theequation}\stepcounter{equation}\\ &= \sum_k p_k \left[1 - (u_{\bar 1})^k - (u_{\bar 2})^k + (u_{\bar 1}+u_{\bar 2}-1)^k\right]\tag{S\theequation}\stepcounter{equation}\\ &= 1 - g_0(u_{\bar 1}) - g_0(u_{\bar 2}) + g_0(u_{\bar 1}+u_{\bar 2}-1).\label{eq:two_colors}\tag{S\theequation}\stepcounter{equation} \end{align} This result holds for any degree distribution and color distribution. Notice that $r_c\leq u_{\bar c}\leq 1$. The result for two colors depends only on the probabilities $u_{\bar c}$, while conditional probabilities such as $U_{\bar c}$ were eliminated. This was possible as $\mathcal{L}_{\bar 1}$ and $\mathcal{L}_{\bar 2}$ do not overlap for two colors. For Poisson graphs we find with the corresponding generating function \begin{align} g_0(z) &= g_1(z) = e^{\bar{k}(z-1)}.\label{eq:g0}\tag{S\theequation}\stepcounter{equation}\\ S_{\rm color} &= [1 - g_0(u_{\bar 1})][1 - g_0(u_{\bar 2})].\tag{S\theequation}\stepcounter{equation} \end{align} For more than two colors, the sets $\mathcal{L}_{\bar c}$ do overlap. For homogeneous color distributions $r_c=1/C$, a closed form solution can be found in the same way as for two colors with the binomial formula. We find \begin{align} S_{\rm color} &= \sum_{j=0}^C (-1)^j {C \choose j} g_0\left[u+(1-u)\left(\frac{j}{C}U_{\bar 1}^{j-1} + \frac{C-j}{C}U_{\bar 1}^{j}\right)\right].\tag{S\theequation}\stepcounter{equation} \end{align} Let us finally discuss the behavior for $C\to \infty$. This can be done utilizing the term $\sigma_{k'}$, the probability that a node connecting over $k'$ links to the giant component belongs to $\mathcal{L}_{\rm color}$. As can be seen with equation~\ref{eq:p_success}, $\sigma_0=\sigma_1=0$. On the other hand, with $r_c\to 0$, equation~\ref{eq:u_c} converges to equation~\ref{eq:u} and therefore $U_{\bar 1}\to 0$. This means that $\sigma_{k'>1}\to 1$. We finally find with equation~\ref{eq:S_color} \begin{align} S_{\rm color,\infty} &\equiv \lim_{C\to \infty} S_{\rm color}\tag{S\theequation}\stepcounter{equation}\\ &= 1-\sum_{k=0}^{\infty}p_k [u^k +k (1-u) u^{k-1}]\tag{S\theequation}\stepcounter{equation}\\ &= 1 - g_0(u) - (1-u) \left.\frac{{\rm d}g_0(z)}{{\rm d}z}\right|_{z=u}.\tag{S\theequation}\stepcounter{equation} \end{align} \subsection{Critical behavior for Poisson graphs} With equation~\ref{eq:S_color}, vanishing $\sigma_{k'}$ causes $S_{\rm color}=0$. According to \begin{align} \sigma_{k'} &= \sum_{\kappa_1,\dots, \kappa_C=0}^{k'} M_{k',\vec \kappa} \prod_{c=1}^C [1-(U_{\bar c})^{k'- \kappa_c }],\label{eq:sigmak}\tag{S\theequation}\stepcounter{equation} \end{align} this is the case if $U_{\bar c}=1$ for any color $c$. With equation~\ref{eq:U_c} we find that $U_{\bar c}=1$ whenever $u_{\bar c}=1$. Examining equation~\ref{eq:u_c} for $u_{\bar c}$, we can relate it to site percolation (random removal of nodes). For Poisson graphs we have $r_{\rm crit}=(\bar{k}-1)/\bar{k}$. With homogeneous color distribution $r_c=1/C$, we can resolve the critical connectivity given the number of colors \begin{align} \bar{k}_{\rm crit} &= C/(C-1). \tag{S\theequation}\stepcounter{equation} \end{align} The normal giant component size $S$ shows a special critical behavior shortly above the transition point: it scales linearly with $\bar{k}-1$. Here we are interested in the behavior of $S_{\rm color}$, which is a function of $1-u_{\bar 1}$, which in turn can be related to $1-u=S$. By inserting into equation~\ref{eq:u_c} it can be shown that $u_{\bar c}({\bar k})=r_c+(1-r_c)u((1-r_c){\bar k})$. For small arguments $(\bar{k}-{\bar k}_{\rm crit})$, \begin{align} 1-u_{\bar 1}(\bar{k}>{\bar k}_{\rm crit}) &\approx (1-r_1)^2 \left.\frac{{\rm d}(1-u)}{{\rm d}{\bar k}}\right|_{\bar{k}=1+0} (\bar{k}-{\bar k}_{\rm crit}).\tag{S\theequation}\stepcounter{equation} \end{align} Inserting into equation~\ref{eq:U_c} we find using $1-u(\bar{k}>1) \approx \left.\frac{{\rm d}(1-u)}{{\rm d}{\bar k}}\right|_{\bar{k}=1+0} (\bar{k}-1)$ \begin{align} \varepsilon &\equiv 1 - U_{\bar 1} \approx C (\bar{k}-{\bar k}_{\rm crit})\tag{S\theequation}\stepcounter{equation} \end{align} if additionally $\bar{k}-{\bar k}_{\rm crit}\ll\bar{k}-1$ holds ($1-u_{\bar 1}$ small compared to $1-u$). For calculating $\sigma_{k'}$, we first need to evaluate $P_{\vec \kappa}$ including expressions $1-(U_{\bar 1})^{k'-\kappa_c}$. Replacing with $\varepsilon$ and applying an approximation we find $1-(U_{\bar 1})^{k'-\kappa_c}=1-(1-\varepsilon)^{k'-\kappa_c}\approx (k'-\kappa_c) \varepsilon$. This is true at least as long as $k'\varepsilon\ll 1$. With this we find $P_{\vec \kappa}\propto (\bar{k}-{\bar k}_{\rm crit})^C$ independent of $\vec \kappa$, and finally \begin{align} \sigma_{k'} &\propto (\bar{k}-{\bar k}_{\rm crit})^C. \tag{S\theequation}\stepcounter{equation} \end{align} Thus \begin{align} S_{\rm color} &\propto (\bar{k}-{\bar k}_{\rm crit})^{\beta},\tag{S\theequation}\stepcounter{equation}\\ \beta &= C.\tag{S\theequation}\stepcounter{equation} \end{align} The critical behavior of $S_{\rm color,\infty}$ for Poisson graphs can be evaluated with the generating function eq.~(\ref{eq:g0}) and $S=1-u$. We find $S_{\rm color,\infty}=S-{\bar k} S (1-S)$, and for small positive ${\bar k}-1$ the giant component grows approximately with $S\approx 2 ({\bar k}-1)/{\bar k}^2$. Therefore \begin{align} S_{\rm color,\infty}\approx ({\bar k}-1)^2(4/{\bar k}^2-2/{\bar k}^3)\propto ({\bar k}-1)^2.\tag{S\theequation}\stepcounter{equation} \end{align}
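The closed form solutions above also provide a numerical cross-check of the general averaging formula. The following sketch (illustrative only, not code accompanying this work; all names, the truncation $k_{\max}$ and parameter values are our own choices) compares, for $C=2$ colors with $r_1=r_2=1/2$ on Poisson graphs above the critical point, the two-color closed form with the direct truncated summation over $B_{k,k'}$, $M_{k',\vec\kappa}$ and $P_{\vec\kappa}$:
\begin{verbatim}
import math

# Illustrative sketch for C = 2 colors on a Poisson graph (assumes kbar
# above the critical value, so that u < 1): compare the closed form
#   S_color = 1 - g0(u_1) - g0(u_2) + g0(u_1 + u_2 - 1)
# with the direct truncated summation over B, M and P.

def fixed_point(f, z=0.5, tol=1e-13, max_iter=200000):
    for _ in range(max_iter):
        z_new = f(z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

def s_color_two_colors(kbar, r1=0.5, kmax=60):
    g = lambda z: math.exp(kbar * (z - 1.0))          # g0 = g1 for Poisson
    r2 = 1.0 - r1
    u  = fixed_point(g)                               # u = g1(u)
    u1 = fixed_point(lambda z: r1 + (1 - r1) * g(z))  # color 1 deleted
    u2 = fixed_point(lambda z: r2 + (1 - r2) * g(z))  # color 2 deleted
    closed = 1 - g(u1) - g(u2) + g(u1 + u2 - 1)

    U1 = 1 - (1 - u1) / ((1 - u) * (1 - r1))          # conditional single
    U2 = 1 - (1 - u2) / ((1 - u) * (1 - r2))          # link probabilities

    direct = 0.0
    for k in range(kmax + 1):
        pk = math.exp(-kbar) * kbar**k / math.factorial(k)
        for kp in range(k + 1):                       # links to giant comp.
            B = math.comb(k, kp) * (1 - u)**kp * u**(k - kp)
            for k1 in range(kp + 1):                  # kappa_1; kappa_2 = kp - k1
                M = math.comb(kp, k1) * r1**k1 * r2**(kp - k1)
                P = (1 - U1**(kp - k1)) * (1 - U2**k1)
                direct += pk * B * M * P
    return closed, direct

print(s_color_two_colors(4.0))   # both values ~0.635 and agree
\end{verbatim}
Below $\bar{k}_{\rm crit} = C/(C-1) = 2$ both expressions vanish; above it, the result also agrees with the Poisson product form $S_{\rm color} = [1 - g_0(u_{\bar 1})][1 - g_0(u_{\bar 2})]$.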
\section{Introduction} To build a macroscopic machine capable of directly utilizing chemical energy to perform mechanical work, bypassing heat, is a long-standing and unresolved engineering challenge. At the same time, on the macromolecular or colloidal scale, this is routinely done by molecular motors moving on a solid substrate \cite{Motors_Review_RevModPhys.69.1269} or by colloidal swimmers moving through a fluid \cite{ColloidalSwimmers_EBBENS201614}. In the latter case, mechanical motion is usually achieved by diffusiophoresis, i.e., the drift of a colloidal particle (or a liquid droplet) in a solvent, induced by gradients in the concentration of chemical species (solute) \cite{Anderson1989, Brady2011, Paustian2015, Sear2017}. The phenomenon is driven by short-range interactions between the surface of the particle and the solute molecules, which result in different energies of a solute molecule close to the surface of the particle and away from it; depending on the sign of the interaction, this leads to motion of the particle along or opposite to the direction of the concentration gradient. Recently, diffusiophoresis has been proposed as a non-equilibrium, non-motor protein mechanism for metabolism-dependent transport of protein filaments, plasmids, storage granules, and foreign particles of different sizes in cells \cite{Parry2014, Sear2019}. Related cross-diffusion and chemotaxis effects \cite{Vanag2009} have also been implicated in the aggregation of enzymes and the formation of metabolons in regions of high substrate concentrations \cite{Zhao2018}. Under the name ``chemically (or phoretically) active matter'' these systems have attracted much attention from theorists in recent years. A far-reaching phenomenological theory was developed by Ramin Golestanian with co-authors \cite{PhysRevLett.108.038303, PhysRevLett.112.068301, PhysRevE.89.062316, PhysRevE.91.052304, PhysRevLett.123.018101, Saha_2019, Golestanian_Review_2019, Nasouri_Golestanian_2020} and in a number of other works \cite{PhysRevLett.115.258301, PhysRevE.81.046311, Oshanin_2017} (reviewed in \cite{Golestanian_Review_2019}). One simple way to create solute concentration gradients is to have colloidal particles catalyzing the reaction $A \leftrightharpoons B$ between substrate $A$ and product $B$ molecules, provided that substrates are supplied to the system, while products are washed away. An interesting observation about such a system is that concentration gradients typically decay as $1/r$ with distance, thus leading to effective interactions which are long-ranged and reminiscent of electrostatics or gravity \cite{PhysRevE.89.062316, PhysRevE.81.046311, Oshanin_2017}. This realization led to the prediction of a plethora of beautiful and unusual states of this ``phoretically active matter'' \cite{PhysRevE.89.062316}. Here we revisit that same system in order to clarify one aspect of it, which is the following. {\it Whenever there is a catalyst that accelerates chemical transformation of $A$ (``fuel'') to $B$ (``exhaust'') molecules, $A \rightharpoonup B$, it accelerates also the reverse reaction, $B \rightharpoondown A$; in other words, it accelerates relaxation to equilibrium}. This fact has interesting consequences for the analog of Debye-H\"{u}ckel electrostatic screening in systems of catalytic colloids. Specifically, in electrostatics, the field that is being screened is, of course, the electric field, or potential. What is screened in our chemical system?
We shall show that it is the field of chemical imbalance that measures the deviation from chemical equilibrium, $\psi(\mathbf{r}) \equiv k_{\rightarrow} c_{A}(\mathbf{r}) - k_{\leftarrow} c_{B}(\mathbf{r})$, where $c_{A}(\mathbf{r})$ and $c_{B}(\mathbf{r})$ are local concentrations of solute components, while $k_{\rightarrow}$ and $k_{\leftarrow}$ are the corresponding catalytic rate constants. For instance, in a canonical example, consider a large crowd of catalytic particles confined in an osmotic bag permeable to fuel $A$ and exhaust $B$ molecules, but not to the catalytic particles themselves: even if chemical imbalance is maintained outside by supplying $A$ and removing $B$, the chemical imbalance field $\psi(\mathbf{r})$ penetrates into the crowd only by a finite distance and decays exponentially beyond that distance. Deep inside the crowd of catalysts both $A$ and $B$ are present, but in chemical equilibrium. This main point of our work has some important consequences which we will discuss at the end. The plan of this article is as follows. To make the work self-contained and to establish the notations, we rederive some of the well-known results \cite{PhysRevE.89.062316} about concentration profiles around a single catalyst and about interactions between two catalysts in section \ref{sec:combined}. This section contains no new results and serves mostly pedagogical purposes, except that unlike previous authors, we never omit the fundamentally important reverse catalytic reaction. The crowd of catalysts, screening \cite{Debye_Screening_1923}, Wigner crystals \cite{Wigner_PhysRev.46.1002}, and clusters of catalytic colloids are considered in section \ref{sec:many_catalysts}. \section{The $1/r$ interaction between catalytic colloids}\label{sec:combined} Let $c_{A}^{\infty}$ and $c_{B}^{\infty}$ be the respective concentrations (molecules per unit volume) far away from the catalysts. We will assume that the energy barrier for the interconversion reaction $A \leftrightharpoons B$ is sufficiently high so that, {\it in the absence of catalysts}, the system can be maintained indefinitely out of equilibrium and therefore $c_{A}^{\infty}$ and $c_{B}^{\infty}$ are externally controlled parameters. Consider first a single spherical particle of radius $R$ which can catalyze the reaction $A \leftrightharpoons B$ on its surface by reducing the energy barrier to a value comparable to $k_{B}T$. Assuming for simplicity that concentrations are sufficiently small, the steady state rate (current) of the catalytic reaction can be written as \begin{equation} J = v k_{\rightarrow} c_{A}(R) - v k_{\leftarrow}c_{B}(R) \ , \label{eq:current_general} \end{equation} with $k_{\rightarrow}$ and $k_{\leftarrow}$ the forward and backward rate constants, $ c_{A}(R)$ and $c_{B}(R)$ the concentrations of $A$ and $B$ species at the surface of the catalyst, and $v$ the volume where the reaction takes place (for instance, if catalysis occurs uniformly along the spherical surface, then $v = 4 \pi R^2 d$, with $d$ a molecular length scale). As we stated, eqn (\ref{eq:current_general}) is valid only for sufficiently small concentrations of $A$ and $B$; otherwise the catalyst gets ``clogged'' and a non-linear Michaelis-Menten reaction rate has to be used, as was done in \cite{PhysRevE.89.062316}.
However, for our purpose, it is important to have both the forward and backward reactions taken into consideration, which at large concentrations would require using the so-called reversible Michaelis-Menten kinetics \cite{Reversible_MichaelisMenten_1, Reversible_MichaelisMenten_2}; this was not done in \cite{PhysRevE.89.062316}. Because of the dramatic simplification it affords, we stay with the linear relation (\ref{eq:current_general}). Since solute particles $A$ and $B$ have to be transported to and from the catalyst surface by diffusion, their steady state concentration profiles must be found from the appropriate diffusion equation. For a spherically symmetric catalyst, the concentration fields of solutes $A$ and $B$ are spherically symmetric as well: \begin{equation} c_{A}(r) = c_{A}^{\infty} - \frac{J}{4 \pi D_{A} r} \ ; \ \ c_{B}(r) = c_{B}^{\infty} + \frac{J}{4 \pi D_{B} r} \ , \label{eq:concentrations_8} \end{equation} where $D_{A}$ and $D_{B}$ are the corresponding diffusion coefficients. Plugging these expressions (at $r=R$) back into eqn (\ref{eq:current_general}), which serves as a boundary condition for the diffusion equation, produces an equation for the current $J$ with the solution \begin{equation} \frac{J}{v} = \frac{k_{\rightarrow} c_{A}^{\infty} - k_{\leftarrow} c_{B}^{\infty}}{1+ \frac{v}{4\pi R} \left[ \frac{k_{\rightarrow}}{D_{A}} + \frac{k_{\leftarrow}}{D_{B}} \right]} \ . \label{eq:current_5} \end{equation} This result is easily generalized to the case when several species $A_i$ and $B_j$ are present. The current $J$ (\ref{eq:current_5}) vanishes in thermal equilibrium, since the equilibrium concentrations $c_{A}^{\mathrm{eq}}$ and $c_{B}^{\mathrm{eq}}$ obey the detailed balance condition, $k_{\rightarrow} c_{A}^{\mathrm{eq}}= k_{\leftarrow} c_{B}^{\mathrm{eq}}$. In this sense, the quantity in the numerator of formula (\ref{eq:current_5}) characterizes the degree of chemical imbalance which drives the process, and which can be governed by energy (if the energy of a fuel molecule is larger than that of an exhaust molecule), or by entropy (if $c_{A}^{\infty} > c_{B}^{\infty}$), or by any combination of the two. We now consider two catalytic spheres, some distance $r$ apart, such that $r \gg R$; the assumption of catalyst spherical symmetry will be relaxed later on. Because of the short-range interactions between solute molecules $A$ and $B$ and the catalyst, and because the steady state concentrations of $A$ and $B$ are non-uniform in space, the energies of these two spheres depend on the distance $r$ between them, i.e., there is an interaction force between them. This problem can be treated, in the first approximation (see Supplementary Material \cite{SupMat1}), by imagining one particle located at the origin, while the other particle, positioned at distance $r$ away, interacts with the unperturbed concentration fields $c_{A}(r), \ c_{B}(r)$ of eqn (\ref{eq:concentrations_8}) created by the first. Expanding the surface energy of a sphere in the small concentrations $c_{A}$ and $c_{B}$ at the sphere surface, as $\sigma \simeq \sigma_0 + c_{A}(r) \sigma^{\prime}_{A} + c_{B}(r) \sigma^{\prime}_{B}$ (where prime signs indicate partial derivatives of the surface tension with respect to the corresponding concentration), we write the distance-dependent part of the energy of the two spheres as follows: \begin{equation} \frac{E}{4 \pi R^2} = \sigma^{\prime}_{A} \left[ c_{A}(r) - c_{A}^{\infty} \right] + \sigma^{\prime}_{B} \left[ c_{B}(r) - c_{B}^{\infty} \right] \ . \end{equation} For brevity, we again drop the generalization to the case of several species $A_i$ and $B_j$. The constant ($r$-independent) $c_{A}^{\infty}$ and $c_{B}^{\infty}$ terms are subtracted such that this energy vanishes when the two spheres are infinitely far apart. Plugging in the concentration profiles of eqn (\ref{eq:concentrations_8}), the force on each sphere is \begin{equation} \frac{f}{4\pi R^2} = \frac{J}{4 \pi r^2} \left[ \frac{\sigma^{\prime}_{B} }{D_{B}} - \frac{ \sigma^{\prime}_{A} }{D_{A}}\right] \ , \label{eq:Force_droplets_compact_1} \end{equation} where the current $J$ is given by eqn (\ref{eq:current_5}) (we have neglected the hydrodynamic interaction contribution to the force, $f = - \nabla E$; see \cite{Nasouri_Golestanian_2020}). This force depends on the distance as $1/r^2$, i.e., it is a long-range interaction similar to gravitational and Coulomb forces, as was pointed out in \cite{PhysRevLett.108.038303, PhysRevLett.112.068301, PhysRevE.89.062316, Saha_2019, PhysRevE.81.046311, Oshanin_2017}. Furthermore, the force is proportional to $J$ -- the chemical rate (or current) of interconversion of $A$ to $B$ -- which emphasizes that the entire phenomenon is of a non-equilibrium nature. It is driven by the supply of fuel $A$ molecules as well as the removal of exhaust $B$ molecules at infinity. A word of caution is in order about our usage of the equilibrium surface tension $\sigma$ and its derivatives $\sigma^{\prime}_{A}$ and $\sigma^{\prime}_{B}$ in this decidedly non-equilibrium context. In fact, this usage is well justified by the assumption that colloidal catalytic particles are much larger and move much slower than the solute molecules $A$ and $B$. The interaction force (\ref{eq:Force_droplets_compact_1}) between catalytic colloids can be either attractive or repulsive. To see this, consider the following simple model. Imagine that catalysis takes place in a narrow layer of thickness $d$ around the catalyst surface; then \begin{subequations} \begin{align} k_{\rightarrow} = \frac{1}{\tau} e^{\beta \left(\varepsilon_{A} - \varepsilon^{\dag} \right)} \ , \quad k_{\leftarrow} = \frac{1}{\tau} e^{\beta \left(\varepsilon_{B} - \varepsilon^{\dag} \right)} \ , \end{align} where $1/\tau$ is the attempt rate, $\beta = 1/k_{B}T$, while $\varepsilon_{A}$ and $\varepsilon_{B}$ are the bulk free energies of $A$ and $B$, and $\varepsilon^{\dag}$ is the free energy of the transition state of the catalytic surface reaction (for reasons of brevity, we will refer to these free energies as energies in the following). Furthermore, if the energies of $A$ and $B$ molecules inside the surface layer of a colloid, $\varepsilon_{A}^{\ast}$ and $\varepsilon_{B}^{\ast}$, are different from their bulk values $\varepsilon_{A}$ and $\varepsilon_{B}$, then $\sigma^{\prime}_{A} = d \tilde{\varepsilon}_{A} e^{-\beta \tilde{\varepsilon}_{A}}$ and $\sigma^{\prime}_{B} = d \tilde{\varepsilon}_{B} e^{-\beta \tilde{\varepsilon}_{B}}$, with $ \tilde{\varepsilon}_{A} = \varepsilon_{A}^{\ast} - \varepsilon_{A}$ and $ \tilde{\varepsilon}_{B} = \varepsilon_{B}^{\ast} - \varepsilon_{B}$.
In this approximation, \begin{align} \frac{J}{4 \pi R^2 d} & = \frac{c_{A}^{\infty}e^{\beta \varepsilon_{A}} - c_{B}^{\infty}e^{\beta \varepsilon_{B}}}{ \frac{Rd}{D_{A}} e^{\beta \varepsilon_{A}} + e^{\beta \varepsilon^{\dagger}}\tau + \frac{Rd}{ D_{B}} e^{\beta \varepsilon_{B}}} \label{eq:current_4} \\ \frac{f}{4\pi R^2} & = \frac{Jd}{4 \pi r^2} \left[ \frac{\tilde{\varepsilon}_{B}}{D_{B}} e^{-\beta \tilde{\varepsilon}_{B}} - \frac{\tilde{\varepsilon}_{A}}{D_{A}} e^{-\beta \tilde{\varepsilon}_{A}} \right] \ . \label{eq:Force_droplets_compact} \end{align} \label{eq:naive} \end{subequations} Inspection of eqn (\ref{eq:Force_droplets_compact}) confirms that the force between catalysts can be attractive or repulsive, depending on the energies $\varepsilon_{A}$, $\varepsilon_{B}$, $\varepsilon_{A}^{\ast}$ and $\varepsilon_{B}^{\ast}$, as shown in Fig. \ref{fig:Where_Attraction} (see also \cite{PhysRevLett.115.258301}). For instance, if both $A$ and $B$ molecules are attracted to the surfaces of catalytic particles, $\varepsilon_{A}^{\ast} < \varepsilon_{A}$ and $\varepsilon_{B}^{\ast} < \varepsilon_{B}$, then the resulting long-range interaction between catalysts is a competition: interaction with $A$ pushes each sphere away from the other, towards a greater supply of $A$, but interaction with $B$ pulls the catalysts towards one another, towards where new $B$ is produced. Therefore, overall attraction between the spheres occurs if $\varepsilon_{B}^{\ast} - \varepsilon_{B} < \varepsilon_{A}^{\ast} - \varepsilon_{A}$, and overall repulsion takes place in the opposite case. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{Where_Attraction_1.pdf}\\ \caption{Diagram of regimes for two catalytic spheres in terms of the energies $\tilde{\varepsilon}_{A}$ and $\tilde{\varepsilon}_{B}$, according to eqn (\ref{eq:Force_droplets_compact}). Yellow marks the region where the interaction force is repulsive; in the other regions it is attractive. The plot is constructed for $D_{A} = D_{B}$; the only modification required in the case $D_{A} \neq D_{B}$ is a change of scales along the axes.}\label{fig:Where_Attraction} \end{figure} In equations (\ref{eq:naive}), we expressed the phenomenological quantities $\sigma^{\prime}_{A}$ and $\sigma^{\prime}_{B}$, as well as the rate constants $k_{\rightarrow}$ and $k_{\leftarrow}$, in terms of energies such as $\varepsilon_{B}^{\ast}$, $\varepsilon_{B}$, $\varepsilon_{A}^{\ast}$, $\varepsilon_{A}$; these mechanical quantities are easy to imagine for a theorist, but virtually impossible to measure. Furthermore, we consider only the force acting on catalytic colloidal particles, which is, in principle, measurable in an optical tweezers experiment, but we do not consider their motion under this force. Translating force into velocity requires the knowledge of mobility, and the simple-minded assumption of Stokes friction is known to be only qualitatively, not quantitatively, correct. More systematic phenomenological treatments \cite{PhysRevLett.108.038303, PhysRevLett.112.068301} operate with directly measurable surface tensions, Onsager coefficients, and other phenomenological parameters. We present the somewhat more naive approach based on equations (\ref{eq:naive}) only because of its simplicity and pedagogical value. Our consideration so far has been restricted to spherically symmetric catalytic particles. This idealization is perhaps rarely realized.
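Before relaxing the assumption of spherical symmetry, we give a small numerical companion to Fig. \ref{fig:Where_Attraction} (a hypothetical sketch with arbitrary, illustrative parameter values, not code accompanying this work), which evaluates the sign of the bracket in eqn (\ref{eq:Force_droplets_compact}); here energies are measured in units of $k_B T$, $J>0$ is assumed, and a positive force is read as repulsion:
\begin{verbatim}
import math

# Hypothetical sketch with arbitrary parameter values: sign of the bracket
# in the force formula, (eps_b/D_b)*exp(-eps_b) - (eps_a/D_a)*exp(-eps_a),
# where eps_a, eps_b are the surface-layer energy shifts in units of kT.
# For J > 0, a positive bracket is read as repulsion, a negative one as
# attraction.

def interaction(eps_a, eps_b, D_a=1.0, D_b=1.0):
    bracket = (eps_b / D_b) * math.exp(-eps_b) - (eps_a / D_a) * math.exp(-eps_a)
    return "repulsive" if bracket > 0 else "attractive"

# Both solutes attracted to the surface, B more strongly than A:
print(interaction(eps_a=-0.5, eps_b=-1.0))   # attractive
# A attracted more strongly than B:
print(interaction(eps_a=-1.0, eps_b=-0.5))   # repulsive
\end{verbatim}
Scanning $(\tilde{\varepsilon}_{A}, \tilde{\varepsilon}_{B})$ over a grid in this way reproduces the qualitative structure of the diagram of regimes.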
A catalytic particle without spherical symmetry creates non-isotropic concentration fields of reagents, which can result in auto-diffusiophoretic motion of the catalyst \cite{Golestanian_2005_PhysRevLett.94.220801, Golestanian_2007, OsmoticMotor_PhysRevLett.100.158303, OsmoticMotor_Comment1_PhysRevLett.102.159801, OsmoticMotor_reply1_PhysRevLett.102.159802, OsmoticMotor_Comment2_PhysRevLett.103.079801, OsmoticMotor_Reply2_PhysRevLett.103.079802, Brady2011, Palacci2013, PhysRevLett.115.258301, Buttioni_PhysRevLett.110.238301, Popescu2016, Brady2016}. Such self-diffusiophoretic particles are of considerable current interest and represent an important example of the so-called active swimmers \cite{Bocquet2012, Palacci2013, Buttioni_PhysRevLett.110.238301, Popescu2016}. Speaking about concentration-field-mediated interactions between catalysts, we should think of the multipole expansion of the concentration fields (see also \cite{Brady2016}). Then, exactly as in the familiar electrostatics context, the dominant long-range contribution is that of a monopole, $\sim 1/r^2$, which is what we considered above, while dipole (as for Janus particles), quadrupole, and higher order multipoles are important for the near field. Thus, we will continue working in the monopole approximation, which is only justified when the distance between catalysts is large. Accordingly, we do not consider self-diffusiophoresis, simply because it was already studied in detail \cite{Golestanian_2005_PhysRevLett.94.220801, Golestanian_2007, PhysRevLett.108.038303, Buttioni_PhysRevLett.110.238301, PhysRevLett.112.068301, PhysRevLett.115.258301, PhysRevE.89.062316, Saha_2019}. \section{A crowd of catalysts}\label{sec:many_catalysts} We now turn from considering the force between two catalytic particles to the case when there are many catalysts. Since catalysts reduce free energy barriers, thus paving the way for the system to approach chemical equilibrium, maintaining the system away from equilibrium in the presence of a finite concentration of catalysts makes it necessary (though maybe not sufficient) to confine the catalysts to a region of space that is surrounded by a ``bath'' in which non-equilibrium concentrations of $A$ and $B$ molecules are enforced and maintained from outside. Our goal now is to explore the implications of such boundary conditions. Consider a crowd of catalytic particles, with a density of $\rho(\mathbf{r})$ catalysts per unit volume. On the mean field level, the overall behavior should be described by the volume fraction of catalytic centers in space, $\phi(\mathbf{r}) = v \rho(\mathbf{r})$, where $v$ (exactly as before) is the volume of the region in which catalysis takes place on the surface of one catalytic particle. Now, let $c_{A}(\mathbf{r})$ and $c_{B}(\mathbf{r})$ be the concentration fields of ``fuel'' and ``exhaust'' molecules $A$ and $B$, coarse grained over distances large compared to the typical distance between catalysts, $\ell$ ($\ell^{-3} \sim \rho$).
Then the mean field equations for the concentrations of $A$ and $B$ read (the upper dot indicates a time derivative) \begin{equation}\begin{split} \dot{c}_{A} (\mathbf{r}) & = D_{A} \nabla^2 c_{A}(\mathbf{r}) - \phi(\mathbf{r}) \left[k_{\rightarrow} c_{A}(\mathbf{r}) - k_{\leftarrow} c_{B}(\mathbf{r}) \right] \\ \dot{c}_{B} (\mathbf{r}) & = D_{B} \nabla^2 c_{B} (\mathbf{r})+ \phi(\mathbf{r}) \left[k_{\rightarrow} c_{A}(\mathbf{r}) - k_{\leftarrow} c_{B}(\mathbf{r}) \right]\end{split} \label{eq:diffusion_eq_for_two} \end{equation} To analyze these equations, we introduce the ``field of chemical imbalance'' \begin{equation} \psi(\mathbf{r}) \equiv k_{\rightarrow} c_{A}(\mathbf{r}) - k_{\leftarrow} c_{B}(\mathbf{r}) \ . \label{eq:imbalance}\end{equation} The meaning of the field $\psi(\mathbf{r})$ (\ref{eq:imbalance}) is clarified by noticing that, up to a constant factor $1/v$, $\psi(\mathbf{r})$ is equal to $J(\mathbf{r})$ -- the rate of the chemical reaction (\ref{eq:current_general}) in the vicinity of point $\mathbf{r}$, coarse grained over the scale $\ell$ in the same way as the concentrations. Therefore, according to eqn (\ref{eq:Force_droplets_compact_1}), it also follows that $\psi(\mathbf{r})$ determines the strength of interactions between catalytic colloids around $\mathbf{r}$. In steady state, the time derivatives vanish and, combining Eqs. (\ref{eq:diffusion_eq_for_two}) with proper weights, we find that $\psi(\mathbf{r})$ satisfies \begin{equation} \nabla^2 \psi(\mathbf{r}) - \xi^{-2}(\mathbf{r})\psi(\mathbf{r}) = 0 \ , \label{eq:screening}\end{equation} where \begin{equation} \xi^{-2}(\mathbf{r}) = \left[ \frac{k_{\rightarrow}}{D_{A}} + \frac{k_{\leftarrow}}{D_{B}}\right] \phi(\mathbf{r}) \ . \label{eq:Penetration_Depth} \end{equation} At first glance, eqn (\ref{eq:screening}) is identical to the celebrated Debye-H\"{u}ckel equation \cite{Debye_Screening_1923} for the electrostatic potential around a point charge in an ionic solution, implying that $\xi(\mathbf{r})$ can be interpreted as the screening length (see equation (8) in reference \cite{PhysRevE.89.062316}). Upon further inspection one notices significant differences. First and foremost, the electrostatic potential is defined up to an additive constant, while the field of chemical imbalance does not have this gauge freedom, because $\psi = 0$ is the special state of chemical equilibrium. Because of that, the boundary conditions on our chemical imbalance field $\psi(\mathbf{r})$ are quite different from those on the potential in a typical electrostatics problem: while the electric potential diverges at the point charge and decays to a constant, usually identified as zero, at large distance from it, $\psi$ is maintained at some fixed non-equilibrium value away from the catalysts, where $\phi(\mathbf{r})=0$. Depending on the geometry, $\psi$ vanishes or reaches some lower value inside the colloid-occupied region ($\phi(\mathbf{r})\neq 0$), because catalysis always tends to reduce the degree of chemical imbalance and drive the system towards equilibrium, at which detailed balance is obeyed and $\psi(\mathbf{r})=0$. This effect has not been noticed in previous works, in which the reverse reaction (the second term in eqn (\ref{eq:imbalance})) that drives the system to equilibrium was omitted. In order to understand the physical meaning of the length $\xi$, let us consider the 3D situation shown in Fig. \ref{fig:Penetration_Depth}: catalytic colloids are confined inside a spherical osmotic bag of radius $L$ which is permeable to solute molecules $A$ and $B$ but not to colloids. Substrate molecules $A$ are delivered by diffusion from infinity, and product molecules $B$ are absorbed at infinity, such that the concentrations far from the bag are fixed at some non-equilibrium values $c_{A}^{\infty}$ and $c_{B}^{\infty}$, respectively (note that since chemical reactions take place only inside the bag, these bulk concentrations can be arbitrarily far from equilibrium). As shown in Fig. \ref{fig:Penetration_Depth}, the chemical imbalance field $\psi(r)$ penetrates only up to a {\it penetration depth} $\xi$ into the catalyst-occupied domain. Deeper into the bulk of the catalyst-occupied region, $\psi(r)\rightarrow 0$ and the concentrations of $A$ and $B$ approach their equilibrium values. More specifically, assuming the concentrations to be $c_{A}^{\infty}$ and $c_{B}^{\infty}$ at infinity, the concentration profiles are expressed in terms of the chemical imbalance function $\psi(r)$ \begin{subequations} \begin{align} c_{A}(r) & = c_{A}^{\infty} + \frac{ \psi(r) - \psi(\infty)}{D_A \left(\frac{k_{\rightarrow}}{D_{A}} + \frac{k_{\leftarrow}}{D_{B}} \right)} \\ c_{B}(r) & = c_{B}^{\infty} - \frac{ \psi(r) - \psi(\infty)}{D_B \left(\frac{k_{\rightarrow}}{D_{A}} + \frac{k_{\leftarrow}}{D_{B}} \right)} \ , \end{align} while $\psi(r)$ itself is found for this spherical geometry from eqn (\ref{eq:screening}), with the boundary conditions that $\psi$ and its derivative are continuous and that $\psi$ has no singularity at the origin: \begin{align} \frac{\psi(r)}{ \psi(\infty)} = \left\{ \begin{array}{lcr} 1 - \frac{L}{r} + \frac{\xi}{r} \tanh \frac{L}{\xi} & \mathrm{ at} & r>L \\ \frac{\xi}{r} \frac{\sinh r/\xi}{\cosh L/\xi} & \mathrm{at} & r < L \end{array} \right. \label{eq:solution_for_psi} \end{align} \label{eq:solution_in_3D} \end{subequations} These results are plotted, for specific values of parameters, in Fig. \ref{fig:Penetration_Depth}. As expected, the current $J$ vanishes inside the crowd of catalysts along with $\psi$, and the forces between colloids vanish as well. These forces (attractive or repulsive) will be significant only inside the boundary layer of thickness $\xi$. If the size of the osmotic bag $L$ is smaller than or comparable to the penetration depth $\xi$, depending on the sign of the force in eqn (\ref{eq:Force_droplets_compact}), catalysts will attract one another and form an aggregate (which could be smaller than the available volume of the osmotic bag) or repel each other and form a Wigner crystal \cite{Wigner_PhysRev.46.1002} (occupying the whole accessible volume). \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Penetration_Depth.pdf}\\ \includegraphics[width=0.45\textwidth]{Concentration_Profiles_Screened_Sphere.pdf} \caption{Catalytic particles, of diameter $2R$ each, are distributed in an osmotic bag of diameter $2L$ (shown by the thick dashed circle), while substrate molecules $A$ diffuse in from outside, and product molecules $B$ diffuse out to infinity. The chemical imbalance function $k_{\rightarrow} c_{A}(r) - k_{\leftarrow} c_{B}(r) = \psi(r)$ is shown in shades of gray. Deep in the crowd of catalysts, there is no chemical imbalance between $A$ and $B$, $\psi(r) \to 0$.
For plotting, we assumed $c_{B}^{\infty} = 0$, $\xi = L/5$, $\frac{k_{\leftarrow}}{k_{\rightarrow}} =\frac{1}{2}$ and $\frac{D_{B}}{D_{A}} = 1$.}\label{fig:Penetration_Depth} \end{figure} A more subtle and experimentally relevant case is a quasi-2D system, where the container has a finite depth $H$, while catalytic colloids are confined due to gravity within a short distance $h$, sometimes called the gravitational height, from the bottom (or from the top, if they float) of the container, as shown in the cartoon of Fig. \ref{fig:Quasi_2D}. The boundary conditions, in addition to the fixed value of $\psi(\infty)$ at $r \to \infty$, require zero normal (vertical) flux of either $A$ or $B$ particles, and thus a vanishing normal derivative of $\psi$ on both the top and bottom surfaces (we note in passing that this does not have a simple electrostatic analogy). The situation, as it turns out, depends sensitively on the relations between several relevant length scales. If the depth of the container is infinite or very large, as in Fig. \ref{fig:Quasi_2D}A, then although the colloidal spheres are confined in 2D by barriers or an osmotic bag to the interior of a circle of radius $L$, the fuel $A$ and exhaust $B$ molecules are diffusing in 3D. Formally, in this case, equation (\ref{eq:screening}) reduces to $\nabla^2 \psi = 0$ everywhere except in a very thin pancake-shaped region of thickness $h$ and radius $L$, and the finite penetration depth $\xi$ (\ref{eq:Penetration_Depth}) exists only inside the pancake. If $h$ is very small and $\xi \gg h$, then delivery of $A$ and removal of $B$ by diffusion in 3D is unhindered and reaches every point of the pancake from the top. Therefore, the force of interaction between catalytic colloids still obeys the $1/r^2$ law everywhere inside the pancake, unlike the 3D case, where colloids in the bulk of the confined region essentially do not interact. In the attractive case, we expect catalysts in 2D to form a large aggregate whose growth may be stopped only when its diameter becomes comparable to $H$. For repulsive forces we expect the formation of a 2D Wigner crystal whose size is not limited by $\xi$ and is controlled only by the confining boundaries or the osmotic bag. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Penetration_Depth_Bottom-1.pdf}\\ \caption{Catalytic colloids are located within a gravitational height $h$ from the bottom of a container of depth $H$. As before, the chemical imbalance field is approximately shown by shades of gray. Barriers or the osmotic bag are shown by thick dashed lines. In the case of a deep container (panel A), diffusion in 3D is sufficient to supply fuel and remove exhaust for all colloids, and the chemical imbalance field $\psi$ penetrates to all colloids. In the case of a shallow container (panel B), the middle part of the ``pancake'' of colloids is not accessible to the chemical imbalance field. }\label{fig:Quasi_2D} \end{figure} Consider now the opposite limit of a shallow container, as in Fig. \ref{fig:Quasi_2D}B. Clearly, on horizontal length scales larger than $H$ the problem becomes essentially two dimensional. Averaging eqn (\ref{eq:screening}) along the vertical direction, we obtain a 2D equation of the same form, except with an effective penetration depth given by $\xi^{-2}_{\mathrm{eff}} = \xi^{-2} h/H$. The solution of the corresponding 2D problem is qualitatively similar to Eqs.~(\ref{eq:solution_in_3D}), which means that the field of chemical imbalance penetrates into the pancake of catalysts only to a horizontal distance of about $\xi_{\mathrm{eff}} = \xi \left( H/h \right)^{1/2}$. Closer to the middle of the pancake, the local imbalance field is reduced by catalysis and the long-range interactions between colloids are suppressed. In this case we expect attractive catalysts to assemble into 2D aggregates of size no larger than $\sim \xi_{\mathrm{eff}}$. Repulsive catalysts will form 2D Wigner crystals only for $L\leq \xi_{\mathrm{eff}}$. This conclusion is reminiscent of the fact that ``live'' colloids in the experiments by Palacci et al \cite{Palacci2013}, Buttioni et al \cite{Buttioni_PhysRevLett.110.238301}, and Theurkauff et al \cite{Bocquet2012} formed 2D aggregates of limited size that did not grow further. These colloids were not spherically symmetric and exhibited self-diffusiophoretic swimming. Moreover, they were shown \cite{Palacci2013} to form the so-called ``living crystals'', a finding which was interpreted as an experimental confirmation of the theoretically predicted activity-driven condensation \cite{PhysRevLett.108.235702, PhysRevLett.110.055701, PhysRevLett.110.238301, PhysRevLett.111.145702, Bialk__2013, C3SM52813H, C3SM52469H, Wysocki_2014, PhysRevLett.112.218304, Cates_phi_4_2014}. We speculate that the limited size of the aggregates could be due to the finite penetration depth of the chemical imbalance field (this effect does not require catalysts to be spherically symmetric and is expected to take place even for self-diffusiophoretically driven swimmers). \section{Conclusion} To summarize, we have presented a very simple schematic theory demonstrating that spherical colloids capable of catalyzing a reversible chemical reaction between solutes experience a peculiar interaction which exists only as long as the concentrations of the solutes are maintained out of equilibrium by the constant supply of high free energy ``fuel'' and removal of ``exhaust'' molecules at the boundaries of the colloid solution. The long-range ($1/r$) interaction between colloids is driven by the chemical imbalance field that measures the deviation from chemical equilibrium. We have shown that, despite the apparent similarity of the underlying equations, the origin of this effect is very different from the Debye-H\"{u}ckel electrolyte polarization mechanism in electrostatics: catalytic activity drives the concentrations of solute molecules towards their equilibrium values and therefore reduces the chemical imbalance that controls the strength of the diffusiophoretic interaction between the colloids. We demonstrated that the combination of boundary conditions and finite penetration depth has a profound effect on the interaction between catalytic colloids. Thus, in a realistic 3D geometry of a colloid solution enclosed in an osmotic bag (permeable to solute molecules but not to colloids) and surrounded by a ``bath'' that fixes the concentrations of solutes at some arbitrary values, non-equilibrium concentrations of solutes can be maintained in steady state only within a penetration depth from the boundary, and therefore interactions between colloids vanish in the bulk of the colloid solution.
These results remain valid even in the limit of vanishing reverse reaction rate, when they boil down to the simple statement that fuel molecules $A$ cannot penetrate into the bulk of the catalyst crowd as they get transformed into $B$ (the field $\psi(r)$ is very small at small $r$ according to eqn (\ref{eq:solution_for_psi}); with $k_{\leftarrow} \to 0$, eqn (\ref{eq:imbalance}) then implies $c_{A}(r) \to 0$); we thank the Referee for bringing our attention to this point. The effects of finite penetration depth can be overcome in a quasi-2D geometry (with colloids confined to a surface and solute molecules free to move in 3D), where unscreened $1/r$ attractions between colloids can lead to macroscopic aggregates or to Wigner crystals, depending on the sign of the diffusiophoretic interaction between colloids. In the attractive case, we predict that finite 2D clusters of colloids will form if the depth of the 3D container is finite, and that the diameter of these clusters will be proportional to the effective penetration length, which increases as the square root of the depth $H$. In the repulsive case one expects Wigner crystals to form if the separation between the barriers that confine the colloids in 2D is smaller than the effective penetration length. These theoretical predictions await experimental verification. \acknowledgements We would like to thank Alexandra Zidovska and Paul Chaikin for valuable discussions. We are also indebted to Ramin Golestanian for helpful comments and suggestions, and thank Siegfried Dietrich and Mihail Popescu for useful correspondence. YR's work was supported by grants 178/16 from the Israel Science Foundation and 1902/12 from the Israeli Centers for Research Excellence program of the Planning and Budgeting Committee. YR would like to acknowledge the hospitality of the Center for Soft Matter Research of New York University where part of this work was done. AYG's research is supported in part by the MRSEC Program of the National Science Foundation under Award DMR-1420073. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 and by the National Institutes of Health under Grant No. R25GM067110.
\section{Introduction\label{intro}} Incompressible fluid motion is governed by the Navier-Stokes equations (NSE): \begin{equation} \partial_t \bm u+\bm u\cdot \bm \nabla \bm u=-\bm \nabla p + \nu \Delta \bm u +\bm F \, \label{eq:NS} \end{equation} where $\bm u(\bm x,t)$ is the velocity field, $p$ the scalar pressure ensuring $\bm \nabla \cdot \bm u=0$, $\nu$ the viscosity and $\bm F$ represents an external stirring force. In the absence of viscosity ($\nu=0$), the NSE are invariant under time reversal, i.e. the simultaneous transformation $\bm u \to -\bm u$ and $t \to -t$, provided $\bm F$ respects this symmetry. This means that if at time $t$ we reverse the fluid velocity, the flow will trace back its evolution. The effects of viscosity are particularly subtle for turbulent flows at high Reynolds numbers: \begin{equation} Re = \frac{U_LL}{\nu} \end{equation} where $L$ is the characteristic length of the flow and $U_L$ the associated velocity. Fully developed turbulence corresponds to the fluid state realized in the limit $Re \to \infty$, which is equivalent to $\nu \to 0$ for a fixed large-scale flow configuration. As a result, one could naively think that in this limit the dynamics becomes reversible with zero mean energy flux. This is not observed: it is an empirical fact that in three dimensions turbulence dissipates energy at a finite average rate, $\langle \varepsilon\rangle$, independently of the value of the viscosity, a fact known as the {\it dissipative anomaly} \cite{frish_turbulence}. Thus, viscous effects play a singular role in the dynamics of turbulent flows. Moreover, it is also known that the Euler equations ($\nu =0$) can develop weak solutions \cite{de2010admissibility} that do not conserve energy, as conjectured by Onsager already in the 1940s. As a consequence, at least formally, there is no need for a viscous sink to absorb energy in three-dimensional fluids. As a result, we still lack a fundamental understanding of time irreversibility in the strongly out-of-equilibrium energy cascade (from large to small scales) observed in 3D turbulent flows. In particular, it is not clear how to disentangle the effects due to the explicit time-reversal symmetry breaking introduced by the viscous term from the breaking due to the attractor selected by the non-equilibrium dynamics, similarly to what happens for macroscopic time irreversibility in the thermodynamic limit of systems with a time-reversible microscopic dynamics \cite{rose1978fully,falkovich2006}. In this paper we further investigate this fundamental issue by studying the evolution of a family of dynamical models for the NSE equipped with a fully time-reversible viscosity, elaborating an original idea proposed by Gallavotti at the end of the '90s \cite{gallavotti1996equivalence,Gallavotti1997Dynamical} (see also \cite{gallavotti2004lyapunov,gallavotti2014equivalence}) and never fully checked in strongly out-of-equilibrium systems such as the turbulent energy cascade. In a nutshell, the idea consists in allowing the viscosity to change in such a way that some global quantity is exactly conserved, for example by fixing the total energy or enstrophy of the flow. In this way, we move from the original dynamics, where the viscosity is fixed and the total energy (or enstrophy) fluctuates chaotically in time around some stationary value, to a system where the viscosity oscillates while the energy (enstrophy) is fixed. Loosely speaking, we are playing a similar game when moving from canonical to microcanonical ensembles in equilibrium statistical mechanics.
Here the system will be out-of-equilibrium, and it is far from trivial to prove the equivalence of the two descriptions. In the original NSE, time-reversal symmetry breaking can be easily revealed by studying multipoint Eulerian or Lagrangian correlations, as for the case of the third-order moment of the velocity increments in configuration space \cite{frish_turbulence} or the relative dispersion of two or more particles \cite{sawford2005comparison,jucha2014time,biferale2005multiparticle}. Remarkably, irreversibility manifests also in the dynamics of a single fluid element, as recently found in \cite{xu2014,xu2014b} (see also \cite{cencini2017time}). Fluid elements, or tracers, evolve according to the dynamics $\dot{\bm x}=\bm v(t)\equiv \bm u(\bm x,t)$. By inspecting experimental and numerical tracer trajectories it was discovered that the Lagrangian kinetic energy, $\mathcal{E}(t)=\frac{1}{2}v^2(t)$, is dominated by events in which it grows more slowly than it decreases \cite{xu2014}. As a consequence, the rate of the kinetic energy change (Lagrangian power), \begin{equation} p(t)\equiv\dot{\mathcal{E}}(t)=\bm v(t)\cdot \bm a(t) \label{eq:pnse} \end{equation} where $\bm a$ is the particle's acceleration, is characterized by a skewed distribution with $\langle p^3\rangle$ negative and scaling with a power of the Reynolds number. Such asymmetry is directly linked to time irreversibility \cite{xu2014}. These features have been found also in compressible \cite{grafke2015} and two-dimensional turbulence \cite{xu2014,piretto2016irreversibility}. It should be emphasized that the skewness of the Lagrangian power is also relevant to more applied issues such as the stochastic modeling of single-particle transport in turbulent environmental flows \cite{sawford}. In \cite{cencini2017time}, the authors have investigated the Lagrangian power statistics by means of direct numerical simulations (DNS) of the NSE (\ref{eq:NS}) and of shell models of turbulence \cite{biferale2003shell,bohr2005}. By looking at observables that are sensitive to the asymmetry of the probability distribution function (pdf), we found that both the symmetric and the time-asymmetric components scale in the same way in the DNS data, and the scaling properties can be rationalized within the framework of the multifractal (MF) model of turbulence, which is blind to time-symmetry \cite{FP1985,benzi1984multifractal}. Because the measured asymmetry is very small and the Reynolds numbers are naturally limited by the numerical resolutions achievable in three dimensions, in the same paper we also studied shell models, where a clear difference in scaling between symmetric and anti-symmetric components was observed. Not surprisingly, by applying the same multifractal theory valid for the NSE it is possible to capture the symmetric part of the Lagrangian power statistics only. The latter result suggests that shell models are a good playground for asking precise questions concerning the relative importance of (time) symmetric \textit{vs} asymmetric components of the Lagrangian power pdf at Reynolds numbers otherwise not achievable in the NSE case. In the following we extend the study of time irreversibility initiated in \cite{cencini2017time} by using a family of \textit{time-reversible} shell models, obtained by modifying the viscosity according to Gallavotti's idea.
Besides the academic interest in such models, it is important to remark that reversible dissipative terms have also been used in Large Eddy Simulations (LES) of the NSE \cite{she1993constrained,carati2001modelling,fang2012time,Jimenez2015}. Therefore, investigating such reversible equations, even in the simplified framework of shell models, is of interest for the more general issue of developing effective models for the small scales of turbulence (see, e.g., the discussion in \cite{fang2012time}). Comparing \textit{vis-\`a-vis} the dynamics of the irreversible shell model (ISM) with its reversible (RSM) variant offers us a unique possibility to deepen the understanding of the Lagrangian power statistics and its connection with irreversibility. In particular, we show here that for the RSM the time reversibility is \textit{spontaneously} broken due to the non-equilibrium character of the dynamics. We also show that the RSM shares the same statistical properties as the ISM for all inertial degrees of freedom, i.e. those that are not directly affected by the properties of the specific time-reversible viscous mechanism, while the dissipative-range statistics differ. Our results suggest that time irreversibility is a robust property of the turbulent energy transfer, and that time-reversal symmetry is spontaneously broken on the attractor selected by the dynamics. The paper is organized as follows. In Section \ref{sec:shellmodels} we briefly recall the idea behind shell models, describe the particular model considered and introduce its reversible formulation. We end the section recalling how Lagrangian statistics can be studied within the shell model framework. In Section \ref{sec:confronto} we compare the statistics of the RSM and ISM; in particular, we focus on the structure functions and their scaling behavior in the inertial range. We end the section discussing the small-scale properties of the RSM, where the modified dissipation acts more strongly. Section \ref{sec:lagpow} is devoted to the Lagrangian power statistics. We first briefly summarize previous findings and then focus on the results of simulations of the two models. Section \ref{sec:conclusions} is devoted to conclusions. In Appendix \ref{sec:app_a} we provide some details on the numerical simulations of the RSM, while in Appendix~\ref{app:MF} we summarize the basics of the multifractal model for turbulence and its application to Lagrangian statistics. \section{Irreversible and reversible shell models \label{sec:shellmodels}} Shell models are finite-dimensional, chaotic dynamical systems providing a simplified laboratory for fundamental studies of fully developed turbulence \cite{frish_turbulence,bohr2005,biferale2003shell,ditlevsen2010turbulence}. These models have been introduced as drastic simplifications of the NSE and, remarkably, have been found to share with them many non-trivial properties encompassing the energy cascade, the dissipative anomaly, and intermittency with anomalous scaling for the velocity statistics. In this section we describe the so-called ``Sabra'' shell model \cite{Lvov_1998_improved_shellmodels} and introduce a variant of it where the dissipative term is modified as proposed in \cite{gallavotti1996equivalence,Gallavotti1997Dynamical} in order to obtain formally time-reversible equations. We end the section by showing how the shell model can be used to study Lagrangian power statistics.
\subsection{Standard (irreversible) Sabra shell model (ISM) \label{sec:sabra}} The Sabra shell model \cite{Lvov_1998_improved_shellmodels} is a modified version of the well-known Gledzer-Ohkitani-Yamada model \cite{gledzer1973system,ohkitani1989temporal} for which anomalous scaling was first observed \cite{jensen1991intermittency}. As typical for shell models, the dynamics is defined over a discrete number of shells in Fourier space arranged in a geometric progression $k_n=k_0 \lambda^{n-1}$ with $n=1,\ldots,N$ (with $k_0=1$ and $\lambda=2$ in our simulations). A complex velocity variable $u_n(t)$ is considered for each shell, which can be interpreted as the velocity fluctuation (eddy) at scale $k^{-1}_n$. The Sabra model equation for $u_n$ reads: \begin{equation} \begin{aligned} \dot{u}_n =& -\nu k_n^2 u_n +ik_n (a\lambda u_{n+2} u^*_{n+1} + bu_{n+1} u^*_{n-1} \\ &+ \frac{c}{\lambda} u_{n-1}u_{n-2}) + f_n \, , \end{aligned} \label{eq:sabra} \end{equation} where $^*$ denotes the complex conjugate. The first term on the rhs of (\ref{eq:sabra}) is the dissipation with constant viscosity $\nu$. Notice that this term explicitly breaks the time-reversal symmetry of the equation, i.e. the invariance under the transformation $t\to -t$ and $u_n\to -u_n$, as it does in the NSE. The second, non-linear term, preserving the time-reversal symmetry, couples velocity variables at different shells and is built in analogy with the non-linear term of the NSE in Fourier space. The coupling is restricted to neighboring shells, owing to the predominant locality of the energy cascade \cite{rose1978fully}. Choosing the coefficients with the prescription $a+b-c=0$ (in our simulations $a=1$ and $b=-1/2=-c$), the nonlinear term preserves two quadratic invariants, i.e. energy $E=\sum_n |u_n|^2$ and helicity $H=\sum_n (-1)^n k_n |u_n|^2$, similarly to the NSE. \begin{figure*}[t!] \centering \includegraphics[width=0.85\linewidth]{Fig1} \caption{Temporal dynamics of different observables measured during typical runs of both the ISM (\ref{eq:sabra}) (left side of each panel) and the RSM with viscosity given by (\ref{eq:nu_reversible}) (right side of each panel). On the $x$-axis of all panels time is measured in simulation units. Panels: (a) total energy $E$; (b) total enstrophy $\Omega$; (c) viscosity coefficient $\nu$; (d) energy dissipation rate $\varepsilon(t) = 2 \nu \Omega$. Continuous lines represent instantaneous values, dashed lines represent running averages (in time). For details on simulations, see Appendix \ref{sec:app_procedure} and \ref{sec:app_params}.} \label{fig:overview} \end{figure*} Finally, the last term $f_n$ represents the forcing, which injects energy at an average rate $\langle\varepsilon\rangle=\langle\sum_n \mathcal{R}\{f_n u_n^*\}\rangle$, where $\mathcal{R}$ denotes the real part. In our simulations we considered a constant forcing, which preserves the time-reversal symmetry, acting only on the large scales (small wavenumbers), $f_n=f\delta_{n,1}$ with $f={\rm const}$. \subsection{Reversible shell model (RSM)\label{sec:reversible}} As discussed above, the term $-\nu k_n^2 u_n$ in Eq.~(\ref{eq:sabra}) explicitly breaks the time-reversal symmetry. In this section, we show how it can be modified by allowing the viscosity to vary depending on the velocity variables in such a way that the dynamics is (formally) time-reversible. In this way we can directly probe the irreversibility due to the non-equilibrium energy cascade.
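For concreteness, the right-hand side of (\ref{eq:sabra}) can be coded in a few lines; the following NumPy sketch is purely illustrative, and all parameter values in it are hypothetical rather than taken from our runs:
\begin{verbatim}
import numpy as np

# Illustrative sketch of the Sabra right-hand side; values are hypothetical.
N = 20                        # number of shells
k0, lam = 1.0, 2.0
a, b, c = 1.0, -0.5, 0.5      # prescription a + b - c = 0
nu = 1e-7                     # fixed viscosity (ISM)
f = 0.005 * (1.0 + 1.0j)      # constant forcing on the first shell
k = k0 * lam ** np.arange(N)  # k_n = k0 * lam**(n-1), n = 1..N

def sabra_rhs(u):
    """du_n/dt of the irreversible Sabra model; u is complex, length N."""
    up = np.zeros(N + 4, dtype=complex)
    up[2:N + 2] = u           # two ghost (zero) shells on each side
    nonlin = 1j * k * (a * lam * up[4:] * np.conj(up[3:N + 3])
                       + b * up[3:N + 3] * np.conj(up[1:N + 1])
                       + (c / lam) * up[1:N + 1] * up[:N])
    forcing = np.zeros(N, dtype=complex)
    forcing[0] = f
    return -nu * k**2 * u + nonlin + forcing
\end{verbatim}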
The first proposal to modify the Navier-Stokes equations in such a way as to have a reversible dynamics is due to She and Jackson \cite{she1993constrained}, who introduced the constrained Euler equations, in order to devise a new Large Eddy Simulation (LES) scheme, by imposing a global constraint on the energy spectrum. On a more theoretical ground, Gallavotti \cite{gallavotti1996equivalence,Gallavotti1997Dynamical} proposed to modify the dissipative term by letting the viscosity depend on the velocity field in such a way as to conserve a global quantity, e.g. energy or enstrophy. The value of these quantities is then determined by the initial conditions, which should be taken so that the total energy or enstrophy, depending on the chosen constraint, equals the average value obtained from a long integration of the irreversible model dynamics. Gallavotti conjectured that these (formally) reversible equations should be ``equivalent'', in the spirit of the equivalence of ensembles in equilibrium statistical mechanics, to the (irreversible) NSE, at least in the limit of very high Reynolds number. This idea was then tested, for some aspects, in the 2D NSE \cite{gallavotti2004lyapunov} and, more recently, in the Lorenz 1996 model \cite{gallavotti2014equivalence}, which can be thought of as a single-scale shell model. Here we apply these ideas to the shell model (\ref{eq:sabra}). Past attempts to modify (\ref{eq:sabra}) by imposing energy conservation have encountered some difficulties in reproducing the dynamics of the original shell model \cite{Biferale1998}. When fixing the energy, we found similar difficulties. Briefly, the main problem is that, in the regime of energy cascade, the value of the mean energy is essentially determined at the integral (forcing) scales and is basically independent of the viscosity. Therefore, fixing the energy alone does not fix the extension of the inertial range (\textit{viz.} the Reynolds number). On the other hand, fixing the enstrophy \begin{equation} \Omega=\sum_n k_n^2 |u_n|^2 \, \label{eq:omega_def} \end{equation} enforces a constraint on the small scales, so that once its value is imposed via the initial condition the extension of the inertial range, and thus the Reynolds number, is well defined also in the reversible model. By inserting (\ref{eq:sabra}) into the requirement \begin{equation} \dot{\Omega}=0 \, , \label{eq:omega_constant} \end{equation} one obtains the dynamical evolution for the time-reversible viscosity \begin{equation} \begin{aligned} \nu_R(t) &= \frac{\sum_n k_n^2 \mathcal{R}\{f_n u_n^*\}}{\sum_n k_n^4 |u_n|^2} +\\ &+\frac{\sum_n\! k_n^3 \!\left[a\lambda C_{3,n\!+\!1}\!+\!bC_{3,n}\!-\!\frac{c}{\lambda}C_{3,n\!-\!1}\right]}{\sum_n k_n^4 |u_n|^2} \, , \end{aligned} \label{eq:nu_reversible} \end{equation} where $C_{3,n}= -\mathcal{I}\{u_{n+1}u^*_nu^*_{n-1}\}$, and $\mathcal{I}$ stands for the imaginary part. It is worth noticing that there are two terms on the rhs of Eq.~(\ref{eq:nu_reversible}) because enstrophy is both injected by the forcing (first term) and produced by the nonlinear dynamics (second term). Most importantly, since $\nu_R$ is odd in the velocity variables, the modified dissipative term $-\nu_R k_n^2 u_n$ preserves the time-reversal symmetry, i.e. it does not change sign for $t\to -t$ and $u_n\to -u_n$. Since $\nu_R$ is a dynamical quantity, the initial condition for the $u_n$ becomes the only way of controlling the separation between the injection and dissipation scales in the system.
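As an illustration of how (\ref{eq:nu_reversible}) can be evaluated numerically, the following sketch (using the same illustrative conventions and global variables as the snippet above) computes $\nu_R$ from the instantaneous shell variables:
\begin{verbatim}
def reversible_viscosity(u, forcing):
    """Sketch of nu_R, eq. (nu_reversible): keeps the enstrophy constant."""
    upad = np.zeros(N + 2, dtype=complex)
    upad[1:N + 1] = u
    # C_{3,n} = -Im{u_{n+1} u*_n u*_{n-1}}; ghost shells are zero
    C3 = -np.imag(upad[2:] * np.conj(upad[1:-1]) * np.conj(upad[:-2]))
    C3pad = np.concatenate(([0.0], C3, [0.0]))  # access C_{3,n-1}, C_{3,n+1}
    num = (np.sum(k**2 * np.real(forcing * np.conj(u)))
           + np.sum(k**3 * (a * lam * C3pad[2:] + b * C3pad[1:-1]
                            - (c / lam) * C3pad[:-2])))
    return num / np.sum(k**4 * np.abs(u)**2)
\end{verbatim}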
Increasing the enstrophy of the initial condition increases the separation of scales and vice versa. Further details on the simulation procedure can be found in the Appendices \ref{sec:app_procedure} and \ref{sec:app_params}. We conclude the presentation of the reversible shell model by showing, in Fig.~\ref{fig:overview}, the time evolution of some global observables such as the energy, the enstrophy, the energy dissipation and the viscosity itself, measured both in the ISM and in the RSM. As one can see, in spite of the drastically different behavior of the enstrophy and viscosity (Fig. \ref{fig:overview}b,c), the qualitative features of the energy and energy dissipation are similar. The highly intermittent behavior of the energy dissipation, $\varepsilon(t)$, is qualitatively preserved in the RSM. Notice that in the ISM the time-dependent energy dissipation reads $\varepsilon(t)=2\nu \Omega(t)$ while in the RSM it takes the form $\varepsilon(t)=2\nu_R(t)\Omega$, i.e. the time-dependent quantity is the enstrophy in the former and the viscosity in the latter, with the enstrophy $\Omega$ fixed at the average value obtained from the ISM. Also, the time average of the variable viscosity (\ref{eq:nu_reversible}) is approximately equal to the value of $\nu$ in the corresponding irreversible simulation, which is a prerequisite for the dynamical equivalence between the two dynamics \cite{gallavotti1996equivalence}. \begin{figure*}[h!] \centering \includegraphics[width=0.85\linewidth]{Fig2} \caption{Comparison between the two models. (a) Structure function $F_q(k_n)$ of order $q=2,4,6$ (as labeled) vs $k_n$, for the ISM (solid curves) and RSM (dashed curves). (b) Scaling exponents $\zeta(q)$ obtained by fitting the structure functions in the inertial range in the two models, compared with the K41 dimensional prediction ($q/3$) and the multifractal one (\ref{eq:zetaEMF}), see legend. (c) Energy flux $\Pi^E_n$ as a function of the scale in both models. In all panels the error bars are smaller than the symbols. For details on the parameters of simulations see Appendix \ref{sec:app_params} (parameter sets \textbf{I1} and \textbf{R1}). \label{fig:SFrev}} \end{figure*} \subsection{Lagrangian statistics in shell models} For shell models, which lack a spatial structure, there is no obvious recipe for introducing a Lagrangian velocity. However, as observed in \cite{boffetta2002lagrangian}, the quantity \begin{equation} v(t) = \sum_n \mathcal{R}\{u_n(t)\} \label{eq:vlag} \end{equation} can be regarded as a sort of Lagrangian velocity. The choice of the real part is arbitrary; working with the imaginary part gives equivalent results. The rationale for (\ref{eq:vlag}) is that the Lagrangian velocity is the superposition of eddies at all scales, $u_n$ in the shell models. Since the shell model is not affected by sweeping \cite{bohr2005}, such a superposition is expected to reproduce the statistics of the velocity along the particle path. Indeed it has been shown that $v(t)$ as defined above shares many qualitative and quantitative features of the Lagrangian velocity statistics of real 3D turbulent flows \cite{boffetta2002lagrangian}. In particular, Lagrangian structure functions have been shown to display a scaling behavior with exponents deviating from the dimensional prediction and quantitatively close to those observed in experiments and simulations of the NSE \cite{mordant2001measurement,chevillard2003lagrangian,biferale2008lagrangian,arneodo2008universal}.
Using (\ref{eq:vlag}) as a definition of the Lagrangian velocity in the shell models, we define the Lagrangian acceleration as \begin{equation} a(t) = \dot{v} = \sum_n \mathcal{R}\{\dot{u}_n(t)\}\,, \label{eq:a} \end{equation} and the Lagrangian power \begin{equation} p = va= \sum_n\mathcal{R}\{u_n\} \sum_m \mathcal{R}\{\dot{u}_m\}\,, \label{eq:p} \end{equation} whose statistics can be studied in order to explore the issue of Lagrangian time irreversibility. We notice that the constant forcing on the first shell, which is used in our simulations, imposes a strong constraint on the phases of the first shells, leading to $\langle v(t) \rangle \neq 0$. Since, in principle, this may induce some spurious effects on the asymmetry of the power statistics, we have also tested our results with a (time-reversible) stochastic forcing for which the statistics of $v$ is symmetric around $\langle v(t) \rangle = 0$, though non-Gaussian. Since the results we present are independent of the forcing choice, in the following we shall only show the constant-forcing results; for a comparison with the other choice the reader may consult \cite{cencini2017time}. \section{Energy cascade and anomalous scaling in the reversible shell model \label{sec:confronto}} The modified viscosity (\ref{eq:nu_reversible}) can be interpreted within the framework of large eddy simulations as an effective model for small-scale dissipation. In this respect it is worth mentioning that also for the NSE several time-reversible LES models have been proposed \cite{carati2001modelling,fang2012time,Jimenez2015}. It is thus important to verify whether and to what extent the RSM is able to reproduce the inertial-range physics of the ISM. In particular, here, we study the scaling behavior of the velocity structure functions, which for shell models read \cite{jensen1991intermittency,biferale2003shell,bohr2005} \begin{equation} F_q(k_n)=\langle|u_n|^q\rangle\sim k_n^{-\zeta(q)}\,, \label{eq:sfshell} \end{equation} together with the energy spectrum defined as $E_n \equiv F_2(k_n)=\langle |u_n|^2 \rangle$. For the standard Sabra shell model it has been shown \cite{Lvov_1998_improved_shellmodels} that the exponents $\zeta(q)$ deviate from the dimensional (Kolmogorov 1941, K41) prediction, i.e. $\zeta(q)\neq q/3$, and are quantitatively close to the exponents of the Eulerian structure functions observed in experiments and simulations of the NSE. In Fig.~\ref{fig:SFrev}a, we compare the structure functions $F_q(k_n)$ for $q=2,4$ and $6$ obtained from both the ISM and the RSM. As one can see, their inertial-range scaling behavior is essentially indistinguishable. This is further confirmed in Fig.~\ref{fig:SFrev}b where we compare the scaling exponents $\zeta(q)$ obtained by fitting the structure functions in both models. In Fig.~\ref{fig:SFrev}b we also show that the scaling exponents are very well described by the multifractal formula (see Appendix~\ref{app:MF} for a brief summary of the MF model for turbulence) \begin{equation} \zeta(q)= \inf_{h} \{hq+3-D(h)\}\,, \label{eq:zetaEMF} \end{equation} where for $D(h)$ we used a log-Poisson model [see Eq.~(\ref{dofh})]. The constancy of the energy flux, $\Pi_n^E$, through the scale $k_n$, displayed in Fig.~\ref{fig:SFrev}c, confirms that in both models a direct energy cascade is taking place. We remark, however, that the reversible shell model displays a slightly reduced inertial range, as the inertial scaling disappears a few shells before its irreversible counterpart.
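For completeness, the Lagrangian diagnostics of Eqs.~(\ref{eq:vlag})--(\ref{eq:p}) and the structure functions (\ref{eq:sfshell}) reduce to simple post-processing of the shell variables; a sketch (same illustrative conventions as the snippets above):
\begin{verbatim}
def lagrangian_power(u, dudt):
    """v, a and p of eqs. (vlag), (a), (p) from the shell variables."""
    v = np.sum(np.real(u))
    acc = np.sum(np.real(dudt))
    return v, acc, v * acc

def structure_functions(snapshots, q):
    """F_q(k_n) = <|u_n|^q> averaged over a list of snapshots of u."""
    return np.mean(np.abs(np.asarray(snapshots))**q, axis=0)
\end{verbatim}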
A major difference between the two models is apparent in the dissipative range. Indeed the RSM shows a non-trivial behavior at the scales where the ISM is exponentially damped by the fixed-viscosity dissipation. \subsection{Small scale behavior of the RSM} The reversible and irreversible shell models display different statistics at small scales, due to the different dissipative schemes. In the following we focus on this range of scales by looking at the energy and enstrophy spectra as the Reynolds number, i.e. the extension of the inertial range, is varied. In the ISM, we observe an exponential suppression of turbulent fluctuations after the inertial range of scales, i.e. above the Kolmogorov wavenumber, $k_\eta \approx (\nu^3/\langle\varepsilon\rangle)^{-1/4}$. Conversely, in the RSM, we can distinguish an additional range of scales for $k>k_\eta$ characterized by a scaling close to a power law, as is clear from Fig.~\ref{fig:Confronto_spettri_rev}a, where we show the energy spectrum for increasing $\Omega$. At even larger wavenumbers, this power-law decay is followed by an exponential suppression, which is not visible in Fig.~\ref{fig:Confronto_spettri_rev}a due to the limited resolution but is clearly observed in simulations at smaller $\Omega$ (not shown). As shown in Fig.~\ref{fig:Confronto_spettri_rev}b, the post-inertial range of scales shows a trend toward constancy of the enstrophy across the different $k_n$, suggesting equipartition of enstrophy and $E_n \sim k_n^{-2}$. Simulations at higher resolution (high $N$) and high values of $\Omega$ are computationally very demanding, due to the stiffness of the ODEs (\ref{eq:sabra}) and their numerical instability, so we were not able to explore higher values of $\Omega$ and determine unambiguously whether an effective equipartition of enstrophy is reached in the limit $\Omega \rightarrow \infty$. A further complication in understanding the physics of this range of scales is that both enstrophy equipartition and an enstrophy cascade (constant flux) are characterized by the same energy spectrum scaling, $E_n \sim k_n^{-2}$, making it difficult to infer the physical mechanism behind the observed dynamics from the spectrum alone. To disentangle the two possibilities, one would have to look at the enstrophy flux; however, unlike $E$ or $H$, the enstrophy $\Omega$ is not an invariant of the non-linear term of equation (\ref{eq:sabra}), and its time derivative cumulated over the first $M$ shells cannot be interpreted as a rate of transfer. In our simulations we found that the enstrophy dynamics is dominated by the balance between the enstrophy generated by the non-linear interactions and the enstrophy dissipation. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{Fig3} \caption{Energy (a) and enstrophy spectra (b) of the RSM for different values of the total enstrophy $\Omega$. The enstrophy spectra in (b) have been rescaled in order to keep the Kolmogorov length-scale $O(1)$. Notice that in the RSM the Kolmogorov scale can be defined as $k_\eta\approx (\langle \nu \rangle ^3 / \langle \varepsilon \rangle)^{-1/4}\sim \langle \varepsilon \rangle^{-1/2} \Omega^{3/4}$, where we used that $\langle \nu \rangle=\langle \varepsilon\rangle /(2\Omega)$. Errors, not shown, are of the same order as the symbol size or smaller.
For details on simulations, see Appendix \ref{sec:app_a} (parameter sets \textbf{R2-5}).} \label{fig:Confronto_spettri_rev} \end{figure} Regardless of the underlying physical mechanism, the existence of a post-inertial range of scales suggests that the energy dissipation statistics of the RSM could be substantially different from that of the ISM. We thus studied the moments of the energy dissipation in both models as the Reynolds number is varied. More specifically, we studied how the moments depend on the Taylor-scale Reynolds number defined as $Re_\lambda=E/\sqrt{\nu\langle\varepsilon\rangle}$, i.e. as the ratio between the large-scale time scale, $T_L=E/\langle\varepsilon\rangle$, and the small-scale Kolmogorov time scale, $\tau_\eta=\sqrt{\nu/\langle\varepsilon\rangle}$. For the ISM, the moments of the energy dissipation are known to follow a power-law scaling in $Re_\lambda$ \cite{boffetta2000energy} \begin{equation} \label{eq:momeps} \langle \varepsilon^q\rangle \sim Re_\lambda^{\chi(q)} \end{equation} with the exponents $\chi(q)$ in agreement with the multifractal model as (see also Appendix~\ref{app:MF}) \begin{equation} \chi(q)=\sup_h\left\{2\frac{D(h)-3-(3h-1)q}{1+h}\right \}\,, \label{eq:MT_resulteps} \end{equation} where $D(h)$ is the same function used for the structure functions (\ref{eq:zetaEMF}). Since, in the RSM, the viscosity (\ref{eq:nu_reversible}) can assume negative values, we studied the moments of the absolute value of the energy dissipation, $\langle |\varepsilon|^q \rangle$ (we also checked that moments preserving the sign, such as $\langle |\varepsilon|^{q-1}\varepsilon\rangle$, give the same results; not shown). In Fig.~\ref{fig:epsilonmom} we show the exponents obtained by fitting the scaling behavior (\ref{eq:momeps}) of the moments of the energy dissipation for both the RSM and the ISM, together with the prediction (\ref{eq:MT_resulteps}). As one can see, in the RSM the moments are clearly different from the ISM values, which are well predicted by (\ref{eq:MT_resulteps}). In particular, the exponents of the RSM are smaller, meaning that the intermittency of $\varepsilon$ is weaker in the reversible model. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{Fig4} \caption{Scaling exponents of the moments of the energy dissipation (\ref{eq:momeps}) for the RSM (circles) and for the ISM (squares). The dashed line represents the MF prediction (\ref{eq:MT_resulteps}). Errors, not shown, are of the same order as the symbol size or smaller. For details on simulations, see Appendix \ref{sec:app_a} (parameter sets \textbf{I2-9} and \textbf{R6-14}).} \label{fig:epsilonmom} \end{figure} \section{Lagrangian Power statistics and time irreversibility\label{sec:lagpow}} It is useful to start this section by briefly summarizing previous findings on the Lagrangian power statistics in turbulence. As mentioned in the introduction, by inspecting both experimental and numerical trajectories of Lagrangian tracers, Xu et al.~\cite{xu2014} discovered that time increments of the Lagrangian kinetic energy are negatively skewed and that this skewness persists for the time derivatives, i.e. for the Lagrangian power (\ref{eq:pnse}). Such skewness is directly linked to the time irreversibility of the tracer dynamics, as it means that the probabilities of gaining and losing kinetic energy are not the same, though $\langle p\rangle=0$ (by stationarity).
In particular, they found that approximately: \begin{equation} \langle p^2 \rangle \simeq \langle\varepsilon\rangle^2 Re_\lambda^{4/3}\,, \quad \langle p^3 \rangle \simeq -\langle\varepsilon\rangle^3 Re_\lambda^{2}\,. \label{eq:mom2e3xu} \end{equation} The above results convey two messages. First, the probability density function of $p$ is skewed, with $\langle p^3\rangle /\langle p^2\rangle ^{3/2}\approx const<0$, suggesting that time irreversibility is robust and persists in the limit $Re_\lambda \to \infty$. Second, the exponents $4/3$ and $2$, which approximately describe the scaling behavior of the second and third moment, strongly deviate from the dimensional prediction based on K41 theory, according to which \begin{equation} \langle p^q\rangle/\langle\varepsilon\rangle^q \propto Re_\lambda^{q/2} \, , \label{eq:K41_lagpower} \end{equation} meaning that the Lagrangian power is strongly intermittent. It has been shown, in \cite{cencini2017time}, that the deviations from (\ref{eq:K41_lagpower}) can be understood within the framework of the multifractal model for turbulence (see also Appendix~\ref{app:MF}). In particular, the MF model predicts that \begin{equation} \langle p^q \rangle\! \sim\! \langle\varepsilon\rangle^q Re_\lambda^{\alpha(q)} \label{eq:MF_lagpower} \end{equation} with \begin{equation} \alpha(q)=\sup_h\left\{2\frac{(1-2h)q-3+D(h)}{1+h}\right\}\,. \label{eq:MF_lagpower_exponent} \end{equation} In \cite{cencini2017time} it is also shown that, defining the Lagrangian power as in (\ref{eq:p}), the (irreversible) shell model displays intermittent statistics for $p$ but, at variance with NS-turbulence data, deviations from the prediction (\ref{eq:MF_lagpower_exponent}) are present, at least in the statistical asymmetries of the power pdf. In this section, we broaden the investigation by comparing the Lagrangian power statistics in both the ISM and the RSM. \subsection{Moments and asymmetry of Lagrangian power} For both the ISM and RSM the Lagrangian power is defined according to Eq.~(\ref{eq:p}). \begin{figure} \centering \includegraphics[width=0.99\linewidth]{Fig5} \caption{Probability density function of the Lagrangian power normalized by the average energy input rate, $p/\langle \varepsilon\rangle$, at three values of $Re_\lambda$ for the ISM. To highlight tail asymmetries, the pdf is plotted against $|p|/\langle \varepsilon\rangle$; the positive/negative tail is shown with solid/dashed lines. Inset: the three pdfs of the main plot normalized with $p_{rms}=\langle p^2\rangle^{1/2}$; the curves do not overlap, which is the signature of intermittency in the power statistics. For details on simulations, see Appendix \ref{sec:app_a} (parameter sets \textbf{I4}, \textbf{I6}, \textbf{I8}).} \label{fig:pdfP} \end{figure} As discussed above, time irreversibility reveals itself in the odd-order moments of the power, which are sensitive to the asymmetries in the tails of the pdf of the power. Such asymmetries are shown in Fig.~\ref{fig:pdfP} for different values of $Re_\lambda$. The absence of collapse onto a unique curve for the pdf of $p/\langle p^2\rangle^{1/2}$ (shown in the inset) highlights the presence of intermittency in the statistics of $p$. Here, following \cite{cencini2017time}, in order to probe the scaling behavior of the symmetric and asymmetric components of the statistics we introduce two non-dimensional moments: \begin{equation} S_q=\frac{\langle |p|^q\rangle}{\langle\varepsilon\rangle^q}; \quad A_q=\frac{\langle p|p|^{q-1}\rangle}{\langle\varepsilon\rangle^q}\,.
\label{eq:defmom} \end{equation} Clearly, the latter vanishes for a symmetric (time-reversible) pdf. \begin{figure*} \centering \includegraphics[width=0.85\linewidth]{Fig6} \caption{Lagrangian power statistics in the ISM. Power moments $S_q$ and $-A_q$ (see legend) as a function of $Re_\lambda$ for (a) $q=2$ and (b) $q=3$. The curves for $-A_q$ have been shifted vertically to highlight the difference with respect to $S_q$. The black solid line shows the MF prediction (\ref{eq:MF_lagpower})--(\ref{eq:MF_lagpower_exponent}). Panel (c): exponents for the $Re_\lambda$ dependence fitted from $S_q$ and $-A_q$ compared with the K41 (\ref{eq:K41_lagpower}) and MF predictions (\ref{eq:MF_lagpower})--(\ref{eq:MF_lagpower_exponent}). Inset: $Re_\lambda$-dependence of $\langle p/|p|\rangle\propto Re_\lambda^{-\mu}$ with $\mu\approx 0.187(7)$ as obtained by a best fit shown as a black line. Where error bars are not shown, they are smaller than or equal to the symbol size. For details on simulations, see Appendix \ref{sec:app_a} (parameter sets \textbf{I2-9}). \label{fig:SMirrev}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.85\linewidth]{Fig7} \caption{Lagrangian power statistics in the RSM. Panels (a), (b) and (c) present the same quantities as in Fig.~\ref{fig:SMirrev} but for the RSM. Inset of panel (c): $Re_\lambda$-dependence of $\langle p/|p|\rangle\propto Re_\lambda^{-\mu}$ with $\mu\approx 0.20(7)$ as obtained by a best fit shown as a black line. For details on simulations, see Appendix \ref{sec:app_a} (parameter sets \textbf{R2-5}).} \label{fig:SMrev} \end{figure*} The main results on the Lagrangian power moments are summarized in Fig.~\ref{fig:SMirrev} and Fig.~\ref{fig:SMrev} for the ISM and RSM, respectively. In Fig.~\ref{fig:SMirrev}a,b (Fig.~\ref{fig:SMrev}a,b) we show the second and third moments for the ISM (RSM), respectively. Two observations are in order. \begin{enumerate} \item As for the ISM, the symmetric moments $S_q$ are in excellent agreement with the scaling behavior in $Re_\lambda$ predicted by the MF model obtained using (\ref{eq:MF_lagpower_exponent}) with the $D(h)$ given by (\ref{dofh}) (see Fig.~\ref{fig:SFrev}b). Conversely, deviations from the MF prediction are evident in the RSM. \item For both models, the asymmetric moments $A_q$ are negative (positive) for $q>1$ ($q<1$) (we recall that $A_1=0$ by stationarity). The non-vanishing values of $A_q$ for $q\neq 1$ are the signature of time-reversal symmetry breaking. In both models, the scaling behavior of $A_q$ is definitely different from that of $S_q$. In particular, the exponents are smaller and thus the asymmetry in the tails appears to be subleading with respect to the symmetric component. \end{enumerate} The second observation implies that the generalized skewnesses $\tilde{S}_q=-A_q/S_q$, which measure the scaling ratio between the asymmetric and the symmetric components of the statistics at varying order, are decreasing functions of $Re_\lambda$. This suggests that there is a statistical recovery of the time-reversal symmetry in the limit of infinite $Re_\lambda$, at variance with what is observed in NS turbulence \cite{xu2014,xu2014b,cencini2017time}. It is important to stress that the decay of the generalized skewness does not imply the decay of standard measures of skewness \cite{cencini2017time}, such as e.g.
$\langle p^3\rangle /\langle p^2\rangle^{3/2}$, which may still grow with $Re_\lambda$ due to intermittency corrections (see \cite{BV2001} for a similar issue in the problem of the statistical recovery of isotropy). Figure~\ref{fig:SMirrev}c (Fig.~\ref{fig:SMrev}c) summarizes the results concerning the scaling exponents of the moments of the power in the ISM (RSM). We can see the excellent agreement between the fitted exponents for $S_q$ of the ISM and the MF prediction. Strong deviations from the MF prediction are evident for the RSM, which is characterized by exponents smaller than predicted, denoting less intermittent statistics. This behavior is consistent with the observation made for the energy dissipation (Fig.~\ref{fig:epsilonmom}). This points to a major role played by the contribution of the dissipative terms to the Lagrangian power of shell models. To verify this, for the ISM, we decomposed the power into its contributions due to forcing, $p_f=v \sum_n \mathcal{R}\{f_n\}$, dissipation, $p_d=-v \nu \sum_n k_n^2 \mathcal{R}\{u_n\}$, and nonlinear terms, $p_{nl}=v \sum_n \mathcal{R} \{ i k_n (a\lambda u_{n+2} u^*_{n+1} + bu_{n+1} u^*_{n-1} + \frac{c}{\lambda} u_{n-1}u_{n-2}) \}$, with $p=p_f+p_{d}+p_{nl}$. We found that $\langle p_d^2\rangle/\langle p^2\rangle\approx 1$ and $\langle p_{nl}^2\rangle/\langle p^2\rangle\approx 2$ independently of $Re_\lambda$, which confirms that the dissipative and non-linear contributions scale as the total power and that they are of the same order. This is at odds with what has been observed in DNS of turbulent flows \cite{xu2014b}, where the statistics is dominated by the pressure gradients, i.e. by the nonlinear terms, and the dissipative contribution was found to be subleading in terms of scaling and less intense with respect to the nonlinear one. Figures~\ref{fig:SMirrev}c and \ref{fig:SMrev}c also show the exponents obtained by fitting the scaling behavior of the antisymmetric moments. For both the ISM and the RSM these exponents can be linked to the symmetric exponents by a rigid shift, i.e. \begin{equation} -A_q \sim S_q Re_\lambda^{-\mu}\,. \label{eq:shift} \end{equation} We found this relation to be consistent with the assumption that, in terms of scaling behavior, there is a decoupling between the absolute value of the power and its sign, i.e. $A_q \sim \langle p/|p|\rangle S_q$. Indeed for both models, as shown in the insets of Figs.~\ref{fig:SMirrev}c and \ref{fig:SMrev}c, we found \begin{equation} \langle p/|p|\rangle \sim Re_\lambda^{-\mu}\,, \label{eq:sign} \end{equation} with $\mu \approx 0.18$ ($0.2$) for the ISM (RSM). At present, this is just an observation of which we do not have a clear understanding. It should be remarked that the relation between (\ref{eq:shift}) and (\ref{eq:sign}) shows that the multifractal model is not completely failing in reproducing the asymmetries of the power statistics, and that the scaling behavior of the asymmetries is compatible, modulo the cancellation exponent $\mu$ (see also \cite{ott1992sign}), with the multifractal phenomenology. We emphasize that in DNS of the Navier-Stokes equations \cite{cencini2017time} there is no evidence of a cancellation exponent different from zero, suggesting that the asymmetry persists also in the infinite-Reynolds-number limit.
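In practice, the moments (\ref{eq:defmom}) and the fitted scaling exponents can be obtained along the following lines; this is an illustrative sketch, and the time series and list of Reynolds numbers are assumed to come from the simulations:
\begin{verbatim}
def power_moments(p_series, eps_mean, q):
    """S_q and A_q of eq. (defmom) from a Lagrangian power time series."""
    p = np.asarray(p_series)
    S_q = np.mean(np.abs(p)**q) / eps_mean**q
    A_q = np.mean(p * np.abs(p)**(q - 1)) / eps_mean**q
    return S_q, A_q

def fit_scaling_exponent(Re_lambda, moment_values):
    """Least-squares slope of log|moment| vs log(Re_lambda)."""
    slope, _ = np.polyfit(np.log(Re_lambda),
                          np.log(np.abs(moment_values)), 1)
    return slope
\end{verbatim}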
\section{Discussions and Conclusions \label{sec:conclusions}} In this paper we have introduced a time-reversible shell model for turbulence, obtained by modifying the dissipative term of the so-called Sabra model, allowing the viscosity to vary in such a way as to maintain the total enstrophy constant. In spite of the formal time reversibility of the model, we found that the dynamics spontaneously breaks the time-reversal symmetry, selecting an attractor on which irreversibility manifests itself in the asymmetry of the Lagrangian power statistics. A detailed quantitative comparison between the reversible and irreversible (original) shell models has shown that the former reproduces well the inertial-range physics of the latter; indeed, the structure functions of the two models are indistinguishable in the inertial range. On the contrary, the modified viscous term of the reversible model is responsible for important modifications of the physics below the Kolmogorov scale. While at these scales the irreversible model is characterized by an energy spectrum with an exponential fall-off, in the reversible model an intermediate range appears whose physics is close to enstrophy equipartition. The difference between the two models in this range of scales is responsible for the different statistics of the energy dissipation. As for the Lagrangian power statistics, we found that even though qualitatively the two models display the same features, quantitative details are different. In particular, the exponents characterizing the scaling behavior of the moments of the power of the reversible model are smaller than those of the irreversible model. These differences are consistent with those observed for the energy dissipation and have, possibly, a similar origin in the non-trivial physics of the reversible model below the Kolmogorov scale. As for the irreversible shell model, consistently with our previous observations \cite{cencini2017time}, we found that, independently of the nature of the forcing, the (time-reversible) symmetric component of the power statistics is well captured by the multifractal model, while deviations are present for the (time-irreversible) asymmetric component, which is characterized by smaller exponents. However, numerical evidence suggests that these deviations can be traced back to the Reynolds-number dependence of the sign of the power (cancellation exponent \cite{ott1992sign}). This indicates that the bulk part of the statistics is well captured by the multifractal model. Time-reversible sub-grid models for Large Eddy Simulations of the NSE might be important to better capture backscatter events where the energy is locally transferred from small to large scales in turbulence, i.e. when an inverse energy transfer is observed. The issue is particularly subtle considering that there is no unique meaning of local energy transfer in configuration space and that some of the inverse transfer events are probably simply due to large instantaneous fluctuations disconnected from any robust transfer mechanism \cite{chaodyn}. \begin{acknowledgements} We thank R. Benzi, M. Sbragaglia and G. Gallavotti for fruitful discussions. We acknowledge support from the COST Action MP1305 ``Flowing Matter''. LB and MDP acknowledge funding from the ERC under the EU $7^{th}$ Framework Programme, ERC Grant Agreement No 339032. \end{acknowledgements} \section*{Authors contribution statement} All the authors conceived the study.
MDP performed the simulations of the RSM; MC performed the simulations of the ISM. All the authors analyzed the data, discussed the results and wrote the manuscript.
\section*{Acknowledgments} This work (generation of configurations, computation of the static potential and the Polyakov loop) was supported by a grant of the Russian Science Foundation (project number 15-12-20008). A.Yu.K. acknowledges support from the Dynasty Foundation. The work was partially supported by RFBR grant 16-32-00048. This work has been carried out using computing resources of the federal collective usage center Complex for Simulation and Data Processing for Mega-science Facilities at NRC ``Kurchatov Institute'', \url{http://ckp.nrcki.ru/}. In addition, we used the supercomputer of the Institute for Theoretical and Experimental Physics (ITEP). \bibliographystyle{apsrev4-1}
\section{Introduction} \label{sec:intro} In recent years, performance has become increasingly limited by power consumption as Dennard scaling has come to an end~\cite{taylor2012dark}. The effect where the available power budget allows for different maximum frequencies depending on the number of cores is called dim silicon~\cite{huang2011scaling}. The same effect also applies to different instruction mixes. As different operations cause different switching activity on the chip, they consume different amounts of energy, so complex instructions have to be executed at a lower frequency. Similarly, if unused parts of the chip are power-gated because they are not required by simpler operations, the resulting power savings can be used to increase the frequency. The power budget is not only limited due to thermal constraints but also due to power supply limitations\footnote{In our tests, recent Intel CPUs have reported maximum current as the most common reason for frequency changes in AVX-512-heavy workloads.}, where even short-term transgressions could cause instability due to voltage drops. As the large size of the SIMD registers used by recent SIMD instruction set extensions causes high power variation, recent CPUs have started to vary their frequency based on the workload to maximize performance under power budget constraints. For example, Intel CPUs reduce their clock speed as soon as code containing AVX2 and AVX-512 instructions is executed~\cite{xeonscalableerrata}. However, every frequency change causes some overhead~\cite{park2013accurate}, because the system has to wait for voltages to change\footnote{The frequency can only be increased when sufficient voltage is available, leading to frequency change delays and a resulting \enquote{underclocking loss}~\cite{park2013accurate}.} and clock signals to stabilize. Therefore, even if no AVX2 and AVX-512 instructions are executed anymore, these CPUs delay increasing the clock speed~\cite{optimizationmanual}. This mechanism ensures that if the code continues executing these vectorized instructions shortly afterwards, no excessive number of frequency changes is performed. For some workloads, the delay causes overhead, though, as parts of the software which could be executed at a higher frequency are needlessly slowed down. For example, a simple benchmark using the nginx web server is slowed down by 10\% if the SSL library used by the web server is compiled with support for AVX-512, as the CPU frequency is reduced during AVX-512-heavy encryption and decryption, but the frequency change also affects the non-vectorized parts of the web server~\cite{krasnovdangers}. A policy similar to this constant-delay policy is employed in the area of dynamic power management. In this area, a similar trade-off is found, as disabling devices saves energy but incurs overhead both during shutdown and reactivation. The widely-used \emph{fixed timeout} policy shuts down devices after a fixed delay~\cite{benini2000survey}, where the delay is usually equal to the \emph{break-even time} in order to improve worst-case power consumption~\cite{karlin1994competitive}. In the area of power management, research has brought up a plethora of other shutdown strategies promising higher energy savings~\cite{benini2000survey} and has shown that input from the application can be used to further improve power efficiency~\cite{venkatachalam2005power}. It is likely that similar approaches can be used to reduce DVFS overhead for partially power-intensive workloads.
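To make the analogy concrete, the following toy model sketches the fixed timeout policy in normalized units (on-state power 1, off-state power 0, so the break-even time equals the transition energy); all names and values are hypothetical:
\begin{verbatim}
# Toy model of the fixed-timeout shutdown policy (normalized units:
# on-state power = 1, off-state power = 0; transition_energy is the cost
# of one shutdown/wakeup cycle, which then equals the break-even time).
def fixed_timeout_energy(idle_periods, timeout, transition_energy):
    """Energy spent over a sequence of idle-period lengths."""
    energy = 0.0
    for idle in idle_periods:
        if idle > timeout:
            # stayed on until the timeout fired, then paid the transition
            energy += timeout + transition_energy
        else:
            # the device became busy before the timeout; it stayed on
            energy += idle
    return energy

# With timeout == transition_energy (the break-even time), each idle
# period costs at most twice what an oracle would pay: 2-competitiveness.
\end{verbatim}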
In this work, we show that, in particular, input from the application can be used to predict whether immediate reclocking makes sense. Our contributions are as follows: \input{fig/freqlevels.tex} \input{fig/overhead-with-ht.tex} \input{fig/overhead-without-ht.tex} \begin{itemize} \item We describe the parallels between DVFS in dim silicon scenarios and dynamic power management. The duality allows applying research from the area of dynamic power management to the former. \item We determine the frequency change cost on a current server system and calculate the break-even time for frequency changes. We use this result to show that the delay specified by Intel does not provide optimal worst-case behavior. \item We show that application knowledge about execution phases or the instruction types used by individual processes can be used to improve performance by passing hints about future instruction set usage to the DVFS policy. We validate this finding through simulation of different DVFS policies on a web server workload. \item We describe a mechanism to determine at runtime whether individual processes will trigger frequency reductions due to their usage of power-intensive instructions. Unlike existing approaches, our design can reliably distinguish between all three frequency levels provided by current Intel CPUs. This information can be used as input for an improved DVFS policy to trigger frequency changes during context switches. \end{itemize} \section{Effects of AVX2 and AVX-512} \label{sec:avxeffects} Starting with the Haswell microarchitecture, which introduced the AVX2 instruction set, Intel introduced a separate maximum frequency for AVX2-intensive code segments~\cite{hackenberg2015energy}. The Skylake microarchitecture added AVX-512 instructions and a third AVX-512 frequency level~\cite{schone2019energy}. Table~\ref{tab:freqlevels} shows the maximum turbo frequency for the Intel Xeon Gold 6130 server processor. The maximum frequency depends both on the number of active cores -- with larger numbers of active cores requiring a larger frequency reduction -- and on the type of instructions executed. AVX-512 causes a particularly large frequency reduction due to the complexity of operations on 512-bit vectors. As described above, the reduced frequency is maintained longer than necessary to prevent excessive reclocking overhead. There are two situations where this delay can cause the frequency reduction to negatively affect unrelated non-AVX code and cause a significant performance reduction. First, on a system with simultaneous multithreading (SMT), if one of the hardware threads causes the frequency of the physical core to be reduced, the other hardware threads on the same core also execute at a lower frequency even if their code is not as energy-intensive~\cite{li2019corescheduling}. Second, in heterogeneous applications consisting of power-intensive and less power-intensive parts -- or if the OS frequently switches between power-intensive and less power-intensive tasks -- the delay before increasing the frequency causes reduced performance for the less power-intensive code~\cite{gottschlag19sfma}. As an example for the latter, previous work describes overhead caused by AVX-512{} in a web server workload, where the nginx web server provides up to 10\% lower performance when the SSL library uses cryptography primitives implemented with AVX-512 instructions, because unrelated web server code is slowed down following calls into the SSL library~\cite{krasnovdangers}.
We replicated this experiment; the result is shown in Figure~\ref{fig:overhead-with-ht} alongside other experiments with workloads consisting of multiple different processes, to show that the performance impact is also present in such scenarios. For these other experiments, we execute different non-AVX workloads while concurrently executing the x265 video encoder configured to use AVX, AVX2, or AVX-512 instructions. The experiments are conducted on a system with an Intel Xeon Gold 6130 processor. Our first multi-process experiment determines the impact on an interactive web server workload: We executed the nginx web server alongside the x265 video encoder and configured the wrk2 client to generate a fixed number of requests to the web server. This setup imitates the scenario where a web server is not fully utilized and the remaining CPU time is used for background batch tasks. Figure~\ref{fig:overhead-with-ht} shows the normalized CPU time required by the nginx web server to serve an unencrypted static file (\enquote{nginx+x265}). The results show a 6.6\% performance impact when the background process uses AVX2 instructions and a 21.8\% performance impact for AVX-512. As the web server is not operating at 100\% utilization, the background process is often executed in between two consecutive requests or in parallel on the other hardware thread of the same core, causing a particularly large performance impact. To show that the problem affects both interactive and batch workloads, we also execute various benchmarks from the Parsec~\cite{bienia11benchmarking} benchmark suite and the Phoronix Test Suite (PTS)~\cite{phoronixtestsuite} benchmarks in parallel to the x265 video encoder. As shown in Figure~\ref{fig:overhead-with-ht}, all these benchmarks are also affected by the frequency changes caused by x265. The Parsec benchmarks experience an average performance reduction of 10.0\% for AVX-512. Similarly, the PTS benchmarks are slowed down by 12.4\%. As described above, one major mechanism for slowdown that is targeted by other approaches~\cite{li2019corescheduling} is that software on one hardware thread slows down other hardware threads of the same core. To show that some of the slowdown is also experienced on systems without hyperthreading, we repeat all the benchmarks on a system with hyperthreading disabled. The results of this experiment are shown in Figure~\ref{fig:overhead-without-ht}: CPU-intensive non-interactive workloads are no longer significantly slowed down once hyperthreading is disabled, as the system does not switch between the processes often enough for frequency change delays to have a significant effect. For example, on a system with the default Linux CFS scheduler, we observe only one context switch every 10 to \SI{20}{\milli\second} for the blackscholes workload, whereas frequency increases are only delayed by less than one millisecond. Although disabling hyperthreading reduces the performance of the system and is therefore not a viable technique against the overhead caused by AVX-heavy code in these scenarios, other techniques such as core specialization~\cite{gottschlag19sfma} and core scheduling~\cite{li2019corescheduling} can make sure that, whenever possible, either both hyperthreads are executing AVX-intensive code or none of them is. Overhead caused by hyperthreading is out of the scope of this paper, though.
Instead, the goal of our approach is to reduce the overhead in applications which periodically execute short sections of AVX-512 or AVX2 code as well as in workloads which frequently switch between AVX-512 or AVX2 and non-AVX applications on a single core. From the benchmarks shown in Figure~\ref{fig:overhead-without-ht}, an example for the former is the nginx/OpenSSL benchmark, which executes AVX-512 instructions only when OpenSSL functions are called. The nginx/x265 benchmark as well as the Apache, MySQL and SQLite benchmarks from PTS, instead, trigger frequent context switches between the AVX-512-enabled background task and the benchmarked application and are therefore examples for the latter behavior. These are the benchmarks which show overhead even when hyperthreading is disabled: For AVX-512, the nginx benchmarks are slowed down by 7.0\% on average, whereas the three PTS benchmarks are slowed down by 12.4\% on average. Due to the frequent switches between AVX-512/AVX2 and non-AVX code during these workloads, the upclocking delay implemented by the CPU's existing hardware DVFS policy is the main source of the overhead caused by AVX instructions. To isolate this overhead source and to demonstrate that improved DVFS policies are able to mitigate its effects, we conduct all further experiments in this paper with hyperthreading disabled. The assumption of CPUs without hyperthreading significantly simplifies the design of some parts of our approach. This does not mean that improved DVFS policies are inherently ineffective on systems with hyperthreading, although more research has to be conducted to identify appropriate heuristics for improved DVFS decisions. \section{Parallels to Dynamic Power Management} \label{sec:analysis} As described above, the complex frequency behavior of modern CPUs stems from the fact that it is not economically viable to cool modern CPUs when they are executing power-intensive code at their maximum frequency~\cite{huang2011scaling}. Instead, available thermal headroom is used to temporarily use higher frequencies (a form of computational sprinting~\cite{raghavan2012computational}). In this scenario, the more the energy consumption per instruction varies, the higher the thermal headroom for code executing simple instructions. Therefore, modern Intel CPUs use different turbo frequencies for different types of code, with AVX2 and AVX-512 instructions triggering a transition to significantly lower frequency levels~\cite{schone2019energy}. As shown by the registers provided by these CPUs to report the reason for frequency changes, thermal headroom is not the only factor in these frequency reductions, though: the power dissipation of the chip correlates with the current required from the power supply, and frequency reductions are also required to prevent voltage drops due to increased current draw. The frequency changes required to use the available headroom come at a cost. For example, Mazouz et al. have measured the cost of a single frequency change to be approximately \SI{10}{\micro\second} on an Intel Ivy Bridge system~\cite{mazouz2014evaluation}, and our own experiments presented in Section~\ref{sec:freqchangecost} arrive at a similar cost (between \SI{9}{\micro\second} and \SI{19}{\micro\second}) on more recent Skylake server CPUs. Therefore, increasing the frequency to use thermal headroom is only viable if the higher frequency can be applied long enough that the performance improvement makes up for the frequency change overhead.
This trade-off is similar to the problem of \emph{dynamic power management} where devices are temporarily switched off or transitioned to a low-power state in order to save energy~\cite{benini2000survey}. Here, the energy cost of the state transition means that switching devices off for only short periods of time is frequently unviable. As the operating system, however, does not know how long a device is going to stay unused, it is in general not possible to determine in advance whether shutting a device off is going to result in a net improvement. In the area of dynamic power management, significant effort has gone into developing heuristic approaches to guess when to shut down devices~\cite{benini2000survey}. One metric to measure the quality of heuristic approaches is their \emph{competitiveness} in a worst-case scenario. The competitiveness is the worst-case ratio between the energy required by the approach and the energy required by an oracle policy that can determine in advance whether shutting off a device is viable. Karlin et al.~\cite{karlin1994competitive} showed that 2-competitiveness (meaning that the approach uses at most twice as much energy) is the best deterministic algorithms can achieve. In dynamic power management, 2-competitiveness can be achieved by switching a device off after a fixed timeout. When that timeout equals the \emph{break-even time} (i.e., the period of inactivity during which the low-power state would have made up for the transition costs), the device uses at most twice as much energy if it wakes up directly after being sent to a low-power state.

Intel CPUs show a very similar behavior as they delay increasing the frequency by a fixed timeout after the CPU has stopped executing any AVX instructions~\cite{schone2019energy}. However, the fixed delay is not optimal in terms of competitiveness because, as we show in Section~\ref{sec:competitiveness}, DVFS has wildly varying break-even times in different scenarios. Neither is the DVFS policy implemented by current Intel CPUs optimal for real-world workloads, as we show in Section~\ref{sec:dvfs-policy-eval}. There are approaches that can, depending on the situation, perform better than simple heuristic approaches. For example, applications can give hints about expected future behavior to let the OS make better-informed decisions~\cite{lu2002power}, or the OS can use the deadlines of I/O requests to change the device usage pattern to save more energy~\cite{weissel2002cooperative}. Both these approaches can be applied to DVFS policies in dim silicon scenarios. In this paper, we show an example of the former approach. As software developers often know whether the application is going to execute no power-intensive code -- i.e., no AVX2 or AVX-512 -- in the near future, that information can be used by the CPU to forego the frequency change delay and immediately change frequencies for improved performance.

\section{Behavior of Intel CPUs}
\label{sec:competitiveness}

According to the optimization manual, recent Intel CPUs implement a fixed-timeout policy where the CPU waits approximately \SI{2}{\milli\second} after the last section of AVX-intensive code before increasing the frequency again~\cite[p. 2-13]{optimizationmanual}. In addition, before lowering the frequency, the core requests a power license from the package control unit (PCU), which takes up to \SI{500}{\micro\second} before granting the license.
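To make the structure of such a policy concrete, the following sketch shows the documented behavior -- immediate downclocking, upclocking only after a fixed timeout -- in plain C. This is a minimal illustration under our interpretation of the manual, not Intel's implementation; the hook \texttt{set\_frequency\_level()} is a hypothetical stand-in for the hardware.

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Frequency levels corresponding to power license levels 0-2. */
enum freq_level { LEVEL_NON_AVX = 0, LEVEL_AVX2 = 1, LEVEL_AVX512 = 2 };

#define UPCLOCK_TIMEOUT_US 2000 /* ~2 ms per the optimization manual */

static enum freq_level current_level = LEVEL_NON_AVX;
static uint64_t last_request_us;

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + ts.tv_nsec / 1000;
}

static void set_frequency_level(enum freq_level l)
{
    printf("frequency level -> %d\n", l); /* stand-in for the hardware */
}

/* Called when the core requests a power license for the instructions
 * it is about to execute: downclocking happens immediately. */
void on_license_request(enum freq_level required)
{
    if (required > current_level) {
        current_level = required;
        set_frequency_level(current_level);
    }
    if (required == current_level)
        last_request_us = now_us(); /* restart the upclock timeout */
}

/* Called periodically: upclocking only after the timeout has elapsed.
 * Simplified: real hardware may step through intermediate levels. */
void on_tick(void)
{
    if (current_level != LEVEL_NON_AVX &&
        now_us() - last_request_us > UPCLOCK_TIMEOUT_US) {
        current_level = LEVEL_NON_AVX;
        set_frequency_level(current_level);
    }
}
\end{verbatim}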
However, as shown by Schöne et al.~\cite{schone2019energy}, the behavior of the hardware does not match the documentation. Instead, the processor waits for a significantly shorter timeout (approx. \SI{670}{\micro\second} as measured in our experiments) before upclocking. We were able to confirm the observed behavior on a system with an Intel Core i9-7940X, where we measured the delay for frequency changes when executing sections of code consisting of scalar, AVX2, or AVX-512 instructions. Note that frequency reduction is triggered almost immediately when AVX2 or AVX-512 instructions are executed, as required to prevent excessive power consumption. The upclocking delay is constant, independent of the number of cores in use.

As described in the last section, maximum competitiveness in worst-case scenarios is reached when the timeout equals the break-even time, but the break-even time depends not only on the cost of the frequency transition but also on the performance advantage at the higher frequency. In this case, the frequency difference is larger if more cores are active~\cite{xeonscalableerrata}, so the performance overhead of downclocking is higher and the break-even time is shorter when more cores are active. Therefore, the policy implemented by Intel does not provide maximum competitiveness. To show the potential for improved timeout-based policies, the following sections describe experiments that measure both the frequency transition overhead and the performance impact of reduced frequencies in different situations, from which we derive the corresponding break-even times.

\subsection{Cost of Frequency Changes}
\label{sec:freqchangecost}
\input{fig/freq-reduction-overhead-results.tex}
\input{fig/freq-increase-overhead-results.tex}

One factor required to determine the break-even time is the frequency change overhead: If the cost of individual frequency changes increases, more time between consecutive changes is required in order to make up for the overhead. For the Intel Ivy Bridge architecture, Mazouz et al. determined that a CPU is stopped for approximately \SI{10}{\micro\second} during a frequency change~\cite{mazouz2014evaluation}. This pause is required to allow the new frequency to stabilize~\cite{park2013accurate}. However, in particular in the case of frequency changes caused by AVX instructions, additional factors increase the overall overhead. Therefore, and because our systems use a newer CPU architecture than the one considered by Mazouz et al., we measure the overhead of frequency changes on a system with an Intel Xeon Gold 6130 CPU. To measure the overhead due to frequency reduction caused by AVX2 and AVX-512 instructions, we execute the same sequence of such instructions twice, once when the system is already at the appropriate frequency, and once when it starts at a higher frequency so that the code triggers a frequency change. The overhead of the frequency change can be calculated as the difference of the two runtimes. The results of this experiment for all combinations of scalar, AVX2, and AVX-512 instructions are shown in Figure~\ref{fig:freq-reduction-overhead-results}, which shows significantly higher overhead than measured by Mazouz et al.~\cite{mazouz2014evaluation}. For example, a transition from the maximum frequency to the AVX2 frequency level takes \SI{17}{\micro\second} on average, whereas a transition to the AVX-512 frequency level takes \SI{24}{\micro\second}.
The reason for this increased overhead is likely the reduced IPC due to additional throttling before the frequency switch is complete~\cite{downs20gathering}. As AVX2 and AVX-512 instructions would draw excessive power at the previous higher frequency, the system temporarily employs throttling to reduce power consumption~\cite{bonen2017performing}. Note that the overhead appears to vary slightly for the different frequencies and frequency differences caused by different numbers of active cores.

Measuring the overhead of frequency increases is slightly more complex due to the large -- and, in our experiment, somewhat variable -- delay before the system restores the non-AVX frequency level. In this case, we use the technique employed by Mazouz et al. to determine frequency change costs~\cite{mazouz2014evaluation}: We start with a system running at either AVX2 or AVX-512 frequencies and repeatedly execute a short code section which consists of instructions allowing a higher frequency. We measure the runtime of the code section each time, so that frequency changes show up as spikes in the measured runtime. As other sources such as the activation of additional cores can trigger additional reductions of the maximum frequency, we simply assume that the first frequency change is the one triggered by the lack of AVX2 and AVX-512 instructions and discard any further runtime spikes. The size of the spike is assumed to be the overhead of the frequency change, which is plotted in Figure~\ref{fig:freq-increase-overhead-results}. The results closely match those of Mazouz et al.~\cite{mazouz2014evaluation} and show no variation based on the absolute frequency of the core or the magnitude of the frequency change, both of which vary with the number of active cores. Note, however, that this experiment does not consider the performance loss due to the system temporarily executing at a lower frequency while the voltage is ramped up to the level required for the frequency change~\cite{park2013accurate}. For many dynamic power management approaches, state changes can be predicted in advance, so voltage changes can likely be conducted speculatively, removing the need for such additional delays. For example, for fixed-timeout policies, the timeout can be slightly reduced accordingly.

\subsection{Performance Versus Frequency}
\input{fig/frequency-vs-performance}

The break-even time for frequency changes depends not only on the overhead of frequency transitions but also on the relative performance advantage due to the higher frequency. Whereas the performance of CPU-bound tasks is nearly proportional to the CPU frequency, the same is not true for memory-heavy workloads, as the memory latency is independent of the CPU frequency. In this work, to simplify the prototype, we assume that performance is proportional to frequency. The result of this simplification is that the break-even time is underestimated for memory-heavy applications. To quantify this error for the workloads used in this paper, we executed most of the individual applications described in Section~\ref{sec:avxeffects} -- nginx, x265, the Parsec benchmarks, and the PTS benchmarks with the exception of mysql due to the long execution time of the corresponding benchmark and sqlite due to its particularly I/O-heavy nature -- at different frequencies and measured the instructions per cycle (IPC). We executed the applications at frequencies between \SI{2.8}{\giga\hertz} and \SI{1.3}{\giga\hertz} on a system with a 16-core Intel Xeon Gold 6130 processor.
We configured the applications to use all cores of the system, except for the nginx server benchmark, where we allocated three cores to the HTTP request generator\footnote{x265 failed to fully saturate all cores due to inter-thread dependencies.}. Maximizing the number of active cores should maximize the working set of the application and should therefore maximize the impact of memory accesses on performance. Figure~\ref{fig:frequency-vs-performance} shows the results of this experiment. Counterintuitively, IPC consistently improves when the frequency is increased from \SI{2.0}{\giga\hertz} to \SI{2.1}{\giga\hertz} -- we assume this is due to the chip adapting either memory or bus frequency to the core frequency. For all other frequency ranges, higher frequency correlates with lower IPC. When comparing the IPC at \SI{2.1}{\giga\hertz} and \SI{2.8}{\giga\hertz}, the biggest difference was found for x264, which had 5.9\% higher IPC at \SI{2.1}{\giga\hertz}. This IPC difference would translate into an error of 5.9\% during break-even time calculation, which is likely low enough for the simplified model to be viable for this workload. The reason for the small IPC changes is the low cache miss rates of all these applications: The workloads trigger at most 2.03 last-level cache misses per 1000 instructions (in the case of PTS build-linux-kernel). Note that our simulation to show the viability of improved DVFS policies in Section~\ref{sec:dvfs-policy-eval} also uses the simplified performance model. However, as our experiment shows, the resulting error is negligible and does not influence our conclusions. The simulation uses the nginx web server with the configuration marked as \enquote{nginx+openssl} in Figure~\ref{fig:frequency-vs-performance}. In this configuration, the nginx web server showed less than 1\% IPC difference between \SI{2.1}{\giga\hertz} and \SI{2.8}{\giga\hertz}.

Workloads with higher cache miss ratios than the benchmarks shown in Figure~\ref{fig:frequency-vs-performance} can show a lower correlation between performance and frequency~\cite{hebbar2019impact}. While we show that improved DVFS policies in general have the potential to improve performance for workloads involving AVX2 and AVX-512 code, our simplified linear model might not be sufficient for these workloads in practice. Concrete DVFS policy implementations for such workloads might require a better prediction of the performance at different frequencies to make decisions on whether to change the CPU frequency or not. Such predictions can be made, for example, by using performance counters to determine the impact of frequency changes on the number of stall cycles~\cite{keramidas2010interval}. Further research has to be conducted to show whether DVFS policies based on such approaches are viable and provide a significant performance advantage for a wider range of workloads.
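For reference, the linear model used by our simulator can be stated in a few lines of C; the sketch below also includes the IPC-based correction term implied by the measurements above. The function names are ours, and the correction is only a first-order adjustment under the assumption that the executed instruction count is independent of the frequency.

\begin{verbatim}
/* Simplified linear performance model: for CPU-bound code, runtime
 * scales inversely with the core frequency. */
double runtime_at(double runtime_base, double f_base, double f_target)
{
    return runtime_base * (f_base / f_target);
}

/* First-order correction for memory-bound code:
 * ipc_ratio = IPC(f_base) / IPC(f_target). It is 1.0 for perfectly
 * CPU-bound code; for the workloads measured above it deviates from
 * 1.0 by at most ~5.9% (x264), which bounds the model error. */
double runtime_at_corrected(double runtime_base, double f_base,
                            double f_target, double ipc_ratio)
{
    return runtime_at(runtime_base, f_base, f_target) * ipc_ratio;
}
\end{verbatim}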
\subsection{Break-Even Time}
\label{sec:break-even-time}
\input{fig/break-even-time}

The break-even time $t_{BE}$ -- i.e., the time after which the performance increase due to increased frequencies offsets the cost to increase and decrease the frequency -- can be calculated according to the following formula: $$ p_{low} t_{BE} = p_{high} (t_{BE} - t_{o}) $$ In this formula, $p_{low}$ and $p_{high}$ are the performance at the lower and higher frequency, respectively, and $t_{o} = t_{o,d} + t_{o,u}$ is the total overhead for reducing ($t_{o,d}$) and increasing ($t_{o,u}$) the frequency, measured as the equivalent CPU time as in Section~\ref{sec:freqchangecost}. Solving for $t_{BE}$ yields $$ t_{BE} = \frac{p_{high}}{p_{high} - p_{low}} \, t_{o}, $$ which shows that the break-even time grows rapidly as the performance gap between the two frequency levels shrinks. If we insert the results from the last sections and calculate $t_{BE}$, we arrive at the times shown in Figure~\ref{fig:break-even-time}. As the performance is dominated by the frequency whereas the overhead is fairly constant, the break-even time is significantly affected by the number of active cores. For example, for a transition between AVX2 and non-AVX frequencies, the break-even time in situations with fewer than four active cores is approximately \SI{1000}{\micro\second} due to the low frequency swing of only \SI{100}{\mega\hertz} (see Table~\ref{tab:freqlevels}), whereas for more than eight cores frequency changes between 400 and \SI{500}{\mega\hertz} cause break-even times between 150 and \SI{190}{\micro\second}.

As Karlin et al.~\cite{karlin1994competitive} show, a fixed-timeout policy achieves optimal competitiveness -- in our case, minimal overhead when the system has to switch back to a lower frequency at the least opportune time -- when the timeout equals the break-even time. The timeout before the CPU increases its frequency should therefore be based on the frequency difference to achieve good competitiveness in all cases. Intel CPUs, however, only implement one fixed timeout for all core counts and instruction sets. As shown in Section~\ref{sec:avxeffects}, some applications are negatively affected by the overhead of frequency changes, which shows that an improved DVFS policy with a variable timeout based on the frequency difference can likely have a positive impact on these applications.

\section{Exploiting Application Behavior}
\label{sec:design}

While the 2-competitive fixed-timeout policy is optimal in the worst case for unpredictable workloads, it is not when the behavior of the workload is predictable, in which case earlier decisions to increase the CPU frequency can result in higher performance. In this work, we focus on two types of predictions about whether the system is going to use AVX-512{} in the near future. First, the application developer has knowledge about the structure of the application and can tell the operating system when AVX-intensive parts begin and end, which can aid workloads where one process switches between AVX-intensive code and code without power-intensive instructions. Second, the operating system can statistically determine whether a process is likely to require a reduced frequency and can change the CPU frequency during context switches in order to immediately let non-power-intensive processes profit from higher frequencies.

\subsection{Heterogeneous Applications}
\label{sec:hint_heterogeneous}

If an application consists of vectorized and non-vectorized parts and those are executed alternately -- such as the web server example in Section~\ref{sec:intro} -- the non-vectorized part is slowed down due to the frequency change caused by the vectorized part.
Often, software developers know which part of the application is vectorized and how long the execution of each part takes. In that case, assuming that a suitable hardware-software interface exists, they can notify the CPU after each vectorized code portion if the next scalar portion is likely \emph{long enough} to warrant an early frequency increase. The CPU could use that hint to immediately switch to a higher frequency. Such a hint could therefore improve performance, as the existing DVFS policy of the CPU would instead needlessly keep the frequency reduced for some time.

\subsection{Classification of Tasks}
\label{sec:hint_classification}

Even if each individual application is sufficiently uniform, it is still possible that context switches between different applications cause overhead, as an application is slowed down by the preceding AVX-enabled application as described in Section~\ref{sec:avxeffects}. For most workloads, this overhead is avoidable, as scheduler time slices are usually longer than the break-even time. During a switch from an AVX-enabled application to a non-AVX application, the scheduler should usually immediately select a higher frequency. To trigger such frequency changes, the scheduler needs a categorization of the individual processes based on their instruction set usage and their expected frequency reduction. To this end, we introduce the notion of a \emph{power score} which serves as a measure of the expected power consumption of the instruction mix executed by a process. A high power score signals that the process will likely trigger significant frequency reductions. More specifically, a power score of 1 means that the process is assumed to execute at AVX2 frequencies, whereas a power score of 2 means that the process likely causes a reduction down to AVX-512 frequency levels.

This power score could potentially be determined either via a static analysis of the application binary or via a dynamic analysis of the frequency changes at runtime. A static analysis can detect whether an executable contains any AVX2 or AVX-512 instructions that could trigger a frequency reduction. However, applications might contain such instructions even if they do not execute them frequently enough to significantly reduce the average frequency. Also, functions like \texttt{memset} make use of AVX-512 instructions, but only for inputs of certain sizes, which is hard to detect via static analysis. Overall, a static analysis is therefore bound to be unreliable. We expect dynamic analyses to yield a better estimate of the instruction set usage of individual processes, as they are able to observe the effects of the actual execution patterns within the process. Simply mapping the frequency level to the active process is, however, not accurate in situations with frequent context switches, because the delays mean that some of the time spent at lower frequencies is attributed to the wrong processes. Counting the AVX2 and AVX-512 instructions executed by the active process might be sufficient to draw conclusions about the resulting frequency requirements in most cases, but recent Intel CPUs only provide performance counters for specific types of such instructions~\cite[p. 19-20f]{intelmanualvol3}. In any case, though, more accurate statistics would be possible if the processor provided the operating system with information about whether the conditions for each frequency level were fulfilled at each point in time, for example via appropriate performance counters.
Current hardware does not provide such performance counters, either. As a method to collect reliable information about frequency requirements and to determine the processes responsible for frequency reductions, we therefore suggest distinguishing between two cases based on the time between subsequent scheduler invocations. If the time between subsequent scheduler invocations is significantly longer than the frequency increase delay of \SI{670}{\micro\second}, the scheduler can sample the CPU frequency level and can directly attribute the frequency to the last process, as any influence of its predecessor on the CPU frequency has ended. To determine the CPU frequency level, we configure the performance counters to track the cycles spent at \emph{power license levels} 0, 1, and 2, which correspond to the frequency levels for non-AVX, AVX2, and AVX-512 code, respectively~\cite{optimizationmanual}. If the time between subsequent scheduler invocations is shorter than the frequency increase delay, such an approach would risk misattributing frequency changes. In this case, our main observation is that if the frequency is reduced during the execution of a process, then that process is most likely responsible for the change. For short periods of execution of a process, we therefore only attribute the resulting frequency to the process in case of a frequency change during the period. In some rare cases, however, frequency changes can occur during the execution of a process that did not trigger the change -- most likely due to delays during frequency selection as documented by Intel~\cite{optimizationmanual}. Therefore, the power score is calculated as a moving average over all CPU frequency samples attributed to a process to reduce the impact of occasional misattribution. The following steps are conducted to calculate the power score of the processes: \begin{enumerate} \item Initially, the power score of new processes is set to 0, i.e., the system assumes that new processes will not use AVX-512 or AVX2. \item At each scheduler invocation, we detect the current power license level by sampling all power license level performance counters twice in a row. The counter that is incremented during the short time in between indicates the current frequency level. \item We compare the level during two consecutive context switches. If the levels match, the power license did not change. In this case, for short CPU bursts, the current process might not have had enough time to have an impact on the power license, so the power score is not updated. \item If context switches are more than \SI{1}{\milli\second} apart -- longer than the frequency increase delay, as reasoned above -- or if the power license drops below or rises above the current power score, the power score of the process is updated as an exponential moving average of the sampled power licenses. Assuming $S_{t-1}$ is the old power score and $L_t$ is the new power license, the new power score is $S_t = 0.2 L_t + 0.8 S_{t-1}$. \end{enumerate} The resulting power score indicates the potential frequency reduction caused by the process. The dynamic analysis of frequency changes can be combined with the results of a static analysis of the executable -- e.g., by overriding the score to be 0 if the executable contains neither AVX2 nor AVX-512 instructions -- and with manual instrumentation as described in Section~\ref{sec:hint_heterogeneous}, in which case hints from the developer override the automatically determined power score.
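Condensed into code, the update logic might look as follows. This is a sketch of steps 2--4, with the license-versus-score comparison simplified to a change test; an in-kernel implementation would additionally use fixed-point arithmetic for the moving average, as the floating-point unit is not generally usable in kernel context.

\begin{verbatim}
#include <stdbool.h>
#include <stdint.h>

#define EMA_WEIGHT    0.2         /* weight of the newest sample */
#define LONG_SLICE_NS 1000000ull  /* 1 ms, longer than the upclock delay */

struct task_power {
    double power_score;  /* 0 = non-AVX, 1 = AVX2, 2 = AVX-512 */
};

/* Called when the scheduler switches away from a task.
 * license:      power license level sampled now (0, 1, or 2)
 * prev_license: level sampled at the previous context switch
 * slice_ns:     CPU time the task just spent on the core */
void update_power_score(struct task_power *t, int license,
                        int prev_license, uint64_t slice_ns)
{
    bool changed = (license != prev_license);

    /* Short slice without a license change: the task may not have
     * had enough time to influence the frequency, so nothing is
     * attributed to it (step 3). */
    if (slice_ns <= LONG_SLICE_NS && !changed)
        return;

    /* Exponential moving average dampens occasional misattribution
     * (step 4). */
    t->power_score = EMA_WEIGHT * (double)license
                   + (1.0 - EMA_WEIGHT) * t->power_score;
}
\end{verbatim}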
Note that with hyperthreading, the frequency of a core is determined by two programs at once. Thus, this technique only works on systems with deactivated hyperthreading and on systems which always schedule the same program on both hardware threads of a core, as recently suggested for the Linux kernel~\cite{corbet2019corescheduling}. On other systems, the hardware has to be modified to provide a more reliable source of information about the energy consumption of the instructions executed by individual processes.

\subsection{Using Hints For DVFS}

Once predictions about the instruction set use are available, the system can use this information to improve performance. When the code running on a core -- i.e., on all hardware threads in the case of a system with hardware multithreading -- indicates that no power-intensive instructions are going to be executed in the near future, for example via the mechanisms presented in Sections~\ref{sec:hint_heterogeneous} or~\ref{sec:hint_classification}, the system can eagerly increase the frequency when it is not already at the highest level possible for the expected instructions. Ideally, the DVFS policy should be implemented in the CPU to be able to provide quick reactions to changing instruction usage and to prevent power budget violations, so any hint about future instruction usage needs to be communicated to the CPU using an appropriate software-hardware interface. For example, the operating system or the application software could temporarily configure a different frequency change timeout depending on the type of executed code, to force earlier frequency changes or to prevent any changes.

\subsubsection{Viability on Current CPUs}

Current hardware does not provide any such interface. It does, however, provide a mechanism to manually set the CPU frequency, which can be used to implement a wide range of DVFS policies in software. For the dim silicon scenario described in this paper, the limitations of the hardware prevent both practical software-based implementations of DVFS policies and limited implementations to estimate the performance of hardware-based implementations. Any practical implementation of a DVFS solution for AVX-512 or AVX2 code is prevented both by the inability to detect problematic AVX-512 or AVX2 code and by the delay of manual frequency changes. First, conservative detection of problematic code is necessary so that the OS knows when frequency reductions are required. Our approach in Section~\ref{sec:hint_classification} is not usable, as it only results in an approximate long-term classification of applications. In contrast, conservative short-term estimation based on register set usage can detect any access to 512-bit and 256-bit vector registers but will often select lower frequencies than necessary, as we show in our evaluation in Section~\ref{sec:categorization-eval}, leading to reduced performance. Second, software-based DVFS policy implementations require the ability to change the frequency at a precise point in time, yet current CPUs delay frequency changes significantly. As described by Hackenberg et al.~\cite{hackenberg2015energy}, the frequency selection logic of Intel CPUs starting with the Haswell microarchitecture only allows frequency changes once every \SI{500}{\micro\second}, so any frequency change request is delayed until the end of the next such \SI{500}{\micro\second} window.
The immediate throttling of AVX-512 instructions~\cite{downs20gathering}, however, shows that immediate power reduction is necessary for stability, so such delays are unacceptable. These limitations not only prevent practical software solutions but unfortunately also prevent the construction of a prototype based on existing hardware to evaluate the performance of hardware implementations. Such a prototype would not necessarily have to be able to ensure system stability, but would have to trigger frequency changes in a way that results in equal performance compared to a complete implementation. As, from the point of view of the OS, the frequency change delay often appears to be random and evenly distributed, a na\"ive approach might assume that the average delay of frequency increases cancels out the average delay of frequency reductions. However, for short sections of AVX or non-AVX code, both the frequency increase and the decrease might fall within the same \SI{500}{\micro\second} window, in which case our experiments showed that no frequency change occurs at all.

In this paper, we suggest improved DVFS policies as a method to reduce the overhead caused by AVX2 and AVX-512. As we cannot use existing hardware to conduct a performance evaluation, we are limited to demonstrating the performance impact through simulations and microbenchmarks as shown in Section~\ref{sec:dvfs-policy-eval}.

\section{Evaluation}
\label{sec:evaluation}

As described in the last section, this paper proposes using hints from the application or the operating system to provide improved frequency scaling. Our approach consists of two main pieces, namely the classification of the processes -- or, alternatively, hints from the application developer -- and a modified DVFS algorithm that takes those hints into account. For existing processors, it is impossible to build a complete implementation of this design, as the existing DVFS policy implemented by the CPU cannot be extended as required. Deactivating all AVX-induced frequency changes and completely reimplementing the policy in software is impossible due to the latency of software-triggered frequency changes, which can be as long as \SI{500}{\micro\second}. Our evaluation is therefore limited to qualitatively showing that the individual components are functional and that application-directed DVFS can have an advantage over the existing policy.

\subsection{Categorization of Processes}
\label{sec:categorization-eval}
\input{fig/categorization-eval}

The main goal of the process classification mechanism described in Section~\ref{sec:hint_classification} is to be able to detect the required power license of individual processes even if they are running in a heterogeneous multi-process workload where the effects of one process on the CPU frequency might shadow the effects of another process. To show that the mechanism fulfills this goal, we constructed a prototype based on Linux 5.2. We modified the kernel's completely fair scheduler (CFS) and inserted the power license detection code in the main scheduler function \texttt{\_\_schedule()}. Our implementation uses the Linux perf framework to read the power license performance counters. We let our prototype estimate the power score of the x265 video encoder running in isolation and configured to use different instruction sets.
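The counter setup of the prototype can be reproduced from user space with \texttt{perf\_event\_open()}; the sketch below opens one of the three counters. The raw event encoding (event \texttt{0x28}, umask \texttt{0x07} for \texttt{CORE\_POWER.LVL0\_TURBO\_LICENSE}; umasks \texttt{0x18} and \texttt{0x20} for levels 1 and 2) is our reading of the documentation for Skylake server CPUs and should be verified for other microarchitectures; our in-kernel prototype uses the kernel-internal perf API instead.

\begin{verbatim}
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

/* Open a counter for the cycles spent at one power license level. */
static long open_license_counter(unsigned int umask, int cpu)
{
    struct perf_event_attr attr = {
        .type   = PERF_TYPE_RAW,
        .size   = sizeof(attr),
        .config = (umask << 8) | 0x28, /* CORE_POWER.LVLn_TURBO_LICENSE */
    };
    /* pid = -1, cpu = n: count for all tasks on one CPU (requires
     * appropriate privileges). */
    return syscall(SYS_perf_event_open, &attr, -1, cpu, -1, 0);
}

int main(void)
{
    int fd = open_license_counter(0x07, 0); /* license level 0, CPU 0 */
    if (fd < 0) { perror("perf_event_open"); return 1; }

    uint64_t before, after;
    read(fd, &before, sizeof(before));
    usleep(1000);                           /* sample window */
    read(fd, &after, sizeof(after));
    printf("license-0 cycles: %llu\n",
           (unsigned long long)(after - before));
    close(fd);
    return 0;
}
\end{verbatim}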
To show that our prototype is able to correctly distinguish between different processes executing on the same system and is able to attribute frequency changes to the correct process, we also executed the Apache benchmark from the Phoronix Test Suite as well as the swaptions benchmark from Parsec in parallel with x265. The two applications were configured to share the same set of cores without any restrictions to scheduling. Note that we specifically selected an interactive benchmark as well as a batch workload to show that the classification works with both. Table~\ref{tab:categorization-eval} shows both the expected power score for the applications -- we expected our prototype to classify x265 according to the instruction set used, and neither of the other two benchmarks used significant amounts of vector instructions -- as well as the estimated power score from our prototype, averaged over the runtime of the application. The first three rows show that x265 was correctly classified in all cases, except for some uncertainty if neither AVX2 nor AVX-512 instructions were used. The next two table rows then show the results for the mixed scenarios. In both cases, our prototype can correctly identify x265 as the process responsible for the frequency reduction. For x265 executed alone, we compared the performance of our prototype to a stock Linux kernel and were not able to measure any statistically significant performance overhead.

We compare our approach to the state-of-the-art technique available in the Linux kernel. Linux provides the time elapsed since the last use of AVX-512{} as part of the \emph{arch\_status} file in the proc file system~\cite{linuxproc}. The time since the last use of AVX-512 is calculated by checking the state of the FPU registers at each context switch. Like our approach, this mechanism is able to detect AVX-512 usage in the benchmarks described above, as shown in the upper half of Table~\ref{tab:categorization-eval}. The approach found in the Linux kernel has a significant drawback, though, as the use of specific FPU registers is only loosely connected to the resulting frequency change. For example, a dense sequence of multiplication instructions on 512-bit vector registers causes the CPU to transition to the lowest frequency, whereas other instructions only trigger the intermediate \enquote{AVX2} frequency. Therefore, in a workload consisting of processes showing the former behavior as well as processes of the latter type, the time since the last 512-bit register usage cannot be used to identify the processes responsible for a frequency reduction. We demonstrate this effect by executing a sequence of 512-bit and 256-bit multiplications and additions both with our approach and on an unmodified Linux 5.5 kernel. The results shown in the lower half of Table~\ref{tab:categorization-eval} show that our prototype is correctly able to detect the three different frequency levels caused by different types of instructions, whereas the stock Linux kernel is only able to detect whether 512-bit registers are used. To show that the problem also affects real-world workloads, we execute a web server benchmark using nginx and OpenSSL similar to the one described in Section~\ref{sec:avxeffects} and measure the average time since the last AVX-512 usage as determined by the Linux 5.5 kernel on a system running Fedora 30.
We let the nginx web server serve a static file with compression at runtime and use OpenSSL compiled with either AVX2 or AVX-512 instruction support for TLS encryption. As shown above, the web server provides significantly higher performance when using AVX2 instructions due to the resulting higher frequencies. Even in the AVX2 case the system uses 512-bit registers, though, as the C library provides AVX-512 variants of \texttt{memset()}, \texttt{memmove()}, and \texttt{memcpy()}. Therefore, the stock Linux kernel detects AVX-512 usage in both cases, with similar reported average times since the last usage of 512-bit registers. Note that the implementation tests whether registers are in use only during context switches. Different scheduling causes large variation in the resulting values, making a quantitative comparison for such experiments difficult.

\subsection{Potential of Eager Frequency Changes}
\label{sec:dvfs-policy-eval}

Once it is known which parts of the system use power-intensive instructions -- either via manual annotation as described in Section~\ref{sec:hint_heterogeneous} or via automatic detection as described in the previous section -- this information can be used to optimize performance. Whereas other approaches perform core specialization to separate AVX-512 code from non-AVX code~\cite{li2019corescheduling,gottschlag19sfma}, we, as described in Section~\ref{sec:design}, suggest that improved DVFS policies can also significantly reduce the overhead caused by AVX-512 instructions and similarly power-intensive instruction sets. In particular, we suggest that the delay for frequency increases as implemented by recent Intel CPUs is unnecessary if the system can predict that the software executed in the near future does not require power-intensive instructions.

\subsubsection{Methodology}
\label{sec:eval-methodology}
\input{fig/eval-methodology}

The most direct method to show the potential of improved DVFS policies would be to compare the performance of a benchmarked application when using a fixed-timeout policy such as the one implemented by the processor to the same benchmark instrumented to change the processor frequency at points in the program selected by the developer. However, recent Intel CPUs delay frequency change requests by up to \SI{500}{\micro\second}~\cite{hackenberg2015energy}, making it impossible to precisely specify the points in the program at which frequency changes occur. Therefore, our evaluation relies on a simulation of different DVFS policies based on a trace generated while running a web server benchmark (Section~\ref{sec:eval-simulation}) and uses a microbenchmark to demonstrate the potential performance impact of a single eager frequency change (Section~\ref{sec:eval-microbenchmark}) and to check the accuracy of the simulation.

The following experiments were conducted on a system with an Intel Core i9-7940X processor, with the simulation configured to match this system. This processor was selected because, as it is designed for overclocking, it allows configuration of the \emph{AVX offsets} which specify the frequency reduction caused by AVX2 and AVX-512 instructions.
For tests to determine the baseline performance of the system, we configured the offsets to match the frequencies reported in news articles~\cite{spille2017skylake}, where the base frequency of the processor is reported to be \SI{3.1}{\giga\hertz} and the frequencies for AVX2 and AVX-512 code are \SI{2.7}{\giga\hertz} and \SI{2.4}{\giga\hertz}, respectively, providing frequency ratios similar to those of server processors. Note that no authoritative information about AVX2 and AVX-512 frequencies is found in official Intel documentation and that mainboards such as ours frequently provide non-default AVX offsets. With minimal AVX offsets, the AVX2 and AVX-512 frequencies are both \SI{3.0}{\giga\hertz}, so the AVX frequency reduction cannot be disabled completely. We discuss the effect of this minimum frequency change where applicable below. All experiments were executed with Turbo Boost disabled and with C-states limited to C1 in order to reduce variance in the measurement results.

\subsubsection{Web Server Simulation}
\label{sec:eval-simulation}

For our simulation experiments, the workload used is the nginx web server example from Section~\ref{sec:avxeffects}. We configure the web server to serve a single static file using gzip compression, and we encrypt HTTP requests and replies using the OpenSSL library. The library is configured to vectorize encryption and decryption using AVX-512 instructions, which in other experiments has resulted in a 10\% slowdown. We instrument the web server to record the times when the OpenSSL functions for encryption and decryption are called and when they return. When generating the log of the OpenSSL function calls, we execute the benchmark with minimal AVX offsets. Although the resulting frequencies would not be stable and would result in frequent system crashes with all cores utilized, this setup yields more representative timing input for the simulator, as the simulator itself is supposed to slow down the AVX-512 portions of the simulated workload. To ensure system stability and to simplify the simulation, the web server is only executed on a single core. We do not expect individual web server threads to behave significantly differently when additional web server threads are placed on the other cores of the system.

The resulting application trace contains a list of periods where the system is assumed to execute only AVX-512 code (the function calls into OpenSSL) alternating with periods where the system is assumed not to execute any AVX-512 or AVX2 instructions. We feed this trace into a simple model-based simulator as shown in Figure~\ref{fig:eval-methodology} to estimate the application runtime resulting from different DVFS policies. The simulator applies a DVFS policy to the trace and dilates the time during periods where the CPU would be executing at a lower frequency. During the simulation, to get results more representative of a server scenario, we assume that most of the cores are active and assume a correspondingly large frequency reduction whenever AVX-512 code is executed. We implement fixed-timeout policies with the timeout used by Intel processors as well as with a timeout of \SI{180}{\micro\second}, which was shown to be more competitive in Section~\ref{sec:break-even-time}.
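The core of such a trace-driven simulator is compact; the sketch below shows how a fixed-timeout policy dilates the trace. The trace format, the 30\% frequency-related slowdown, and the \SI{16}{\micro\second} per-change penalty are simplifications relative to our actual implementation and are meant as illustrative assumptions only.

\begin{verbatim}
#include <stdbool.h>
#include <stddef.h>

/* One trace entry: a program phase, flagged as AVX-512 or not. */
struct phase { double len_us; bool avx512; };

#define SLOWDOWN       1.30  /* relative runtime at the AVX-512 level */
#define CHANGE_COST_US 16.0  /* CPU time lost per frequency change */

/* Replay the trace under a fixed-timeout policy: downclock instantly
 * at the start of an AVX-512 phase, upclock only once a non-AVX phase
 * has outlasted the timeout. Returns the simulated total runtime. */
double simulate(const struct phase *t, size_t n, double timeout_us)
{
    double total = 0.0;
    bool low_freq = false;

    for (size_t i = 0; i < n; i++) {
        if (t[i].avx512) {
            if (!low_freq) { total += CHANGE_COST_US; low_freq = true; }
            total += t[i].len_us * SLOWDOWN;
        } else if (low_freq && t[i].len_us > timeout_us) {
            /* Timeout elapses mid-phase: the first part runs slow,
             * then the frequency is raised for the remainder. */
            total += timeout_us * SLOWDOWN + CHANGE_COST_US
                   + (t[i].len_us - timeout_us);
            low_freq = false;
        } else if (low_freq) {
            total += t[i].len_us * SLOWDOWN;  /* stays downclocked */
        } else {
            total += t[i].len_us;
        }
    }
    return total;
}
\end{verbatim}

A developer-directed policy is simulated analogously, except that \texttt{low\_freq} is cleared immediately at phase boundaries annotated with a hint.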
As an example of a policy based on developer input, we also implement a policy which only increases the frequency when the last packet of an HTTP request was received and decrypted, which we identify by the return value of the corresponding OpenSSL function call\footnote{A more generic and robust implementation would be to instrument the HTTP request parsing logic to increase the frequency whenever the end of an HTTP request is detected. Our implementation suffices to show that the approach is generally possible.}. After this call, the web server processes the request and takes a significant amount of time before any further AVX-512 code is executed when the HTTP reply is sent, so at this point eager frequency changes are most likely to be beneficial for application performance. For all the policies, the simulator assumes a performance impact of \SI{16}{\micro\second} per frequency change, similar to the values determined experimentally in Section~\ref{sec:freqchangecost}.

The simulation result shows that a lower timeout than what is used by Intel CPUs results in 2.9\% higher performance in the simulated scenario. With a lower timeout, the policy can exploit shorter non-AVX program phases and wastes less time at lower frequencies throughout the program. The resulting performance improvement outweighs the (simulated) overhead of the larger number of frequency changes. Even though the difference is small, the result shows that the timeout does have a measurable impact on application performance. The developer-directed DVFS policy performed even better, with a 3.9\% performance improvement compared to the policy implemented by Intel CPUs, as the policy was able to completely mitigate overhead due to low CPU frequencies during the longest non-AVX phases of the program. While this improvement might seem minor, it covers most of the 5.7\% overhead caused by AVX-512 for this workload as shown in Figure~\ref{fig:overhead-without-ht}. Workloads with more frequent AVX-512 phases might benefit more from improved policies. In addition, the policy did not increase the frequency during some other non-AVX phases where a frequency change would have been beneficial, showing that a carefully optimized prototype might achieve higher performance.

\subsubsection{Maximum Potential per Frequency Change}
\label{sec:eval-microbenchmark}
\input{fig/frequency-change-potential-eval}

When looking at a single developer-directed eager frequency change, the simulation resulted in a CPU time saving of \SI{195}{\micro\second} for sufficiently long stretches of non-AVX code compared to a fixed timeout of \SI{670}{\micro\second} as implemented by current Intel CPUs, as the CPU was operating 30\% faster during this time. To show that the assumptions made in our simulator yield realistic results, we validate this value against measurements based on a simple microbenchmark. The microbenchmark first executes a series of AVX-512 instructions and then executes a fixed amount of non-AVX instructions. The number of instructions is chosen so that they take longer than the frequency change timeout implemented by the CPU. We measure the time required for the non-AVX code section to determine the impact of the frequency change caused by the preceding AVX-512 code in different configurations. All experiments are repeated 1000 times.
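The microbenchmark follows the pattern sketched below. The loop counts are placeholders that were tuned so that the scalar section outlasts the CPU's upclocking timeout; the code assumes a compiler with AVX-512 intrinsics support (compile with \texttt{-mavx512f}).

\begin{verbatim}
#include <immintrin.h>
#include <x86intrin.h>  /* __rdtsc() */
#include <stdio.h>

#define REPS 1000

int main(void)
{
    volatile double sink = 0.0;
    __m512d v = _mm512_set1_pd(1.0);

    for (int r = 0; r < REPS; r++) {
        /* AVX-512 section: dense 512-bit multiplies trigger the
         * transition to the lowest frequency level. */
        for (int i = 0; i < 100000; i++)     /* placeholder count */
            v = _mm512_mul_pd(v, v);

        /* Timed non-AVX section: long enough to span the upclocking
         * timeout, so the delayed frequency increase shows up in the
         * measured runtime. */
        unsigned long long start = __rdtsc();
        double x = 1.0000001;
        for (int i = 0; i < 5000000; i++)    /* placeholder count */
            x *= 1.0000001;
        unsigned long long stop = __rdtsc();

        sink += x + _mm512_cvtsd_f64(v);     /* keep results alive */
        printf("%llu\n", stop - start);
    }
    (void)sink;
    return 0;
}
\end{verbatim}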
First, to measure the overall impact of such frequency changes on the CPU, we compare the average time at default frequencies (Figure~\ref{fig:frequency-change-normal-offsets}) with the average time with minimal AVX offsets (Figure~\ref{fig:frequency-change-minimal-offsets}). Our experiment shows that with minimal frequency changes the code executes \SI{130}{\micro\second} faster. In this configuration the AVX-512 code still reduces the CPU frequency by \SI{100}{\mega\hertz} as described in Section~\ref{sec:eval-methodology}, and the measured runtime still includes the overhead of the corresponding frequency change, which needs to be taken into account when comparing the values with the model used for our simulation.

Second, we manually insert frequency changes into our prototype so that the frequency is reduced when the AVX-512 code starts and is immediately increased when the non-AVX code starts. Note that, as described in Section~\ref{sec:eval-methodology}, frequency changes are applied with a random delay of up to \SI{500}{\micro\second}. Therefore, for this experiment, we do not take the average time but instead take the 5th percentile, as this value represents the situation when an optimized DVFS policy implementation almost immediately triggers frequency changes. In this experiment, we measure a runtime for the non-AVX code which is \SI{32}{\micro\second} slower than the result with minimal frequency changes, but \SI{98}{\micro\second} faster than with regular frequency changes (Figure~\ref{fig:frequency-change-manual-reclocking}). The performance is slightly lower than in the experiment with minimal AVX offsets because the benchmark triggers not one but two frequency changes -- one by the hardware due to the \SI{100}{\mega\hertz} reduction described above, and one manual frequency change to simulate the DVFS policy. Apart from this overhead and minor overhead due to the additional system calls, the runtime mostly matches the optimal case, which supports our model that eager frequency changes can mitigate most of the overhead caused by AVX instructions.

However, the absolute runtime differences are lower than determined by the simulation. As described above, two potential reasons for the deviation are the larger number of frequency changes as well as some remaining frequency reduction. As shown in Figure~\ref{fig:freq-increase-overhead-results}, the additional frequency change costs approximately \SI{10}{\micro\second}, and an expected 3\% performance overhead due to the \SI{100}{\mega\hertz} frequency difference costs another \SI{20}{\micro\second}. While the measured results mostly match our model when taking these effects into account, further analysis of the CPU behavior has to be conducted to provide a better quantitative model of the performance in similar situations.

\section{Discussion}
\label{sec:discussion}

In this paper, we showed that the fixed-timeout policy implemented by recent Intel CPUs for AVX frequencies yields less-than-optimal average processor frequencies for heterogeneous workloads. We also argue that better timeouts and developer-directed frequency changes can improve performance. Even though our evaluation lacks experiments to directly demonstrate the effects on real-world workloads, the estimate generated by our simulation indicates that such a performance improvement is to be expected. This basic result opens up a number of further research questions which we will discuss in the following sections.
\subsection{Hardware Interfaces}

In Section~\ref{sec:eval-methodology}, we show why the frequency change delays on current Intel CPUs prevent constructing a full prototype demonstrating our approach. Even if frequency changes were triggered instantly, though, a software-only DVFS policy implementation would not be viable for two reasons: First, the CPU would still need to be able to autonomously reduce power consumption when executing AVX-512 instructions to ensure system stability, for example, by reducing the frequency or applying other forms of throttling. Second, not all applications in the system would be modified to make use of developer-directed frequency scaling, making a hardware fallback necessary. If the DVFS policy is implemented in hardware, a software-hardware interface is required to influence policy decisions. We propose the combination of two such interfaces: \begin{enumerate} \item \textbf{Configurable frequency change delay:} As we show, the problem of AVX-induced frequency changes is similar to the dynamic power management problem, and the main decision is whether to immediately increase the frequency when possible or whether to wait or not increase the frequency at all. While it would be possible to tell the CPU to immediately increase the frequency after the next section of AVX code, we expect such an interface not to be viable in many situations, because the boundaries of AVX-intensive program execution phases are not well defined and variations in the program's control flow might cause unnecessary frequency changes. Instead, we suggest an interface to manually set a different frequency change timeout for individual parts of the program -- i.e., until the application manually reverts the change or sets a different timeout -- to allow applications to enable eager frequency changes in certain situations. \item \textbf{Forced immediate frequency change:} In addition, the CPU should provide an interface to immediately increase the frequency to the maximum frequency, for use by the operating system to increase the frequency during context switches when it is known that the next task is unlikely to use AVX-512 or AVX2. \end{enumerate} Further work has to be conducted to test whether these interfaces are sufficiently flexible to implement a wide range of DVFS policies in software.

\subsection{Hardware Multithreading}

One significant limitation of our work is that all our experiments were conducted with a system in mind that does not use hardware multithreading. On a system with hardware multithreading, the CPU frequency has to be reduced when either of the threads executes AVX instructions, thereby limiting the potential performance advantage of developer-directed approaches, as it is hard to predict when another, completely unrelated hyperthread will affect the frequency. Also, as shown in Section~\ref{sec:avxeffects}, on systems with hyperthreading many additional types of workloads experience slowdown due to frequency reductions. Despite the differences, improved DVFS policies might be viable, and their effectiveness might even be amplified as more code is affected by frequency reductions. More research should be conducted to create a statistical model of the CPU frequency selection in systems with hardware multithreading and to develop suitable DVFS policies.

\subsection{AVX Overhead Profiling}

In our controlled experiment, we used a benchmark that had a clearly defined performance metric.
In general it is, however, not always clear whether the overhead caused by AVX-512 is large enough to warrant the usage of techniques to reduce it, and it is not always clear whether these techniques are successful. In particular when techniques have the potential to cause additional overhead -- for example, due to increased numbers of frequency changes -- it would be beneficial to be able to profile a system to estimate the impact of AVX-512 on performance. The result of such a profiler could also be used to implement closed-loop policies. For example, the system could repeatedly try out different DVFS policies and select among them depending on the resulting performance change. The performance counters on current CPUs, however, cannot be used to construct such a profiler, as they can only be used to count cycles spent at reduced frequencies but do not provide sufficient information about how long the reduced frequencies are actually required. In particular, the performance monitoring units of these CPUs cannot be used to detect all executed AVX2 and AVX-512 instructions, as they can only count floating-point instructions. Instead, we envision an approach which periodically samples the frequency of the system, pauses the system to let the CPU switch back to the highest possible frequency, and then checks whether the system will immediately switch back to a lower frequency when the workload is continued. The latter check determines whether a frequency reduction is required due to ongoing AVX code or whether the reduction represents avoidable overhead. Further experiments have to determine the accuracy of such an approach, and further work has to be conducted to show whether modified hardware-software interfaces can provide a more accurate profiling mechanism with lower CPU time overhead.

\section{Related Work}
\label{sec:relwork}

This paper presents improved DVFS policies as a method to reduce the overhead of the frequency reduction caused by AVX and AVX-512 instructions on recent Intel CPUs. Other approaches to this and similar problems have used core specialization or have modified the application to reduce the impact of varying power consumption and of frequent frequency changes.

\subsection{Core Specialization}

Another method to limit the performance impact of AVX and AVX-512 code on unrelated non-AVX code is to place AVX and non-AVX parts of the workload on separate sets of cores. As performance problems occur when non-AVX code is executed on the same core following AVX code which reduced the frequency, specialization of cores can prevent such overhead. Approaches for core specialization either targeted heterogeneous programs consisting of AVX and non-AVX code within one process~\cite{gottschlag19sfma} or targeted workloads consisting of AVX and non-AVX processes~\cite{li2019corescheduling}. The former detects the usage of AVX instructions either by instrumentation inserted by the developer or by reconfiguring the CPU to trigger exceptions when executing AVX instructions~\cite{gottschlag19sfma,gottschlag2020automatic}. Based on this information, individual threads are migrated between cores to concentrate the AVX part of the program on as few cores as possible. The latter technique, which is targeted at multi-process workloads, instead relies on heuristics to identify processes using AVX-512 instructions and modifies the scheduler to prevent scheduling an AVX-512 and a non-AVX task on hardware threads of the same core at the same time~\cite{li2019corescheduling}.
This approach currently uses the Linux \texttt{arch\_status} interface, which only gives a rough estimate of AVX-512 usage. In this paper, we present a method to identify applications which cause frequency reductions with higher accuracy. Note that all these approaches can cause significant performance overhead themselves. Task migrations can increase cache miss rates, and restricting scheduling of different processes on the same core at the same time can cause significant overhead with some workloads~\cite{corbet2019corescheduling}. We present a technique which might provide advantages in situations where other approaches cause too much overhead.

The fact that co-scheduling applications on the hardware threads of a single core can cause varying overhead depending on the type of the applications has been observed by other works before, and many scheduling techniques have been developed to improve the performance of SMT systems. For example, existing approaches use sampling-based techniques~\cite{snavely2000symbiotic}, cache conflict detection~\cite{settle2004architectural}, or performance counters~\cite{el2006compatible,mcgregor2005scheduling} to determine whether two tasks are suited for parallel scheduling on the same physical core. We describe a similar approach which uses performance counters to identify tasks requiring execution at reduced frequency and which can likely be used for improved co-scheduling of AVX-512 applications as described above.

\subsection{Profile-Guided Software Modifications}

The approach in this paper is designed for applications which are either only available in binary form or which can benefit from AVX2 and AVX-512 instructions. If a program only makes use of such instructions in very short execution phases, those parts could alternatively be rewritten to use instructions with lower power consumption. Kumar et al.~\cite{kumar2014efficient} use such an approach to improve the efficiency of power-gating the processor's SIMD unit. In this scenario, devectorizing parts of the program reduces the speedup caused by SIMD instructions, but also reduces the power-gating overhead. The authors use a profiler to determine the SIMD instruction usage in individual parts of the program. As static recompilation based on this information is problematic -- the profiling results are only accurate for specific input data -- the authors integrate the profiler into a system which uses dynamic translation at runtime to devectorize those parts which only rarely use SIMD instructions. Such an approach could likely be applied to AVX-512 to improve average CPU frequencies, although hardware modifications would be required -- current CPUs can only count floating-point AVX-512 instructions, but not integer operations~\cite[p. 19-20f]{intelmanualvol3}. Even with such hardware changes, it is not possible to use the approach with existing ahead-of-time compilers, though. In our work, we explore techniques usable within the existing software environment. Roy et al.~\cite{roy2009framework}, instead, suggest a similar technique that uses information from dynamic profiling to insert static power management code into an application at compile time. Their approach inserts instructions for power gating of parts of the processor in order to save energy. A similar approach, however, could potentially be used to let the application guide frequency selection decisions of the processor.
\section{Conclusion} \label{sec:conclusion} Modern Intel CPUs reduce their frequency whenever power-intensive AVX2 or AVX-512 instructions are executed to prevent violating power limits. The frequency is only increased again after a fixed timeout has elapsed, in order to prevent excessive numbers of frequency changes. This behavior reduces the performance for heterogeneous workloads where code sections with and without such AVX instructions alternate, as parts of the latter are executed at a lower frequency than necessary. We show the similarity between this behavior and mechanisms from dynamic power management. We show that the constant delay before increasing the frequency is not optimal in terms of worst-case competitiveness and show how the delay should depend on the magnitude of the frequency change. We also sketch how information from the OS or the developer can be used to inform the CPU about future system behavior so that the CPU can implement more efficient DVFS policies. Although we do not have a complete implementation due to constraints of the hardware, we show that it is possible to reliably determine whether an application will cause frequency changes and we show that eager frequency changes based on such information about the workload can improve performance. \subsection{Future Work} \label{sec:futurework} Although we show that an oracle-style DVFS policy can improve performance, it remains to be seen whether other approaches from the area of dynamic power management can be applied as well. In particular, some shutdown strategies achieve lower power consumption compared to the simple fixed-timeout policy even without application-level knowledge. In addition, due to hardware constraints, we do not present any complete implementation of our approach. We plan to construct a testbed for other DVFS policies and to use it to evaluate different hardware-software interfaces which would allow input from the operating system or from applications to affect hardware-controlled frequency scaling.
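To make the profiling loop envisioned in the discussion concrete, the following minimal sketch pauses a workload, lets the core settle back towards its nominal frequency, and then checks whether the frequency immediately drops again once the workload resumes. It assumes the Linux cpufreq sysfs interface and a workload identified by its process ID; the settle times and the nominal frequency are illustrative parameters rather than validated choices, so this is a starting point for experiments, not a finished tool.
\begin{verbatim}
# Sketch of the sampling-based profiler described in the discussion.
# Assumes the Linux cpufreq sysfs interface; pid, cpu, nominal_khz and
# settle are illustrative parameters.
import os, signal, time

CPUFREQ = "/sys/devices/system/cpu/cpu{}/cpufreq/scaling_cur_freq"

def read_khz(cpu):
    with open(CPUFREQ.format(cpu)) as f:
        return int(f.read())

def sample(pid, cpu, nominal_khz, settle=0.005):
    running = read_khz(cpu)        # frequency while the workload runs
    os.kill(pid, signal.SIGSTOP)   # pause the workload
    time.sleep(settle)             # let the core return to nominal
    os.kill(pid, signal.SIGCONT)   # resume the workload
    time.sleep(settle)             # give AVX code time to re-trigger
    resumed = read_khz(cpu)
    reduced = running < nominal_khz
    required = reduced and resumed < nominal_khz
    # required: the reduction re-triggers, so it is caused by ongoing
    # AVX code; reduced but not required: avoidable timeout overhead.
    return reduced, required
\end{verbatim}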
\section{Introduction} Let $M$ be a random variable, taking values in $\overline{\mathbb{N}} = \{1, 2, \ldots\} \cup \{\infty\}$, and let $\xi$ be an independent Bernoulli($p$) random variable. We consider the following simple Recursive Distributional Equation (henceforth abbreviated as RDE): \begin{equation} \label{DE1} Y = \xi\prod_{i=1}^M Y_i+ (1-\xi)(1-\prod_{i=1}^M Y_i). \end{equation} Viewing (\ref{DE1}) as an RDE, we seek a stationary distribution, $\nu$, such that if the $Y_i$ are iid with distribution $\nu$ and are independent of $(M, \xi)$, then $Y$ also has distribution $\nu$. We term (\ref{DE1}) the noisy veto-voter model since, if each $Y_i$ takes values in $\{0,1\}$ with value $0$ being regarded as a veto, then the outcome is vetoed unless either (a) each voter $i$ \lq assents' ($Y_i=1$ for each $1\leq i\leq M$) and there is no noise ($\xi=1$) or (b) someone vetoes, but is reversed by the noise ($\xi=0$). The system was originally envisaged as modelling a representative voting system applied to a veto issue. Thus each representative votes according to their constituency if $\xi=1$ or reverses the decision if $\xi=0$. An alternative interpretation is as a model for a noisy distributed error-reporting system. Here a $0$ represents an error report from a sub-system. Thus there is an error in the system if there is an error in any sub-system (hence the veto structure). Noise can reverse the binary (on-off) report from any sub-system. In this paper, we look for solutions to the RDE (\ref{DE1}) taking values in $[0,1]$. As observed in Aldous and Bandhapadhyay \cite{Aldous}, and as we shall explain in a little more detail in section 2, we may think of (families of) solutions to the RDE as being located at the nodes of a (family) tree (for a Galton-Watson branching process). Actually, for some purposes we shall find it more convenient to embed this family tree into ${\mathbf T}$, the deterministic tree with infinite branching factor of size $\aleph_0$. The generic setup in such circumstances is to find distributional fixed points of the recursion: \begin{equation} \label{Gen} X_u = f(\xi_u;X_{ui}, i\geq 1), \end{equation} where $X_u$ and $\xi_u$ are respectively the value and the noise associated with node $u$, and $ui$ is the address of the $i$th daughter of node $u$. With this model, it is of some interest not only to find solutions to the RDE (\ref{Gen}) but also to answer the question of endogeny: $$ \hbox{\lq is $(X_u; u\in {\mathbf T})$ measurable with respect to $(\xi_u;u\in {\mathbf T})$?'} $$ If this measurability condition holds, then $X_{\cdot}$ is said to be endogenous. In the context of the error-reporting model, endogeny represents the worst possible situation: the top-level error report is based entirely on the noise and is uninfluenced by the error state of low-level sub-systems. Similarly, in the veto-voter paradigm, endogeny represents the situation where the voice of the \lq little man' is completely swamped by reversals by officials. In this paper we will first show how to transform (\ref{DE1}) into the new RDE: \begin{equation} \label{DE2} X = 1-\prod_{i=1}^N X_i, \end{equation} for a suitable random variable $N$, independent of the $X_i$. Then we will not only find all the solutions to this RDE on $[0,1]$, their basins of attraction and the limit cycles of the corresponding map on the space of distributions on $[0,1]$, but also give necessary and sufficient conditions for the corresponding solutions on ${\mathbf T}$ to be endogenous.
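Before developing the theory, it may help to view (\ref{DE1}) as an algorithm. The following population-dynamics sketch maintains a pool of samples of $Y$ and repeatedly resamples it through the recursion; the geometric family-size law for $M$ and the parameter values are purely illustrative assumptions, not part of the model.
\begin{verbatim}
# Population-dynamics sketch for the RDE (1): resample a pool of
# Y-values through Y = xi*prod(Y_i) + (1-xi)*(1-prod(Y_i)).
# The geometric family-size law and all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sweep(pool, p, r):
    """One resampling sweep of the pool through the RDE."""
    new = np.empty_like(pool)
    for k in range(len(pool)):
        m = rng.geometric(r)                 # family size M >= 1
        prod = np.prod(rng.choice(pool, m))  # product over M pool samples
        xi = rng.random() < p                # Bernoulli(p) noise
        new[k] = prod if xi else 1.0 - prod
    return new

pool = rng.random(10_000)                    # start from Uniform[0,1]
for _ in range(200):
    pool = sweep(pool, p=0.7, r=0.5)
print(pool.mean())    # approximates the mean of a stationary law
\end{verbatim}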
The fundamental technique we use, which we believe is entirely novel, is to consider the distribution of a solution conditional upon the noise and to identify endogeny by showing that this conditional distribution is concentrated on $\{0,1\}$. \section{Notation and a transformation of the RDE}\label{tnote} \subsection{Tree-indexed solutions} We seek distributions $\nu$ on $[0,1]$ such that if $(Y_i; 1\leq i)$ are independent with distribution $\nu$, then the random variable $Y$ satisfying (\ref{DE1}) also has distribution $\nu$. More precisely, writing ${\mathcal P}$ for the set of probability measures on $[0,1]$, suppose that $M$ has distribution $d$ on $\overline{\mathbb{N}}$ and define the map $${\mathcal T} \equiv {\mathcal T}_d : {\mathcal P} \rightarrow {\mathcal P}$$ by setting ${\mathcal T}(\nu)$ to be the law of the random variable $Y$ given by (\ref{DE1}) when the $Y_i$ are independent and identically distributed with distribution $\nu$ and are independent of $M$; we then seek fixed points of the map ${\mathcal T}$. The existence and uniqueness of fixed points of this type of map, together with properties of the solutions, are addressed by Aldous and Bandhapadhyay in \cite{Aldous} (the reader is also referred to \cite{Band} and \cite{Rusc} and the references therein). The linear and min cases are particularly well-surveyed, though we are dealing with a non-linear case to which the main results do not apply. A convenient generalisation of the problem is the so-called {\em tree-indexed} problem, in which we think of the $Y_i$ as being marks associated with the daughter nodes of the root of $T$, a family tree of a Galton-Watson branching process. We start at some level $m$ of the random tree. Each vertex $v$ in level $m-1$ of the tree has $M_v$ daughter vertices, where the $M_v$ are i.i.d. with common distribution $d$, and has associated with it noise $\xi_v$, where the $(\xi_u;u\in T)$ are iid and are independent of the $(M_u;u\in T)$. By associating with daughter vertices independent random variables $Y_{vi}$ having distribution $\nu$, we see that $Y_v$ and $Y_{vi};1\leq i\leq M_v$ satisfy equation (\ref{DE1}). In this setting the notion of endogeny was introduced in \cite{Aldous}. Loosely speaking, a solution to the tree-indexed problem (which we will define precisely in the next section) is said to be endogenous if it is a function of the initial data or noise alone, so that no additional randomness is present. It is convenient to work on a tree with infinite branching factor and then think of the random tree of the previous paragraph as being embedded within it. An initial ancestor (in level zero), which we denote $\emptyset$, gives rise to a countably infinite number of daughter vertices (which form the members of the first generation), each of which gives rise to an infinite number of daughters (which form the members of the second generation), and so on. We assign each vertex an address according to its position in the tree: the members of the first generation are denoted $1, 2, \ldots$, the second $11, 12, \ldots, 21, 22, \ldots, 31, 32, \ldots$, etc., so that vertices in level $n$ of the tree correspond to sequences of positive integers of length $n$. We also write $uj, j = 1, 2, \ldots$ for the daughters of a vertex $u$. We write ${\mathbf T}$ for the collection of all vertices or nodes (i.e.
${\mathbf T} = \bigcup_{n=0}^\infty \mathbb{N}^n$) and think of it as being partitioned by depth, that is, as being composed of levels or generations, in the way described, and define the depth function $|\cdot|$ by $|u| = n$ if vertex $u$ is in level $n$ of the tree. Associated to each of the vertices $u \in {\mathbf T}$ are iid random variables $M_u$ with distribution $d$, telling us the (random) number of offspring produced by $u$. The vertices $u1, u2, \ldots, uM_u$ are thought of as being alive (relative to $\emptyset$) and the $\{uj: j> M_u\}$ as dead. We can now write our original equation as a recursion on the vertices of ${\mathbf T}$: \begin{equation} \label{RDE1} Y_u = \xi_u\prod_{i=1}^{M_u} Y_{ui}+(1-\xi_u)(1-\prod_{i=1}^{M_u} Y_{ui}), \; u \in {\mathbf T}. \end{equation} The advantage of the embedding now becomes clear: we can talk about the RDE at any vertex in the infinite tree and yet, because the product only runs over the live daughters relative to $u$, the random Galton-Watson family tree is encoded into the RDE as noise. \subsection{The transformed problem} It is a relatively simple matter to transform the RDE (\ref{RDE1}) into the following, simpler, RDE: \begin{equation}\label{RDE} X_u = 1-\prod_{i=1}^{N_u} X_{ui}, \; u \in {\mathbf T}. \end{equation} To do so, first note that if we colour red all the nodes, $v$, in the tree ${\mathbf T}$ for which $\xi_v=0$ then it is clear that we may proceed down each line of descent from a node $u$ until we hit a red node. In this way, we either ``cut'' the tree at a collection of nodes which we shall view as the revised family of $u$, or not, in which case $u$ has an infinite family. Denote this new random family size by $N_u$; then $$Y_u= 1-\prod_{i=1}^{N_u} Y_{\hat{ui}},$$ if $u$ is red, where $\hat{ui}$ denotes the $i$th red node in the revised family of $u$. Now condition on node $u$ being red; then with this revised tree we obtain the RDE (\ref{RDE}). It is easy to see that if the original tree has family size PGF $G$, then the family size in the new tree corresponds to the total number of deaths in the original tree when it is independently thinned, with the descendants of each node being pruned with probability $q$. It is easy to obtain the equation for the PGF, $H$, of the family size $N_u$ on the new tree: \begin{equation}\label{trans} H(z)=G(pH(z)+qz). \end{equation} \section{The discrete and conditional probability solutions} We begin with some notation and terminology. We say that the random variables in (\ref{RDE}) are weakly stationary if $X_u$ has the same distribution for every $u \in {\mathbf T}$. The stationarity of the $X_u$ corresponds to $X_u$ having as distribution an invariant measure for the distributional equation (\ref{RDE}). \begin{defn} We say that the process (or collection of random variables) ${\boldsymbol{X}} = (X_u; u \in {\mathbf T})$ is a {\em tree-indexed} solution to the RDE (\ref{RDE}) if \begin{enumerate} \item for every $n$, the random variables $(X_u; |u| = n)$ are mutually independent and independent of $(N_v; |v| \leq n-1)$; \item for every $u \in {\mathbf T}$, $X_u$ satisfies \[X_u = 1 - \prod_{i=1}^{N_u} X_{ui},\] and the $(X_u; u \in {\mathbf T})$ are weakly stationary. \end{enumerate} \end{defn} Notice that these conditions determine the law of ${\boldsymbol{X}}$.
This means that a tree-indexed solution is also stationary in the strong sense, that is, a tree-indexed solution is ``translation invariant'' with respect to the root (if we consider the collection ${\boldsymbol{X}}^v = (X_u; u \in {\mathbf T}_v)$, where ${\mathbf T}_v$ is the sub-tree rooted at $v$, then ${\boldsymbol{X}}^v$ has the same distribution as ${\boldsymbol{X}}$ for any $v \in {\mathbf T}$). Furthermore, we say that such a solution is {\em endogenous} if it is measurable with respect to the random tree (i.e. the collection of family sizes) $(N_u; u \in {\mathbf T})$. As we remarked in the introduction, in informal terms this means that the solution depends only on the noise, with no additional randomness coming from the boundary of the tree. See \cite{Aldous} for a thorough discussion of endogeny together with examples. \newline \newline The following is easy to prove. \begin{lem} Let $(X_u; u \in {\mathbf T})$ be a tree-indexed solution to the RDE (\ref{RDE}). Then the following are equivalent: \begin{enumerate} \item $X$ is endogenous; \item $X_\emptyset$ is measurable with respect to $\sigma(N_u; u \in {\mathbf T})$; \item $X_u$ is measurable with respect to $\sigma(N_v; v \in {\mathbf T})$ for each $u \in {\mathbf T}$; \item $X_u$ is measurable with respect to $\sigma(N_v; v \in {\mathbf T}_u)$ for each $u \in {\mathbf T}$. \end{enumerate} \end{lem} \begin{remark} Notice that if a tree-indexed solution to (\ref{RDE}) is endogenous then property (1) of a tree-indexed solution is automatic: for every $u \in {\mathbf T}$, $X_u$ is measurable with respect to $\sigma(N_v; v \in {\mathbf T}_u)$ and hence is independent of $(N_v; |v|\leq n-1)$. \end{remark} \begin{lem} \label{invmsr} There exists a unique probability measure on $\{0,1\}$ which is invariant under (\ref{DE2}). \end{lem} \begin{proof} Let $X$ be a random variable whose distribution is concentrated on $\{0,1\}$ and which is invariant under (\ref{DE2}). Let ${\mu^1}= \mathbb{P}(X = 1)$. We then have $\mathbb{P}(X = 0) = 1- {\mu^1}$ and \[ \mathbb{P}(X_i = 1\hbox{ for } i = 1, \ldots, N) = \sum_n \mathbb{P}(X_i = 1\hbox{ for } i = 1, \ldots, n|N=n)\mathbb{P}(N=n) = \sum_n (\mu^1)^n\,\mathbb{P}(N=n) = H(\mu^1). \] Now, $X = 0$ if and only if $X_i = 1$ for $i = 1, \ldots, N$. Hence a necessary and sufficient condition for invariance is \begin{equation}\label{mu} 1 - {\mu^1}=H({\mu^1}). \end{equation} Now let \[ K(x) \buildrel def\over = H(x) + x - 1. \] Since $H$ is a generating function and $H(0) = 0$, we have $K(0) = -1 < 0$ and $K(1) > 0$, so that $K$ is guaranteed to have a zero in $(0,1)$, and it is unique since the mapping $x \mapsto H(x) + x$ is strictly increasing. \end{proof} We can now deduce that there exists a tree-indexed solution on $\{0,1\}^{{\mathbf T}}$ to the RDE (\ref{RDE}) by virtue of Lemma 6 of \cite{Aldous}. \begin{thm} Let ${\boldsymbol{S}} = (S_u; u \in {\mathbf T})$ be a tree-indexed solution on $\{0,1\}^{{\mathbf T}}$ to the RDE (\ref{RDE}) (i.e. the $S_u$ have the invariant distribution on the two point set $\{0,1\}$), which we will henceforth refer to as the {\em discrete solution}. Let $C_u = \mathbb{P}(S_u = 1|N_v; v \in {\mathbf T})$. Then ${\boldsymbol{C}} = (C_u; u \in {\mathbf T})$ is the unique {\em endogenous} tree-indexed solution to the RDE.
\end{thm} \begin{proof} To verify the relationship between the random variables, we have, writing ${\boldsymbol{N}}= (N_u; u \in {\mathbf T})$ and ${\boldsymbol{N}}_u=(N_v; v\in{\mathbf T}_u)$, \[ C_u = {\mathbb P}(S_u=1|{\boldsymbol{N}} )=\mathbb{E}[1_{(S_u = 1)}|{\boldsymbol{N}} ] = \mathbb{E}[S_u|{\boldsymbol{N}} ] \] \[=\mathbb{E}[1 - \prod_{i=1}^{N_u} S_{ui}|{\boldsymbol{N}}]\] \[=1 - \mathbb{E}[\prod_{i=1}^{N_u} S_{ui}|{\boldsymbol{N}}]\] \[= 1 - \prod_{i=1}^{N_u} \mathbb{E}[S_{ui}|{\boldsymbol{N}}]\] \[= 1 - \prod_{i=1}^{N_u} C_{ui},\] since the $S_{ui}$ are independent and ${\boldsymbol{N}}$ is strongly stationary. To verify stationarity, let \[ C_u^n = \mathbb{P}(S_u = 1|N_v; |v|\leq n). \] Then the sequence $(C_u^n)_{n \geq 1}$ is a uniformly bounded martingale and so converges almost surely and in $L^2$ to a limit which must in fact be $C_u$. Now, we can write $C_u^n$ as \begin{eqnarray}\label{***} C_u^n &=& 1 - \prod_{i_1=1}^{N_u} C_{ui_1}^n\\ &=& 1 - \prod_{i_1=1}^{N_u} \left(1 -\prod_{i_2=1}^{N_{u{i_1}}}\bigl(...(1-\prod_{i_{n-1-|u|}=1}^{N_{ui_1i_2\ldots i_{n-2-|u|}}} (1-(\mu^1)^{N_{ui_1i_2\ldots i_{n-1-|u|}}}))...\bigr)\right)\nonumber\\ & \rightarrow& C_u \hbox{ a.s.}.\nonumber\\ \nonumber \end{eqnarray} This corresponds to starting the distributional recursion at level $n$ of the tree with unit masses at $\mu^1$. Now, $(C_u^n; u \in {\mathbf T})$ is stationary since each $C_u^n$ is the same function of ${{\boldsymbol{N}}}_u$, which are themselves stationary. Since $C_u$ is the (almost sure) limit of a sequence of stationary random variables, it follows that ${{\boldsymbol{C}}}= (C_u; u \in {\mathbf T})$ is stationary. Notice that the conditional probability solution, ${{\boldsymbol{C}}}$, is automatically endogenous since $C_u$ is $\sigma(N_v; v \in {\mathbf T}_u$)-measurable for every $u \in {\mathbf T}$ and hence $(C_u; |u| = n)$ is independent of $(N_u; |u| \leq n-1)$. The independence of the collection $(C_u; |u| = n)$ follows from the fact that the $((S_u,{{\boldsymbol{N}}}_u); |u| = n)$ are independent. Finally, notice that if $(L_u; u\in{\mathbf T})$ solves the RDE (\ref{RDE}) and the $L_u$ are integrable, then $m\buildrel def\over ={\mathbb E} L_u$ must satisfy (\ref{mu}) and hence must equal $\mu^1$. It now follows that $L_u^n\buildrel def\over = {\mathbb E}[L_u|N_v; |v| \leq n]=C_u^n$, since at depth $n$, $L_u^n=\mu^1$, so that $L_u^n$ also satisfies equation (\ref{***}) and hence must equal $C_u^n$. Now $L_u^n\rightarrow L_u$ a.s. and so, if $L$ is endogenous then it must equal $C$. This establishes that $C$ is the unique endogenous solution. \end{proof} \begin{remark} Notice that if ${\boldsymbol{S}}$ is endogenous then ${{\boldsymbol{C}}} = {\boldsymbol{S}}$ almost surely, so that if ${\boldsymbol{S}}$ and ${{\boldsymbol{C}}}$ do not coincide then ${\boldsymbol{S}}$ cannot be endogenous. \end{remark} \section{The moment equation and uniqueness of solutions}\label{momsec} Many of the results proved in this paper rely heavily on the analysis of equation (\ref{moment}) below. \begin{thm} Any invariant distribution for the RDE (\ref{RDE}) must have moments $(m_n)_{n \geq 0}$ satisfying the equation \begin{equation} \label{moment} H(m_n) - (-1)^nm_n = \sum_{k=0}^{n-1} \binom{n}{k}(-1)^km_k,\end{equation} where $m_n^{1+1/n} \leq m_{n+1} \leq m_n$ and $m_0 = 1$. \end{thm} \begin{proof} Let $X$ be a random variable whose distribution is invariant for the RDE and write $m_k = \mathbb{E}[X^k]$.
Applying the RDE (\ref{RDE}) to $(1-X)^n$ we have \[\mathbb{E}[(1-X)^n] = \mathbb{E}[\prod_{i=1}^N X_i^n] = H(m_n).\] On the other hand, by expanding $(1-X)^n$ we obtain \[\mathbb{E}[(1-X)^n] = \mathbb{E}[\sum_{k=0}^n \binom{n}{k}(-1)^k X^k]\] \[= \sum_{k=0}^n \binom{n}{k}(-1)^k m_k,\] so that \[H(m_n) = \sum_{k=0}^n \binom{n}{k} (-1)^k m_k.\] The condition $m_{n+1} \leq m_n$ follows from the fact that the distribution is on $[0,1]$. The other condition follows from the monotonicity of $L^p$ norms. \end{proof} As an example, if the random variable $N$ has generating function $H(x) = x^2$ (i.e. $N \equiv 2$), the moment equation tells us that \[m_1^2 + m_1 - 1 = 0\] so that $m_1 = (\sqrt{5} - 1)/2$. For $m_2$ we have \[m_2^2 - m_2 - (2 - \sqrt{5}) = 0\] so that $m_2 = m_1$ or $m_1^2$, and so on. In fact the two possible moment sequences turn out to be $m_0 = 1, m_n = (\sqrt{5}-1)/2$ for $n \geq 1$ or $m_0 = 1, m_1 = (\sqrt{5} - 1)/2, m_n = m_1^n$ for $n \geq 2$. \newline We suppose from now on that $H(0)=0$ and $H$ is strictly convex (so that ${\mathbb P}(2\leq N <\infty)>0$). \newline We now state the main result of the paper. \begin{thm} \label{main} Let $S = (S_u; u \in {\mathbf T})$ and $C = (C_u; u \in {\mathbf T})$ be, respectively, the discrete solution and corresponding conditional probability solution to the RDE (\ref{RDE}). Let $\mu^1 = \mathbb{E}[S_u]$. Then \begin{enumerate} \item $S$ is endogenous if and only if $H'(\mu^1) \leq 1$; \item $C$ is the unique endogenous solution; \item the only invariant distributions for the RDE (\ref{RDE}) are those of $S_\emptyset$ and $C_\emptyset$. \end{enumerate} \end{thm} The proof of the theorem relies on several lemmas. For (1) we extend a result of Warren \cite{Warren} by first truncating $N$ and then taking limits. First, however, we give some consequences of the moment equation (\ref{moment}): \begin{lem}\label{momlem} There are at most two moment sequences satisfying (\ref{moment}). Moreover, the first moment $m^1$ is unique and equal to $\mu^1$, $1>m^1>\frac{1}{2}$, and in the case that $H'(m^1)\leq 1$ there is only one moment sequence satisfying (\ref{moment}). \end{lem} \begin{proof} Uniqueness of $\mu^1$ (the root of $f(m^1)=1$, where $f:t\mapsto H(t)+t$) has already been shown in Lemma \ref{invmsr}. Now set $$g(x)=H(x)-x, $$ then $g$ is strictly convex on $[0,1]$ with $g(0)=0$ and $g(1-)=H(1-)-1\leq 0$. Thus there are at most two solutions of $g(x)=1-2m^1$. Since $m^1$ itself is a solution, it follows that $1-2m^1\leq 0$ and there is at most one other solution. There is another solution with $m^2< m^1$ if and only if $m^1$ is greater than $\mu^*$, the argmin of $g$, and this is clearly true if and only if $g'(m^1)>0\Leftrightarrow H'(m^1)>1$. Suppose that this last inequality holds, so that there is a solution, $m^2$, of $g(x)=1-2m^1$ with $m^2<\mu^*<m^1$. There is at most one solution of $$ f(x)=1-3m^1+3m^2, $$ and if it exists take this as $m^3$. Similarly, there is at most one solution of $g(x)=1-4m^1+6m^2-4m^3$ to the left of $\mu^*$ and this is the only possibility for $m^4$. Iterating the argument, we obtain at most one strictly decreasing sequence $m^1, m^2, \ldots$. \end{proof} \subsection{The case of a bounded branching factor} Recall that the random family size $N$ may take the value $\infty$. \begin{lem} \label{properties} Define $N^n = \min(n,N)$ and denote its generating function by $H_n$.
Then $N^n$ is bounded and \begin{enumerate} \item $H_n(s) \geq H(s)$ for all $s \in [0,1]$; \item $H_n \rightarrow H$ uniformly on compact subsets of $[0,1)$; \item $H_n'\rightarrow H'$ uniformly on compact subsets of $[0,1)$. \end{enumerate} \end{lem} We leave the proof to the reader. The following lemma will be used in the proof of Theorem \ref{conv}. \begin{lem}\label{prop2} Let $C_u^{(n)} = \mathbb{P}(S_u = 1|N^n_u; u \in {\mathbf T})$ denote the conditional probability solution for the RDE (\ref{RDE}) with $N$ replaced by $N^n$. Let $\mu_n^k = \mathbb{E}[(C_u^{(n)})^k]$ denote the corresponding $k$th moment and let $\mu^k = \mathbb{E}[(C_u)^k]$. Let $\mu^*_n$ denote the argmin of $g_n(x)\buildrel def\over = H_n(x)-x$ and let $\mu^2_{n,m}$ denote that root of the equation, \begin{equation} \label{modified} g_n(x) = 1 - \mu^1_m - \mu^1_n, \end{equation} which lies to the left of $\mu^*_n$ (i.e. the lesser of the two possible roots). Then $\mu_n^k \rightarrow \mu^k$ for $k = 1,2$ and $\mu^2_{n,m} \rightarrow \mu^2$ as $\min(n,m)\rightarrow \infty$. \end{lem} \begin{proof} For the case $k = 1$, consider the graphs of the functions $H_n(x) + x$ and $H(x) + x$. We have $H_n(x) \geq H(x)$ for all $x \geq 0$ and for all $n \geq 1$ so that $\mu_n^1$ is bounded above by $\mu^1$ for every $n$, since $\mu^1_n$ and $\mu^1$ are respectively the roots of $$ H_n(x)+x=1\hbox{ and }H(x)+x=1. $$ Furthermore, since $H_n$ decreases to $H$ pointwise on $[0,1)$, it follows that the $\mu_n^1$ are increasing. The $\mu_n^1$ must therefore have a limit, which we will denote $\widehat{\mu}$. It follows from Lemma \ref{properties} that, since $\mu^1<1$, $H_n(\mu_n^1) \rightarrow H(\widehat{\mu})$. Hence \[1 = H_n(\mu_n^1) + \mu_n^1 \rightarrow H(\widehat{\mu}) + \widehat{\mu},\] so that $\widehat{\mu}$ is a root of $H(x) + x = 1$. It follows, by uniqueness, that $\widehat{\mu} = \mu^1$. \newline \newline For the case $k = 2$ we consider the graphs of $g_n(x)$ and $g(x)$. We first show that $\mu_n^2 \rightarrow \mu^2$ and then that $\mu_{n,m}^2 \rightarrow \mu^2$ as $\min(n,m) \rightarrow \infty$. \newline \newline To show that $\mu_n^2 \rightarrow \mu^2$ we argue that $\mu^2$ is the only limit point of the sequence $(\mu_n^2)_{n \geq 1}$. Notice that, since $\mu_n^1 \rightarrow \mu^1$ and $\mu_n^2$ satisfies \[H_n(\mu_n^2) - \mu_n^2 = 1 - 2\mu_n^1,\] the only possible limit points of the sequence $(\mu_n^2)_{n \geq 1}$ are $\mu^1$ and $\mu^2$. Now, either $\mu^1\leq \mu^*$, in which case $\mu^1=\mu^2$ or, $\mu^2 \leq \mu^* < \mu^1<1$. In the latter case, it is easy to show that $\mu^*_n\rightarrow \mu^*$ (by uniform continuity of $g_n'$) and so, since $\mu^1_n\rightarrow \mu^1$ it follows that $$ \mu^1_n>\mu^*_n, $$ for sufficiently large $n$, and hence $$\mu^2_n\leq \mu^*_n, $$ for sufficiently large $n$. In either case, the only possible limit point is $\mu^2$; since the $\mu_n^2$ are bounded they must, therefore, converge to $\mu^2$. \newline \newline We conclude the proof by showing that $\mu^2$ is the only limit point of the sequence $(\mu^2_{n,m})$. Since $\mu_m^1, \mu_n^1 \rightarrow \mu^1$ as $\min(n,m) \rightarrow \infty$ and $\mu_{n,m}^2$ satisfies (\ref{modified}), the only possible limit points of the sequence $(\mu_{n,m}^2)_{m,n \geq 1}$ are $\mu^1$ and $\mu^2$. Once more, consider the two cases: $$ \mu^1\leq\mu^*\hbox{ and }\mu^1>\mu^*. 
$$ In the first case, $\mu^1_n=\mu^2_n$, for sufficiently large $n$, so that $\mu^2$ is the only limit point; in the second case $$ \mu^1=\liminf_n \mu^1_n> \mu^*=\limsup_n\mu^*_n, $$ and since $\mu^2_{n,m}\leq \mu^*_n$, $\mu^1$ cannot be a limit point. Thus, in either case, $\mu^2$ is the unique limit point and hence is the limit. \end{proof} \begin{remark}Notice that the method of the proof can be extended to prove that $\mu_n^k \rightarrow \mu^k$ for any $k$. \end{remark} \begin{thm}\label{conv} $C_u^{(n)}$ converges to $C_u$ in $L^2$. \end{thm} \begin{proof} Let $n \geq m$. Define $E_{m,n} = \mathbb{E}[(C_u^{(m)} - C_u^{(n)})^2]$. Expanding this, we obtain \[E_{m,n} = \mu_m^2 + \mu_n^2 - 2r_{m,n},\] where $r_{m,n} = \mathbb{E}[C_u^{(m)}C_u^{(n)}]$. On the other hand, by applying the RDE (\ref{RDE}) once, we obtain \[E_{m,n} = \mathbb{E}[(\prod_{i=1}^{N_u^n} C_{ui}^{(n)} - \prod_{i=1}^{N_u^m}C_{ui}^{(m)})^2]\] \[= H_m(\mu_m^2) + H_n(\mu_n^2) - 2\mathbb{E}[\prod_{i=1}^{N_u^m} C_{ui}^{(m)}\prod_{i=1}^{N_u^n} C_{ui}^{(n)}].\] We can bound $E_{m,n}$ above and below as follows: since each $C^{(k)}_{ui}$ is in $[0,1]$, omitting terms from the product above increases it, while adding terms decreases it. Thus, since $n \geq m$, $N^n_u\geq N^m_u$, and so replacing $N^n_u$ by $N^m_u$ in the product above increases it while replacing $N^m_u$ by $N^n_u$ decreases it. Thus we get: \[ H_m(\mu_m^2) + H_n(\mu_n^2) - 2H_m(r_{m,n}) \leq E_{m,n} \leq H_m(\mu_m^2) + H_n(\mu_n^2) - 2H_n(r_{m,n}). \] Using the upper bound we have \[ 2H_n(r_{m,n}) \leq H_m(\mu^2_m) + H_n(\mu^2_n) - E_{m,n} = H_m(\mu^2_m) + H_n(\mu^2_n) - \mu^2_m - \mu^2_n + 2r_{m,n}. \] The moment equation (\ref{moment}) tells us that $H_m(\mu^2_m) - \mu^2_m = 1 - 2\mu^1_m$ and that $H_n(\mu^2_n) - \mu^2_n = 1 - 2\mu^1_n$. Hence \[2H_n(r_{m,n}) \leq 1 - 2\mu^1_m + \mu^2_m + 1 - 2\mu^1_n + \mu^2_n - \mu^2_m - \mu^2_n + 2r_{m,n}, \] so that, on simplifying, \[H_n(r_{m,n}) - r_{m,n} \leq 1 - \mu^1_m - \mu^1_n.\] Recall that the equation $H_n(x) - x = 1 - \mu^1_m - \mu^1_n$ has (at most) two roots, the lesser of which we denoted $\mu^2_{m,n}$. Let $\mu^1_{m,n}$ be the other (larger) root (or 1, if the second root does not exist). Then, since $H_n(x) - x$ is convex, $\mu_{n,m}^2 \leq r_{m,n} \leq \mu_{n,m}^1$ for all $m,n$ and hence $\liminf_{m \rightarrow \infty} r_{m,n} \geq \mu^2$ since $\mu_{n,m}^2 \rightarrow \mu^2$ by Lemma \ref{prop2}. \newline \newline On the other hand, H\"older's inequality tells us that $r_{m,n} \leq \sqrt{\mu_m^2\mu_n^2}$ and so it follows that $\limsup_{m \rightarrow \infty} r_{m,n} \leq \mu^2$ since $\mu_m^2, \mu_n^2 \rightarrow \mu^2$ by Lemma \ref{prop2}. Hence $r_{m,n} \rightarrow \mu^2$ as $\min(m,n) \rightarrow \infty$ and \[E_{m,n} \rightarrow \lim_{m,n \rightarrow \infty} \mu_m^2 + \mu_n^2 - 2r_{m,n} = \mu^2 + \mu^2 - 2\mu^2 = 0,\] showing that $(C_u^{(n)})$ is Cauchy in $L^2$. It now follows, by the completeness of $L^2$, that $C_u^{(n)}$ converges. Since $C_u^{(n)}$ is $\sigma({\boldsymbol{N}})$-measurable, the limit $L_u$ of the $C_u^{(n)}$ must also be $\sigma({\boldsymbol{N}})$-measurable for each $u$, and the collection $(L_{ui})_{i\geq 1}$ must be independent and identically distributed on $[0,1]$, with common mean $\mu^1<1$. Moreover, by strong stationarity of the $C^{(n)}$s, the $L_u$s are strongly stationary.
To verify that $L_{\emptyset}$ is the conditional probability solution, notice that \[1_{E_n} C_\emptyset^{(n)} = (1 - \prod_{i=1}^{N_\emptyset^n} C_i^{(n)}) 1_{E_n}\] \[= (1 - \prod_{i=1}^{N_\emptyset} C_i^{(n)}) 1_{E_n},\] where $E_n = \{N_\emptyset \leq n\}$. As $n \rightarrow \infty$, $E_n\uparrow E\buildrel def\over = (N<\infty)$; furthermore, since the $C_i^{(n)}$ converge in $L^2$, they do so in probability. We may assume without loss of generality, therefore, that $C^{(n)}_i$ converges almost surely for each $i$ so that, in the limit, \begin{equation}\label{endo2} 1_E L_\emptyset = \lim 1_{E_n}C^{(n)}_\emptyset =\lim 1_{E_n} (1 - \prod_{i=1}^{N_\emptyset} C^{(n)}_i)=1_E(1-\prod_{i=1}^{N_\emptyset}L_i) \hbox{ a.s.} \end{equation} It is easy to show that $$\prod_{i=1}^\infty L_i=0\hbox{ a.s.} $$ while $$1_{E^c}C^{(n)}_\emptyset=1_{E^c}(1-\prod_{i=1}^{n}C^{(n)}_i) \rightarrow 1_{E^c}\hbox{ a.s.}, $$ so that \begin{equation}\label{endo3} 1_{E^c}L_\emptyset=\lim 1_{E^c}C^{(n)}_\emptyset=1_{E^c}. \end{equation} Thus, adding equations (\ref{endo2}) and (\ref{endo3}), we see that $$ L_\emptyset=(1-\prod_{i=1}^{N_\emptyset}L_i),$$ and so $L$ is an endogenous solution to the RDE. It follows from uniqueness that $L$ must be the conditional probability solution $C$. \end{proof} \subsection{Proof of Theorem \ref{main}}We are now nearly in a position to finish proving Theorem \ref{main}. To recap, we have shown in Lemma \ref{momlem} that there are at most two distributions which solve the RDE (\ref{DE2}), corresponding to the `moment sequences' $\mu^1,\mu^1,\ldots$ and $\mu^1,\mu^2,\ldots$. The first of these is the moment sequence corresponding to the distribution on $\{0,1\}$ with mass $\mu^1$ at 1. The second may or may not be a true moment sequence and is equal to the first if and only if $H'(\mu^1)\leq 1$. Moreover, there is only one endogenous solution, and this corresponds to the conditional probability solution $C$; thus if we can show that $C$ is not discrete (i.e. is not equal to $S$) whenever $H'(\mu^1)> 1$ then we will have proved the result. We need to recall some theory from \cite{Warren}. Consider the recursion \[\xi_u = \phi(\xi_{u0}, \xi_{u1},..., \xi_{u(d-1)}, \epsilon_u), \ \ \ \ u \in \Gamma_d,\] where the $\xi_u$ take values in a finite space ${\mathcal S}$, the ``noise'' terms $\epsilon_u$ take values in a space $E$, $\Gamma_d$ is the deterministic $d$-ary tree and $\phi$ is symmetric in its first $d$ arguments. We suppose that the $\epsilon_u$ are independent with common law $\nu$ and that there exists a measure $\pi$ which is invariant for the above recursion (i.e. $\pi$ is a solution of the associated RDE). Let $u_0 = \emptyset, u_1, u_2, \ldots$ be an infinite sequence of vertices starting at the root, with $u_{n+1}$ being a daughter of $u_n$ for every $n$. For $n \leq 0$, define $\xi_n = \xi_{u_{-n}}$. Then, under the invariant measure $\pi$, the law of the sequence $(\xi_n; n \leq 0)$, which, by the symmetry of $\phi$, does not depend on the choice of sequence of vertices chosen, is that of a stationary Markov chain. Let $P^2$ be the transition matrix of a Markov chain on ${\mathcal S}^2$, given by \[P^2((x_1,x_1'), A \times A') = \int_{\mathcal S} \int_E 1_{\{\phi(x_1,x_2, \ldots, x_d, z) \in A,\ \phi(x_1', x_2, \ldots, x_d, z) \in A'\}} d\nu(z)d\pi(x_2)...d\pi(x_d).\] Let $P^-$ be the restriction of $P^2$ to non-diagonal terms and $\rho$ the Perron-Frobenius eigenvalue of the matrix corresponding to $P^-$.
\newline \newline The following theorem gives a necessary and sufficient condition for endogeny of the tree-indexed solution corresponding to $\pi$. This is a small generalisation of Theorem 1 of \cite{Warren}. \begin{thm} \label{endthm1} The tree-indexed solution to the RDE associated with \[\xi_u = \phi(\xi_{u0}, \xi_{u1},..., \xi_{u(d-1)}, \epsilon_u),\] corresponding to the invariant measure $\pi$, is endogenous if $d\rho < 1$; it is {\em non-}endogenous if $d\rho >1$. In the critical case $d\rho = 1$, let $\mathcal{H}_0$ be the collection of $L^2$ random variables measurable with respect to $\xi_\emptyset$ and let $\mathcal{K}$ denote the $L^2$ random variables measurable with respect to $(\epsilon_u; u \in \Gamma_d)$. Then endogeny holds in this case provided $P^-$ is irreducible and $\mathcal{H}_0 \cap \mathcal{K}^\perp = \{0\}$. See \cite{Warren} for full details. \end{thm} \begin{thm} \label{endthm2} Consider the RDE \begin{equation} \label{truncRDE} X_u = 1 - \prod_{i=1}^{N^n_u} X_{ui}. \end{equation} Then, by Lemma \ref{invmsr}, there exists an invariant probability measure on $\{0,1\}$ for (\ref{truncRDE}). Let $\mu_n^1$ denote the probability of a 1 under this invariant measure. Then the corresponding tree-indexed solution is endogenous if and only if $H_n'(\mu_n^1) \leq 1$. \end{thm} \begin{proof} Let $N^* = \operatorname{ess\,sup} N < \infty$ be a bound for $N$. We can then think of the random tree with branching factor $N$ as being embedded in an $N^*$-ary tree. Each vertex has $N^*$ daughter vertices and the first $N$ of these are thought of as being {\em alive} (the remaining being {\em dead}). In this context our RDE reads \[X = 1 - \prod_{{\rm live}\ u} X_u.\] We now compute the transition probabilities from the previous theorem. Consider first the transition from $(0,1)$ to $(1,0)$. The first coordinate automatically maps to 1 and the second maps to 0 provided all of the inputs not on the distinguished line of descent are equal to 1. The conditional probability of the vertex on the distinguished line of descent being alive is $N/N^*$ since there are $N^*$ vertices, of which $N$ are alive. The probability of the remaining $N-1$ vertices each taking value 1 is $(\mu_n^1)^{N-1}$ and so the probability of a transition from $(0,1)$ to $(1,0)$, conditional on $N$, is just \[ 1_{(N \geq 1)} \frac{(\mu_n^1)^{N-1}N}{N^*}. \] Taking expectations, the required probability is \[ \mathbb{E}[1_{(N \geq 1)} \frac{(\mu_n^1)^{N-1}N}{N^*}] = \frac{\mathbb{E}[1_{(N \geq 1)} N (\mu_n^1)^{N-1}]}{N^*} = \frac{H_n'(\mu_n^1)}{N^*}. \] The probability of a transition from $(1,0)$ to $(0,1)$ is the same by symmetry. Hence $P^-$ is given by \[P^- = \left(% \begin{array}{cc} 0 & \frac{H_n'(\mu_n^1)}{N^*} \\ \frac{H_n'(\mu_n^1)}{N^*} & 0 \\ \end{array}% \right),\] and the Perron-Frobenius eigenvalue $\rho$ is $\frac{H_n'(\mu_n^1)}{N^*}$. By Theorem \ref{endthm1}, the criterion for endogeny is $N^*\rho \leq 1$, i.e. $H_n'(\mu_n^1) \leq 1$, provided that, in the critical case $H_n'(\mu^1_n) = 1$, we verify the stated non-degeneracy conditions. \newline \newline It is easily seen that $P^-$ is irreducible. For the other criterion, let $X \in \mathcal{H}_0 \cap \mathcal{K}^\perp$ so that $X = f(X_\emptyset)$ for some $L^2$ function $f$ and $\mathbb{E}[XY] = 0$ for all $Y \in \mathcal{K}$. Taking $Y = 1$, we obtain $\mathbb{E}[X] = 0$.
Writing $X$ as \[X = a 1_{(X_\emptyset = 1)} + b1_{(X_\emptyset = 0)},\] where $a,b$ are constants, we obtain \[X = a 1_{(X_\emptyset = 1)} - \frac{a \mu_n^1}{1-\mu_n^1} 1_{(X_\emptyset = 0)}.\] For convenience we will scale by taking $a = 1$ (we assume that $X \neq 0$): \[X = 1_{(X_\emptyset = 1)} - \frac{ \mu_n^1}{1-\mu_n^1} 1_{(X_\emptyset = 0)}. \] Now, for each $k$ take $Y_k = 1_{(N_\emptyset = k)} \in \mathcal{K}$. Then \[ \mathbb{E}[XY_k] = \mathbb{E}[1_{(N_\emptyset =k)} (1_{(X_\emptyset = 1)} - \frac{\mu_n^1}{1-\mu_n^1} 1_{(X_\emptyset = 0)})] \] \[ = \mathbb{P}(N = k)[1 - (\mu_n^1)^k - \frac{(\mu_n^1)^{k+1}}{1 -\mu_n^1}] \] \[ =\mathbb{P}(N = k)(1 - \frac{(\mu_n^1)^k}{1 -\mu_n^1}) . \] Now if we sum this expression over $k$ we get $1-\frac{H_n(\mu^1_n)}{1-\mu_n^1}=0$, as we should. However, since $Y_k \in \mathcal{K}$ for every $k$, each of these terms must vanish individually. But at least two of the probabilities $\mathbb{P}(N=k)$ are non-zero by assumption (at least for sufficiently large $n$), whilst the factor $(1 - \frac{(\mu_n^1)^k}{1 - \mu_n^1})$ can vanish for at most one choice of $k$. Hence at least one of the terms is non-zero, and this contradicts the assumption that $X\in \mathcal{H}_0 \cap \mathcal{K}^\perp$. \end{proof} {\em Proof of the remainder of Theorem \ref{main}.} We prove that $H'(\mu^1) > 1$ implies ${\boldsymbol{S}}$ is not endogenous, so that ${\boldsymbol{C}}$ cannot equal ${\boldsymbol{S}}$. \newline \newline By Theorem \ref{endthm2} we know that the RDE (\ref{truncRDE}) has two invariant distributions if and only if $H_n'(\mu^1_n) > 1$. But we know that $C_u^{(n)}$ converges to $C_u$ in $L^2$ and hence $\mu_n^2 \rightarrow \mu^2 \neq \mu^1$, so that $S_u$ and $C_u$ have different second moments. It now follows that $S_u$ does not have the same distribution as $C_u$. Since $[0,1]$ is bounded, the sequence of moments $(\mu^k)$ determines a unique distribution, which is therefore that of $C_u$: see Theorem 1 of Chapter VII.3 of Feller \cite{Book}. \hfill$\square$ \section{Basins of attraction} Now we consider the {\em basin of attraction} of the endogenous solution. That is, we ask for which initial distributions the corresponding solution at the root, $X_\emptyset$, converges (in law) to the endogenous solution. \begin{defn} Let $\varsigma$ be the law of the endogenous solution. Suppose that we insert independent, identically distributed random variables with law $\nu$ at level $n$ of the tree and apply the RDE to obtain the corresponding solution $X_u^n(\nu)$ (with law ${\mathcal T}^{n-|u|}(\nu)$) at vertex $u$. The basin of attraction $B(\pi)$ of any solution with law $\pi$ is given by \[B(\pi) = \{\nu \in {\mathcal P} : {\mathcal T}^n(\nu) \buildrel \hbox{ weak}^*\over \rightarrow \pi\},\] which is, of course, equivalent to the set of distributions $\nu$ for which $X_u^n(\nu)$ converges in law to a solution $X$ of the RDE, with law $\pi$. \end{defn} \subsection{The unstable case: $H'(\mu^1) > 1$} \begin{lem} Suppose that $H'(\mu^1) > 1$. Then $X_u^n(\nu) \buildrel L^2\over \rightarrow C_u$, the endogenous solution, for any $\nu$ with mean $\mu^1$ other than the discrete measure on $\{0,1\}$. \end{lem} \begin{proof} Let $E_k = \mathbb{E}[X_u^n(\nu)^2]$, where $k = n - |u|$, and let $r_k = \mathbb{E}[C_u X_u^n(\nu)]$. Then \[\mathbb{E}[(X_u^n(\nu) - C_u)^2] = E_k - 2r_k + \mu^2.\] Now, \[E_k = \mathbb{E}[(1 - 2\prod_{i=1}^{N_u} X_{ui}^n(\nu) + \prod_{i=1}^{N_u} X_{ui}^n(\nu)^2)]\] \[= 1 - 2H(\mu^1) + H(E_{k-1}).\] This is a recursion for $E_k$ with at most two fixed points (recall that the equation $H(x) - x = \hbox{constant}$ has at most two roots).
Recalling the moment equation (\ref{moment}), these are easily seen to be $\mu^1$ and $\mu^2$, the first and second moments of the endogenous solution. We have assumed that $\nu$ is not the discrete distribution and so its second moment (i.e. $E_0$) must be strictly less than $\mu^1$. Now, under the assumption that $H'(\mu^1)>1$, $\mu^1$ and $\mu^2$ lie either side of the minimum $\mu^*$ of $H(x) - x$, and $H'(\mu^*) = 1$, so that $H'(\mu^2) < 1$. Hence $\mu^2$ is the stable fixed point and it now follows that $E_k$ converges to $\mu^2$. \newline \newline The recursion for $r_k$ is essentially the same as that for $E_k$: \[\mu^2 - r_k = H(\mu^2) - H(r_{k-1}).\] This has $\mu^1$ and $\mu^2$ as fixed points and, since \[r_0 = \mathbb{E}[C_u X_u(\nu)] \leq \sqrt{\mathbb{E}[C_u^2] \mathbb{E}[X_u(\nu)^2]} < \sqrt{\mu^1 \mu^1} = \mu^1,\] we are in the same situation as with $E_k$. That is, we start to the left of $\mu^1$ and, because $H'(\mu^1) > 1$, we conclude that $\mu^1$ is repulsive and it follows that $r_k$ converges to $\mu^2$ under the assumptions of the lemma. Hence \[\mathbb{E}[(X_u^n(\nu) - C_u)^2] = E_k - 2r_k + \mu^2 \rightarrow 0.\] \end{proof} \begin{thm} Let $\delta$ denote the discrete distribution on $\{0,1\}$ with mean $\mu^1$. Then \[B(\varsigma) = \{\nu \in {\mathcal P}: \int x d\nu(x) = \mu^1 \hbox{ and } \nu \neq \delta\}.\] That is, $B(\varsigma)$ is precisely the set of distributions on $[0,1]$ with the correct mean (except the discrete distribution with mean $\mu^1$). \end{thm} \begin{proof} We have already shown that \[\{\nu \in {\mathcal P}: \int x d\nu(x) = \mu^1 \hbox{ and } \nu \neq \delta\} \subseteq B(\varsigma).\] Since the identity is bounded on $[0,1]$, we conclude that \[\mathbb{E}X_u^n(\nu) \rightarrow \mathbb{E}C_u,\hbox{ if }\nu\in B(\varsigma), \] so that $\nu \in B(\varsigma)$ only if the mean of ${\mathcal T}^n(\nu)$ converges to $\mu^1$. The mean of $X_u^n(\nu)$ is obtained by iterating the map $f: t \mapsto 1 - H(t)$ $n$ times, starting with the mean of $\nu$. This mapping has a unique fixed point $\mu^1$ and, since $H'(\mu^1) > 1$, it is repulsive. It follows that the only way we can have convergence in mean is if we start with the correct mean, that is, if $\nu$ has mean $\mu^1$. Hence \[B(\varsigma) \subseteq \{\nu \in {\mathcal P}: \int x d\nu(x) = \mu^1 \hbox{ and } \nu \neq \delta\}.\] \end{proof} \subsection{The stable case: $H'(\mu^1) \leq 1$} \begin{thm} \label{stable} Let $b(\mu^1)$ be the basin of attraction of $\mu^1$ under the iterative map for the first moment, $f:t \mapsto 1 - H(t)$. Then \[ B(\varsigma) = \{\nu \in {\mathcal P} : \int xd\nu(x) \in b(\mu^1)\}. \] \end{thm} Consider once again $\mathbb{E}[(X_u^n(\nu) - C_u)^2]$. Let $m_k^\theta = \mathbb{E}X_u^n(\nu)^\theta$, where $k = n - |u|$. Then \[m^2_k = \mathbb{E}(1 - 2\prod_{i=1}^{N_u} X_{ui}^n(\nu) + \prod_{i=1}^{N_u} X_{ui}^n(\nu)^2)\] \[= 1 - 2H(m^1_{k-1}) + H(m^2_{k-1}).\] Recalling that $r_k = \mathbb{E}[C_u X_u^n(\nu)]$, we have \[r_k = \mathbb{E}[(1 - \prod_{i=1}^{N_u} C_{ui})(1 - \prod_{i=1}^{N_u} X_{ui}^n(\nu))]\] \[= \mathbb{E}[(1 - \prod_{i=1}^{N_u} C_{ui} - \prod_{i=1}^{N_u} X_{ui}^n(\nu) + \prod_{i=1}^{N_u} C_{ui}X_{ui}^n(\nu))]\] \[= 1 - H(\mu^1) - H(m^1_{k-1}) + H(r_{k-1}).\] We now turn our attention to analysing the dynamics of $m^2_k$ and $r_k$. We will concentrate on the equation for $m^2_k$, as the equation for $r_k$ is essentially the same.
By assumption, $m^1_k$ converges to $\mu^1$ and so we may approximate $m^1_k$, for $k \geq k_\epsilon$ (say), by $\mu^1 \pm \epsilon$, for some small $\epsilon > 0$. \begin{lem} The trajectory $l_k$ of the dynamical system defined by the recursion \[ l_k = 1 - 2H(\mu^1 + \epsilon) + H(l_{k-1}), \;\;l_{k_\epsilon} = m^2_{k_\epsilon}, \] is a lower bound for $m^2_k$ for all $k \geq k_\epsilon$, where $k_\epsilon$ is a positive integer chosen so that \[|m^1_k - \mu^1| < \epsilon, \hbox{ for } k \geq k_\epsilon.\] \end{lem} The proof is obvious. \begin{lem} Let \[f_\epsilon(x) = 1 - 2H(\mu^1 + \epsilon) + H(x), \ \ \ \ \ \ x \in [0,1].\] Then, for sufficiently small $\epsilon > 0$, $f_\epsilon$ has a unique fixed point $\mu^1(\epsilon)$ for which $\mu^1(\epsilon)<\mu^*$. Moreover, as $\epsilon\rightarrow 0$, $\mu^1(\epsilon)\rightarrow \mu^1$. \end{lem} \begin{proof} This follows from uniform continuity, the fact that $H(\mu^1+\epsilon)>H(\mu^1)$ and the fact that $H'(\mu^1)\leq 1\Rightarrow \mu^1\leq\mu^*$. \end{proof} \begin{lem} $l_k$ converges to $\mu^1(\epsilon)$. \end{lem} \begin{proof} We have $l_k = f_\epsilon^{k-k_\epsilon}(l_{k_\epsilon})$ and so we need only verify that $l_{k_\epsilon}$ is in the basin of attraction of $\mu^1(\epsilon)$ and that $\mu^1(\epsilon)$ is stable. Write $p(\epsilon)$ for the second fixed point of $f_\epsilon$, taking $p(\epsilon)=1$ if no second fixed point exists. We know that \[f_\epsilon(\mu^1 + \epsilon) < \mu^1 + \epsilon\] since $1 - H(\mu^1 + \epsilon) < 1 - H(\mu^1) = \mu^1$, and so it must be the case that $\mu^1 + \epsilon \in (\mu^1(\epsilon), p(\epsilon))$. It now follows that $l_{k_\epsilon} < p(\epsilon)$ since $l_{k_\epsilon} \leq m^1_{k_\epsilon} < \mu^1 + \epsilon$. In the strictly stable case $H'(\mu^1) < 1$, the stability of $\mu^1(\epsilon)$ follows from the fact that $\mu^1(\epsilon)$ converges to $\mu^1$ as $\epsilon$ tends to zero (by the previous lemma) and therefore $\mu^1(\epsilon)$ can be made arbitrarily close to $\mu^1$ by choosing $\epsilon$ to be sufficiently small. This means that for sufficiently small $\epsilon$, $H'(\mu^1(\epsilon)) < 1$ by the continuity of $H'$. In the critical case $H'(\mu^1) = 1$, we have $\mu^1(\epsilon) < \mu^1 $, so that (by strict convexity) $H'(\mu^1(\epsilon)) < 1$. In either case it now follows that $f_\epsilon^{k-k_\epsilon}(l_{k_\epsilon})$ converges to $\mu^1(\epsilon)$. \end{proof} {\em Proof of Theorem \ref{stable}.} The preceding lemmas tell us that \[\liminf_{k \rightarrow \infty} m^2_k \geq \lim_{k \rightarrow \infty} l_k = \mu^1(\epsilon).\] Letting $\epsilon$ tend to zero, we obtain \[\liminf_{k \rightarrow \infty} m^2_k \geq \mu^1.\] The fact that $m^2_k \leq m^1_k$ for every $k$ gives us the corresponding inequality for the lim sup: \[\limsup_{k \rightarrow \infty} m^2_k \leq \lim_{k \rightarrow \infty} m^1_k = \mu^1.\] We conclude that $m^2_k$ converges to $\mu^1$. \newline \newline Now, \[\mathbb{E}[(X_u^n(\nu) - C_u)^2] = m^2_k - 2r_k + \mu^2,\] so that $\mathbb{E}[(X_u^n(\nu) - C_u)^2] \rightarrow 0$, remembering that in the stable case the discrete solution and endogenous solution coincide (i.e. $\mu^1 = \mu^2$). We have now shown that \[\{\nu \in {\mathcal P} : \int x d\nu(x) \in b(\mu^1)\} \subseteq B(\varsigma),\] and the necessity of convergence in mean ensures that we have the reverse inclusion. This completes the proof. \hfill$\square$ \section{Outside the basin of attraction of the endogenous solution} In this section we examine what happens if we iterate distributions with mean outside the basin of attraction of the endogenous solution.
\begin{defn} Recall that a map $f$ has an $n$-cycle starting from $p$ if $f^n(p) = p$, where $f^n$ denotes the $n$-fold composition of $f$ with itself. \end{defn} It is easily seen that the map for the first moment $f: t \mapsto 1 - H(t)$ can have only one- and two-cycles. This is because the iterated map $f^{(2)}:t \mapsto 1 - H(1-H(t))$ is increasing in $t$ and hence can have only one-cycles. Notice also that the fixed points (or one-cycles) of $f^{(2)}$ come in pairs: if $p$ is a fixed point then so too is $1 - H(p)$. \newline \newline We consider the iterated RDE: \begin{equation}\label{RDE3} X=1-\prod_{i=1}^{N_\emptyset}(1-\prod_{j=1}^{N_i}X_{ij}). \end{equation} This corresponds to the iterated map on laws on $[0,1]$, ${\mathcal T}^2$, where ${\mathcal T}$ is given at the beginning of section \ref{tnote}. We denote a generic two-cycle of the map $f^{(2)}$ by the pair $(\mu^1_+,\mu^1_-)$. \begin{thm} Suppose that $(\mu^1_+,\mu^1_-)$ is a two-cycle of $f^{(2)}$. There are at most two solutions of the RDE (\ref{RDE3}) with mean $\mu^1_+$. There is a unique endogenous solution $C^+$, and a (possibly distinct) discrete solution, $S^+$, taking values in $\{0,1\}$. The endogenous solution $C^+$ is given by $P(S^+=1|N_v; v \in {\mathbf T})$ (just as in the non-iterated case). The solutions are distinct if and only if $H'(\mu^1_-)H'(\mu^1_+)>1$, i.e. if and only if $\mu^1_+$ (or $\mu^1_-$) is an unstable fixed point of $f^{(2)}$. \end{thm} \begin{proof} This uses the same method as the proofs of results in section \ref{momsec}. First, it is clear that $S^+$ is a solution to (\ref{RDE3}), where $P(S^+=1)=\mu^1_+=1-P(S^+=0)$. Now take interleaved tree-indexed solutions to the RDE on the tree ${\mathbf T}$, corresponding (on consecutive layers) to means $\mu^1_+$ and $\mu^1_-$. Then we define $C_{(n)}^+=P(S^+_\emptyset=1|N_v; |v|\leq 2n)= 1 - \prod_{i_1=1}^{N_\emptyset}(1 - \prod_{{i_2}=1}^{N_{i_1}}(...(1-(\mu^1_+)^{N_{i_1i_2\ldots i_{2n-1}}})...))$. It follows that $C_{(n)}^+$ converges a.s. and in $L^2$ to $C^+$ and that this must be the unique endogenous solution (since if $Z$ is any solution with mean $\mu^1_+$ then $E[Z_\emptyset|N_v; |v|\leq 2n]=C_{(n)}^+$). As in Lemma \ref{momlem}, we establish that there are at most two solutions by showing that there are at most two possible moment sequences for a solution, and that if $\mu^1_+$ is stable (for $f^{(2)}$) then the only possible moment sequence corresponds to the discrete solution $S^+$. To do this, note that, denoting a possible moment sequence starting with first moment $\mu^1_+$ by $(\mu^k_+)$, we have \begin{equation}\label{R4} H(\mu^k_-)=H(\sum_{j=0}^k(-1)^j{{k}\choose{j}}H(\mu^j_+))=\sum_{j=0}^k(-1)^j{{k}\choose{j}}\mu^j_+. \end{equation} Then we look for solutions of \begin{equation}\label{R5} H(\sum_{j=0}^{k-1}(-1)^j{{k}\choose{j}}H(\mu^j_+)+(-1)^kH(t))=\sum_{j=0}^{k-1}(-1)^j{{k}\choose{j}}\mu^j_++(-1)^kt, \end{equation} in the range where the argument of $H$ on the left-hand side is non-negative and less than 1. In this range $H$ is increasing and convex, so there are at most two solutions. Suppose that $\mu^1_+$ is a stable fixed point; then the unique moment sequence is constant, since the other solution of $$ g(t)\buildrel def\over = H(1-2H(\mu^1_+)+H(t))-(1-2\mu^1_++t)=0 $$ must be greater than $\mu^1_+$ (because $g'(\mu^1_+)=H'(\mu^1_+)H'(\mu^1_-)-1\leq 0$). If $\mu^1_+$ is unstable, then there are potentially two solutions for $\mu^2_+$, one of which is $\mu^1_+$.
Taking the other potential solution, and seeking to solve (\ref{R5}), one of the solutions will give a value for $\mu^k_-$ greater than $\mu^*>\mu^2_-$, which is not feasible, so there will be at most one sequence with $\mu^2_+\neq\mu^1_+$. Now, as in the proof of Theorem 4.9, we can show that, if $\mu^1_+$ is unstable then, in the corresponding RDE with branching factor truncated by $n$, the two solutions to the RDE are distinct for large $n$, and the endogenous solution converges to $C^+$ in $L^2$ as $n\rightarrow \infty$. It follows that there are two distinct solutions in this case. \end{proof} Given a fixed point $\mu^1_+$ of $f^{(2)}$, denote the law of the corresponding conditional probability solution by $\varsigma_+$. Denote the corresponding basin of attraction (under ${\mathcal T}^2$) by $B({\varsigma_+})$ and denote the basin of attraction of $\mu^1_+$ under the map $f^{(2)}$ by $b^2(\mu^1_+)$. Then \begin{thm} The following dichotomy holds: \begin{itemize} \item[(i)] if $H'(\mu^1_+)H'(\mu^1_-)>1$, then $$ B({\varsigma_+})=\{\pi:\, \pi\hbox{ has mean }\mu^1_+\hbox{ and }\pi\hbox{ is not concentrated on }\{0,1\}\}. $$ \item[(ii)] if $H'(\mu^1_+)H'(\mu^1_-)\leq 1$ then $$ B({\varsigma_+})=\{\pi:\, \pi\hbox{ has mean }m\in b^2(\mu^1_+) \}. $$ \end{itemize} \end{thm} \begin{proof} This can be proved in exactly the same way as Theorems 5.3 and 5.4. \end{proof} \section{Examples} We conclude with some examples. \begin{example} We consider first the case where $N$ is Geometric($\alpha$), so that $P(N=k)=\beta^{k-1}\alpha$ and $H(s)=\frac{\alpha s}{1-\beta s}$ (with $\beta=1-\alpha$). It follows that $$ f^{(2)}(s)=s, $$ so that every pair $(s,\frac{1-s}{1-\beta s})$ is a two-cycle of $f$ and the unique fixed point of $f$ in $[0,1]$ is $1/(1+\sqrt \alpha)$. It also follows that $s$ is a neutrally stable fixed point of $f^{(2)}$ for each $s\in[0,1]$. Thus we see that the unique endogenous solution to the original RDE is discrete and the value at the root of the tree is the a.s. limit of $1-\prod_{i_1=1}^{N_\emptyset}(1- \prod_{i_2=1}^{N_{i_1}}(\ldots (1-(1/(1+\sqrt \alpha))^{N_{i_1,\ldots,i_n}})\ldots ))$. Moreover, for any $s$, there is a unique solution to the iterated RDE with mean $s$, and it is discrete and endogenous and is the a.s. limit of $1-\prod_{i_1=1}^{N_\emptyset}(1- \prod_{i_2=1}^{N_{i_1}}(\ldots (1-s^{N_{i_1,\ldots,i_{2n-1}}})\ldots )).$ \end{example} \begin{example} Consider the original noisy veto-voter model on the binary tree. It follows from (\ref{trans}) that $$ H(z)=(pH(z)+qz)^2\Rightarrow H(z)=\frac{1-2pqz-\sqrt{1-4pqz}}{2p^2}. $$ This is non-defective if and only if $p\leq \frac{1}{2}$ (naturally), i.e. if and only if extinction is certain in the trimmed tree from the original veto-voter model. It is fairly straightforward to show that $H'(\mu^1)>1\Leftrightarrow p<\frac{1}{2}$. Thus, the endogenous solution is non-discrete precisely when the trimmed tree is sub-critical. \end{example} \begin{example} In contrast to the case of the veto-voter model on the binary tree, the veto-voter model on a ternary tree can show a non-endogenous discrete solution even when the trimmed tree is supercritical. More precisely, the trimmed tree is supercritical precisely when $p>\frac{1}{3}$, but the discrete solution is non-endogenous if and only if $p<p_e^{(3)}\buildrel def\over =\frac{3\sqrt 3-4}{3\sqrt 3-2}$, and $p_e^{(3)}>\frac{1}{3}$. \end{example}
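The binary-tree example can also be checked numerically. The following sketch solves the functional equation $H(z)=(pH(z)+qz)^2$ by monotone fixed-point iteration from $H\equiv 0$, compares the result with the closed form above, locates $\mu^1$ by bisection on $H(x)+x-1$, and estimates $H'(\mu^1)$ by a finite difference. The grid size, iteration count and tolerances are illustrative choices.
\begin{verbatim}
# Numerical check of the binary veto-voter example: solve
# H(z) = (p H(z) + q z)^2 by iteration, compare with the closed form,
# and test the endogeny criterion H'(mu1) <= 1.
import numpy as np

p = 0.4; q = 1.0 - p            # p < 1/2: trimmed tree sub-critical

def H_closed(z):
    return (1.0 - 2*p*q*z - np.sqrt(1.0 - 4*p*q*z)) / (2*p**2)

z = np.linspace(0.0, 1.0, 1001)
H = np.zeros_like(z)            # monotone iteration upwards from H = 0
for _ in range(2000):
    H = (p*H + q*z)**2
print(np.max(np.abs(H - H_closed(z))))     # agreement with closed form

def K(x):                       # mu1 is the unique zero of K in (0,1)
    return H_closed(x) + x - 1.0

lo, hi = 0.0, 1.0               # bisection: K(0) = -1 < 0 < K(1)
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if K(mid) < 0 else (lo, mid)
mu1 = 0.5*(lo + hi)

eps = 1e-6                      # central finite difference for H'
dH = (H_closed(mu1 + eps) - H_closed(mu1 - eps)) / (2*eps)
print(mu1, dH)   # for p < 1/2 we expect dH > 1: S is non-endogenous
\end{verbatim}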
\section{Introduction} Phase separation is important for white dwarf stars \cite{wdwarf}. As a star cools and crystallization takes place, the crystal phase is enriched in oxygen while the liquid is enriched in carbon. We believe phase separation may also be important for neutron stars that accrete material from a companion. This material can undergo nuclear reactions involving rapid proton capture (the rp process) to synthesize a variety of medium mass nuclei \cite{rpash}. Further accretion increases the density of a fluid element until crystallization occurs. However, as we explicitly demonstrate with molecular dynamics simulations, crystallization is accompanied by chemical separation. The composition of the new solid crust is very different from that of the remaining liquid ocean. This changes many properties of the crust and can impact many observables. With chemical separation, the liquid ocean, see Fig. \ref{Fig0}, is greatly enriched in low atomic number $Z$ elements. Carbon, if present, may be depleted in the crystal (crust) and enriched in the liquid ocean phase. Also, chemical separation may change the thermal conductivity of the crust and its temperature profile. Indeed, some neutron stars are observed to produce energetic X-ray bursts known as superbursts. These are thought to involve the unstable thermonuclear burning of carbon \cite{superbursts, superbursts2, superbursts3}. However, it is unclear how the initial carbon concentration is obtained and how the ignition temperature is reached. Chemical separation may significantly change the thickness, shear modulus, and breaking strain of the crust. This could change the shape of a neutron star and the radiation of periodic gravitational waves \cite{jones2005,haskell}. Furthermore, changes in the crust could change the properties of quasi-periodic oscillations that may be observable in thermonuclear bursts. \begin{figure}[ht] \begin{center} \includegraphics[width=2.75in,angle=0,clip=true] {crust.eps} \caption{(Color on line) Schematic diagram of the surface of an accreting neutron star. This paper focuses on chemical separation upon crystallization at the boundary between the liquid ocean and the solid crust. We find that the ocean is enriched in low $Z$ elements. Note that the boundary between the inner and outer crust is not shown.} \label{Fig0} \end{center} \end{figure} The process we consider here is distinct from sedimentation, which may also occur at lower densities where the ions are fluid \cite{peng}. However, we are not aware of any previous calculations of chemical separation from crystallization for accreting neutron stars. Jones has considered how a range of compositions may change the properties of the crust of a non-accreting neutron star. In ref. \cite{jones88} he considers phase separation, but only based on quite early work on the free energy of the two-component Coulomb plasma. Instead, Jones suggests that the system will form an amorphous solid \cite{jones}. The ash resulting from rapid proton capture (rp process) nuclear reactions is expected to have a complex composition involving a number of different chemical elements \cite{rpash, rpash2}. Unfortunately, it can be difficult to construct the phase diagram for a multi-component system. The pure one component plasma (OCP) phase diagram is well known. The liquid solidifies when the ratio of a typical Coulomb energy to the thermal energy $kT$ is $\Gamma \approx 175$ \cite{ocp}.
The parameter $\Gamma$ is defined as \begin{equation} \Gamma=\frac{Z^2 e^2}{a T}, \label{gamma} \end{equation} where the ion charge is $Ze$, the temperature is $T$, and the ion sphere radius $a$ describes a typical distance between ions, $a=(3/(4\pi n))^{1/3}$. Here $n$ is the ion (number) density. The phase diagram for binary mixtures has also been determined. See for example \cite{cophasediagram}. For binary mixtures, the solid phase is enriched in the high $Z$ ion and the liquid phase is enriched in the low $Z$ ion. Often, the theoretical phase diagram is constructed from extremely accurate calculations of the free energies of the solid and liquid phases. The melting point is determined by equating these two free energies. Very accurate calculations are needed because the free energies are nearly parallel as a function of $T$. A small error in the free energy of one phase can lead to a large error in the melting point. Therefore, it may be very difficult to compute the free energy of multi-component systems with enough accuracy to determine the phase diagram. Instead, in this paper we determine the phase diagram of a multi-component system directly via molecular dynamics (MD) simulations. Our simulation volume contains regions of both the liquid and solid phase. This approach has many advantages. It is simple and robust. Delicate free energy calculations are not needed. One can directly measure the composition of the two phases that are in equilibrium. Furthermore, one can run simulations with arbitrarily complicated compositions. However, there are two limitations to direct molecular dynamics simulation. First, finite size effects may be significant because a large fraction of the ions are near the interfaces between the two phases. We minimize finite size effects by using a moderately large number of ions, 27,648, and we measure the composition of the two phases in regions that are away from the interfaces. Second, it can take a long time for the two-phase system to come into thermodynamic equilibrium. We address non-equilibrium effects by running for a total simulation time of 151 million fm/c (over six million MD time steps) and by monitoring the time dependence of the composition of the two phases. Still, as we discuss below, the system may not be in full equilibrium and this may be an important question for further work. Nevertheless, we start with equal compositions and find dramatically different compositions for the liquid and solid, where the difference has increased with simulation time. If the system is not in equilibrium by the end of our simulation, we expect the difference between liquid and solid to only increase further with time. Therefore, we do not think non-equilibrium effects will change our conclusion that {\it the liquid and solid have very different compositions}. This paper is organized as follows. In Section \ref{mdsimulation} we describe our molecular dynamics simulation. Results are presented in Section \ref{results} and we conclude in Section \ref{conclusions}. \section{Molecular Dynamics Simulation} \label{mdsimulation} We now describe the initial composition for our simulation. Schatz et al. have calculated the rapid proton capture (rp) process of hydrogen burning on the surface of an accreting neutron star \cite{rpash}. This produces a variety of nuclei up to atomic masses $A\approx 100$. Gupta et al.
\cite{gupta} then calculate how the composition of the rp process ash evolves because of electron capture and light particle reactions as the material is buried by further accretion. Their final composition, at a density of $2.16\times 10^{11}$ g/cm$^3$ (near neutron drip at the bottom of the outer crust), has forty percent of the ions with atomic number $Z=34$, while an additional ten percent have $Z=33$. The remaining half of the ions have a range of lower $Z$ from $Z=8$ to 32. Finally, there is a small abundance of $Z=36$ and $Z=47$. \begin{figure}[ht] \begin{center} \includegraphics[width=2.75in,angle=270,clip=true] {yz151.ps} \caption{(Color online) Abundance (by number) of chemical elements versus atomic number $Z$. The plus symbols show the initial composition of the mixture. The final compositions of the liquid phase, open green circles, and solid phase, filled red squares, are shown after a simulation time of $151\times 10^6$ fm/c, see Section \ref{results}.} \label{Fig1} \end{center} \end{figure} For simplicity we use the Gupta et al. abundances because we have them available. However, these abundances were calculated assuming no phase separation. Therefore they have not been determined self-consistently if there is phase separation. Nevertheless, we use them to provide a first orientation. Note that we use abundances calculated near $10^{11}$ g/cm$^3$, while the ocean/crust boundary may be near $10^{10}$ g/cm$^3$. The differences in composition at these two densities may be primarily due to a modest amount of electron capture. This should not significantly change our results. Perhaps phase separation will lead to more important changes in the abundances. Chemical separation is expected to change compositions over a large range of densities in addition to densities near the ocean/crust interface. For example, changes in composition of the liquid, near the crust interface, are expected to diffuse throughout the ocean. As we discuss in Section \ref{conclusions}, future calculations of abundances including phase separation would be very useful. \begin{table} \caption{Abundance $y_z$ (by number) of chemical element $Z$. Results are presented for the original mixture and for the final liquid and solid phases after a simulation time of $151\times 10^6$ fm/c, see text.} \begin{tabular}{llll} $Z$ & Mixture & Liquid & Solid \\ 8 & 0.0301 & 0.0529 & 0.0087 \\ 10 & 0.0116 & 0.0205 & 0.0021 \\ 12 & 0.0023 & 0.0043 & 0.0006 \\ 14 & 0.0023 & 0.0043 & 0.0005 \\ 15 & 0.0023 & 0.0043 & 0.0004 \\ 20 & 0.0046 & 0.0055 & 0.0029 \\ 22 & 0.0810 & 0.1024 & 0.0616 \\ 24 & 0.0718 & 0.0816 & 0.0635 \\ 26 & 0.1019 & 0.1065 & 0.1017 \\ 27 & 0.0023 & 0.0025 & 0.0027 \\ 28 & 0.0764 & 0.0744 & 0.0746 \\ 30 & 0.0856 & 0.0773 & 0.0949 \\ 32 & 0.0116 & 0.0099 & 0.0130 \\ 33 & 0.1250 & 0.1079 & 0.1388 \\ 34 & 0.3866 & 0.3408 & 0.4297 \\ 36 & 0.0023 & 0.0021 & 0.0030 \\ 47 & 0.0023 & 0.0030 & 0.0013 \\ \end{tabular} \label{tableone} \end{table} As an initial composition we chose 432 ions with $Z$ and mass number $A$ drawn at random according to the Gupta et al. abundances. This is shown in Fig. \ref{Fig1} and listed in Table \ref{tableone} and closely approximates the original distribution up to the limitations of small statistics. We chose such a small system, 432 ions, to simplify producing the original solid configuration, see below. Note that the liquid and solid phase results shown in Fig. \ref{Fig1} will be discussed in Section \ref{results}. At these densities, electrons form a relativistic degenerate Fermi gas.
The ions are fully pressure ionized and interact with each other via screened Coulomb interactions. The potential between the $i$th and $j$th ion is assumed to be, \begin{equation} v_{ij}(r)=\frac{Z_iZ_j e^2}{r} {\rm e}^{-r/\lambda}. \label{v(r)} \end{equation} Here the ion charges are $Z_i$ and $Z_j$, $r$ is their separation and the electron screening length is $\lambda$. For cold relativistic electrons, the Thomas--Fermi screening length is $\lambda^{-1}=2\alpha^{1/2}k_F/\pi^{1/2}$ where the electron Fermi momentum $k_F$ is $k_F=(3\pi^2n_e)^{1/3}$ and $\alpha$ is the fine structure constant. Finally, the electron density $n_e$ is equal to the ion charge density, $n_e=\langle Z\rangle n$, where $n$ is the ion density and $\langle Z\rangle$ is the average charge. Note that we are interested in temperatures near the melting point, where the ion thermal de Broglie wavelength is much shorter than the inter-ion spacing. Therefore quantum corrections to the ion motion should be very small. We now describe the initial conditions for our classical MD simulation. It can be difficult to obtain an equilibrium crystal configuration for a large system involving a mixture of ions. Therefore, we start with a very small system of 432 ions with random coordinates at a high temperature and cool the system a number of times by re-scaling the velocities until the system solidifies. Here the velocities of all of the ions are multiplied by a common factor so that the kinetic energy per ion is $3T/2$ for a series of decreasing temperatures $T$. Next, four copies of this solid configuration were placed in the top half of a larger simulation volume along with four copies of a 432 ion liquid configuration. The resulting system with 3456 ions was evolved in time until it fully crystallized. Finally, four copies of this 3456 ion crystal were placed in the top half of the final simulation volume along with four copies of a 3456 ion liquid configuration. This final system has 27,648 ions and consists of a solid phase above a liquid phase. Note that the initial compositions of these two phases are equal. Our results can be scaled to different densities. For historical reasons, our simulation was run at a relatively high ion density of $n=7.18\times 10^{-5}$ fm$^{-3}$. This corresponds to a mass density of $1.04\times 10^{13}$ g/cm$^3$. However, this density can be scaled to any desired value $\hat n$ by also changing the temperature $\hat T$ so that $\hat n/\hat T^3=7.18\times 10^{-5}/(0.34360)^3$ (MeV-fm)$^{-3}$; this ensures that the value of $\Gamma$, see Eqs. \ref{gamma} and \ref{gammamix}, remains the same. Note that our simulations depend on the electron screening length $\lambda$ only through the ratio $\lambda/a$. For relativistic electrons, this ratio is independent of density. Therefore, the above scaling works even with electron screening effects. This is because the only length scale in the problem for both electron and ion interactions is related to $n^{-1/3}$. Many run parameters are collected in Table \ref{tabletwo}. We evolve the system in time using the simple velocity Verlet algorithm \cite{verlet} with a time step $\Delta t=25$ fm/c. We use periodic boundary conditions. Our simulation volume is large enough so that the box length $L=727.5$ fm is much larger than the electron screening length $\lambda$. Indeed, $L/2\lambda=13.9$. The screened potential, for two ions separated by a distance $L/2$, is very small. This helps to reduce finite size effects.
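As a concrete check on these units, the following minimal Python sketch (the function names are ours) evaluates the screened interaction of Eq.~\ref{v(r)} and the Thomas--Fermi screening length in the MeV-fm units used throughout; with the run density $n=7.18\times 10^{-5}$ fm$^{-3}$ and the mixture average charge $\langle Z\rangle=29.30$ (cf. Table~\ref{tablethree}) it reproduces $\lambda\simeq 26.2$ fm, in agreement with Table~\ref{tabletwo}.
\begin{verbatim}
import numpy as np

ALPHA = 1.0 / 137.036   # fine structure constant
HBARC = 197.327         # MeV fm, so e^2 = ALPHA*HBARC in these units

def screening_length(n_ion, z_avg):
    """Thomas-Fermi length (fm) for cold relativistic electrons."""
    n_e = z_avg * n_ion                        # electron density, fm^-3
    k_f = (3.0 * np.pi**2 * n_e) ** (1.0 / 3)  # Fermi momentum, fm^-1
    return np.sqrt(np.pi) / (2.0 * np.sqrt(ALPHA) * k_f)

def yukawa(r, z_i, z_j, lam):
    """Screened Coulomb pair interaction in MeV at separation r (fm)."""
    return z_i * z_j * ALPHA * HBARC * np.exp(-r / lam) / r

lam = screening_length(7.18e-5, 29.30)
print(lam)   # ~26.2 fm, matching Table II
\end{verbatim}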
We include the interactions between all particles and do not cut off the potential at large $r$. We evaluate the interaction between two particles as the single interaction with the nearest periodic image. We do not include an Ewald sum over further periodic images because our box is so large that interactions with periodic images other than the nearest one are very small. \begin{table} \caption{Simulation Parameters, see text. The energy per ion at simulation time $t$ is $E(t)$.} \begin{tabular}{ll} Parameter & Value \\ $n$ & $7.18\times 10^{-5}$ fm$^{-3}$ \\ $\lambda$ & 26.17 fm \\ $E(t=6\times 10^6 {\rm fm/c})$ & 328.747886 MeV \\ $E(t=151 \times 10^6 {\rm fm/c})$ & 328.747877 MeV \\ $\langle V\rangle/N$ & 328.23243 MeV \\ $T$ & 0.3436 MeV \\ \end{tabular} \label{tabletwo} \end{table} We start by evolving the system at fixed temperature by periodically re-scaling the velocities. We adjust the temperature so that approximately half of the system remains solid and half liquid. After a simulation time of $5\times 10^6$ fm/c we switch to evolution at constant energy and no longer re-scale the velocities. Thus most of our simulation is in the microcanonical ensemble at fixed energy and volume. We evolve the system at constant energy until the total simulation time is $151 \times 10^6$ fm/c. Energy conservation is excellent. The total energy per ion only changed by 3 parts in $10^8$ from $t=6 \times 10^6$ fm/c to $t=151\times 10^6$ fm/c, see Table \ref{tabletwo}. The simulation was performed on an accelerated MDGRAPE-2 board \cite{mdgrape} and took approximately nine weeks. \section{Results} \label{results} In this section we first test our molecular dynamics procedure by simulating a pure system. Then we present results for our mixture. A 3456 ion pure system, where each ion has the same charge ($Z=29.4$) and mass, is simulated. One half of the initial configuration is solid and the other half is liquid. The system is evolved at constant energy for approximately 300,000 fm/c. During this time, the temperature is expected to evolve to the melting temperature because of the release of latent heat as new solid melts or forms. Near the end of the simulation, we evaluate the temperature as 2/3 of the kinetic energy per ion and from this we determine $\Gamma$. We find $\Gamma=176.1\pm 0.7$. The $\pm 0.7$ error is statistical only and does not include possible errors from finite size or non-equilibrium effects. Our result is in good agreement with the known $\Gamma=175$ melting point of the OCP \cite{potekhin}. This shows that our molecular dynamics procedure can accurately describe crystallization, at least for a pure system. Next, we calculate the latent heat by determining the potential energy difference of 3456 ion pure liquid and pure solid configurations. The potential energy difference is equal to the latent heat if one assumes the difference in density between the phases is small. We find the potential energy difference per ion is $0.758 \pm 0.002 T_M$, where $T_M$ is the melting temperature. Again, the $0.002$ error is statistical only and does not include finite size effects. Our result is in reasonable agreement with the potential energy difference for the OCP of $0.7789 T_M$ \cite{ocp}. Our slightly lower melting temperature and latent heat may reflect screening length effects in a Yukawa fluid compared to the OCP \cite{hamaguchi}. Alternatively, our slightly lower latent heat may reflect finite size effects for a 3456 ion system.
This latent heat is probably not an important heat source compared to the larger energy released from nuclear reactions \cite{gupta}. We go on to present results for our mixture with 27,648 ions. The potential energy per ion $\langle V\rangle/N$ slowly decreases with simulation time until $t\approx 70 \times 10^6$ fm/c. This decrease may be associated with the change in composition of the two phases, see below. Next, small fluctuations are observed in $\langle V\rangle/N$ at later times that appear to be associated with fluctuations in the amount of solid phase present in the simulation. The potential energy averaged over the last $20\times 10^6$ fm/c is given in Table \ref{tabletwo}. The temperature is evaluated as 2/3 of the kinetic energy per ion and we find $T=0.3436$ MeV. The parameter $\Gamma$, Eq. \ref{gamma}, can be evaluated for a mixture of ions. For a single ion of charge $Z_i$, the ion sphere radius $a_i$ is the radius of a sphere that contains $Z_i$ electrons, \begin{equation} a_i=\Bigl[\frac{3Z_i}{4\pi \rho_{ch}}\Bigr]^{1/3}\, , \end{equation} with $\rho_{ch}$ the electron density (or ion charge density). Therefore $\Gamma_i$ for this ion is $\Gamma_i =Z_i^2 e^2/(a_i T)$, and averaging this over a distribution of ions yields $\Gamma$ for the mixture, \begin{equation} \Gamma = \frac{\langle Z^{5/3}\rangle e^2}{T}\Bigl[\frac{4\pi \rho_{ch}}{3}\Bigr]^{1/3}\, . \label{gammamix} \end{equation} Note that for a pure system, this equation reduces to Eq. \ref{gamma}. Table \ref{tablethree} gives values for $\langle Z^{5/3}\rangle$ and $\Gamma$. The value we find for our mixture, $\Gamma=247$, is higher than that for a pure OCP ($\Gamma=175$). This suggests that all of the impurities in our crystal phase have somewhat lowered its melting temperature. However, see the discussion below about chemical separation. The configuration of the 27,648 ions at the end of the simulation is shown in Fig. \ref{Fig3}. The solid phase is visible in the upper half of the simulation volume where the crystal planes are clearly evident. The first interface between solid and liquid is just below the center of the box and the second interface is near the top of the box. Thus the liquid phase extends from the bottom to the top of the box because of periodic boundary conditions. Figure \ref{Fig4} shows the final configuration of the 832 oxygen ions ($Z=8$). The oxygen ions are clearly not distributed uniformly. Comparing Fig. \ref{Fig4} to Fig. \ref{Fig3} we see that the oxygen is greatly depleted in the solid and enriched in the liquid phase. This directly demonstrates phase separation and shows that the composition of the liquid is different from that of the solid. \begin{figure}[ht] \begin{center} \includegraphics[width=3in,angle=0,clip=true] {render151.eps} \caption{(Color online) Configuration of the 27,648 ions at the end of the simulation. The crystal planes of the solid phase are visible in the upper half of the figure. The lower half of the figure shows a liquid phase. The simulation volume is a cube 727.5 fm on a side.} \label{Fig3} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=3in,angle=0,clip=true] {o151.eps} \caption{(Color online) Configuration of the 832 oxygen ions at the end of the simulation. Oxygen is depleted in the solid phase, compare with Fig. \ref{Fig3}.} \label{Fig4} \end{center} \end{figure} To further explore composition differences, we divide the ions into 15 groups according to their $z$ coordinates.
The first group includes $z$ values from 0 to $L/15$, etc. The average charge of all of the ions in each group is plotted in Fig. \ref{Fig5}. Groups 1-5 have a relatively small $\langle Z\rangle$ near $\langle Z\rangle\approx 28$ while groups 8-13 have a large $\langle Z\rangle\approx 30.5$. After comparing Fig. \ref{Fig3} with Fig. \ref{Fig5}, we somewhat arbitrarily identify groups 1-5 as containing liquid phase, groups 8-13 as containing solid phase, and groups 6-7 and 14-15 as containing the two interfaces. See Table \ref{tablethree}. \begin{figure}[ht] \begin{center} \includegraphics[width=2.75in,angle=270,clip=true] {profile151.ps} \caption{Average ion charge $\langle Z\rangle$ in each of 15 sub-volumes. Group 1 is at the bottom and group 15 is at the top of the simulation volume.} \label{Fig5} \end{center} \end{figure} The compositions of the liquid (groups 1-5) and solid (groups 8-13) are plotted in Fig. \ref{Fig1}, note the log scale, and listed in Table \ref{tableone}. The compositions of the liquid and solid are very different. Chemical elements with $Z\le20$ are greatly depleted in the solid phase, while most high $Z$ elements are enhanced in the solid phase. Figure \ref{Fig7} plots the ratio of the composition in the solid to that in the liquid phase for different simulation times. This ratio, at $t=151\times 10^6$ fm/c, is approximately linear in $Z$ for $15\le Z\le 36$. This suggests that the affinity of a given element for the solid decreases as $Z$ decreases from that of the dominant crystal species $Z=34$. Elements with even smaller $Z<15$, while still greatly depleted in the solid, do not follow this linear trend. Perhaps very small $Z$ ions can occupy interstitial sites in the solid in addition to replacing higher $Z$ ions at normal lattice sites. This could enhance their concentration in the solid. Finally, the highest charge ions, $Z=47$, are in fact depleted in the solid. This goes against the general rule that the solid is enriched in high $Z$ ions. Note that there are only a few $Z=47$ ions in the simulation. Therefore statistical errors could be large. Perhaps this enhancement of $Z=47$ in the liquid is a non-equilibrium effect and could go away with further time evolution. However, we note that for $Z=47$ the ratio of the solid concentration to that in the liquid has been decreasing with simulation time. Therefore it may be unlikely for the ratio to change direction and finally increase with further simulation time. Instead, the reduced concentration in the solid may be because $Z=47$ is a much larger charge than the dominant $Z=34$ of the crystal lattice. This large charge may fit poorly into the existing lattice and so the ions may move, instead, into the liquid phase. We now address the important question of the further time dependence of the composition and of whether our simulation has reached thermodynamic equilibrium. In Fig. \ref{Fig7} we plot the ratio of the composition of the solid to that in the liquid for different simulation times $t$. This ratio starts at one and decreases, at small $Z$, with increasing time. Comparing the ratio at $t=113\times 10^6$ fm/c with that at $151\times 10^6$ fm/c reveals a small but perhaps systematic difference. However, we caution that this figure is based on our somewhat arbitrary choices of liquid and solid regions at different times. If this difference with time is real, it may suggest that the composition will continue to evolve very slowly for even larger simulation times. This is an important open question.
In the future we will present results for longer simulation times and for simulations that start with very different compositions for the liquid and solid. Nevertheless, we believe the ratio in Fig. \ref{Fig7} clearly shows that the liquid and solid are expected to have very different compositions. \begin{figure}[ht] \begin{center} \includegraphics[width=2.75in,angle=270,clip=true] {ratiotime.ps} \caption{(Color online) Ratio of composition in the solid phase to that in the liquid phase versus atomic number $Z$, for simulation times $t$ of $10\times 10^6$ fm/c (dotted circles) to $151\times 10^6$ fm/c (solid downward pointing red triangles).} \label{Fig7} \end{center} \end{figure} Finally, we discuss the charge and mass densities of the two phases, see Table \ref{tablethree}. Within small statistical errors, we find that the charge density of the liquid is equal to that of the solid. This implies that the number density of ions is larger in the liquid phase because the average ion charge $\langle Z\rangle$ is larger in the solid phase. This equality of charge densities is expected in order to cancel the electron charge density. We find that the average mass number $\langle A\rangle$ is lower in the liquid than in the solid phase. Finally, the baryon density of the liquid is slightly smaller than that of the solid phase. Note that this small difference in density may have a significant statistical error and may be sensitive to the original distribution of $Z$ and $A$ that we use \cite{gupta}. Neutron stars have very large gravitational fields. Therefore, chemical separation, followed by the sinking of the denser phase, can provide a significant source of heating. Although we find only a small density difference, this should be checked in future work involving different initial compositions. Table \ref{tablethree} also lists values of $\langle Z^{5/3}\rangle$ and $\Gamma$, see Eq. \ref{gammamix}, for the different phases. Because $\langle Z^{5/3} \rangle$ is smaller in the liquid phase, we find that $\Gamma$ is about 10\% smaller in the liquid phase than in the solid phase. \begin{table} \caption{Properties of the original mixture and of the final liquid and solid phases after a simulation time of $151\times 10^6$ fm/c, see text. The impurity parameter $Q$ gives the mean square dispersion in charge, see Eq. \ref{eqq}; $\rho_{ch}$ is the ion charge density and $\rho_b$ is the baryon density. } \begin{tabular}{llll} Parameter & Mixture & Liquid & Solid \\ $\langle Z\rangle$ & 29.30 & 28.04 & 30.48 \\ $Q=(\Delta Z)^2$ & 38.9 & 52.7 & 22.3 \\ $\langle Z^{5/3}\rangle$ & 285.8 & 269.0 & 301.5 \\ $\langle A\rangle$ & 87.62 & 83.8 & 91.2 \\ $\rho_{ch}$ (fm$^{-3}$) & $2.104 \times 10^{-3}$ & $2.100 \times 10^{-3}$ & $2.103 \times 10^{-3}$ \\ $\rho_b$ (fm$^{-3}$) & $6.291\times 10^{-3}$ & $6.277\times 10^{-3}$ & $6.294\times 10^{-3}$ \\ $\Gamma$ & 247 & 233 & 261 \\ \end{tabular} \label{tablethree} \end{table} \section{Discussion and Conclusions} \label{conclusions} How will chemical separation change the structure of a neutron star? Consider a steady state situation where matter accretes onto a thin ocean while ocean material crystallizes to form new neutron star crust. We assume the mass of the crust is much larger than that of the ocean. In steady state, the rate of crystallization is equal to the accretion rate. Furthermore, let us assume the composition of the crust is uniform.
Steady state equilibrium then requires the composition of the crust to be equal to that of the accreting material. However, the composition of the thin ocean must become significantly enriched in light elements so that this liquid can be in thermodynamic equilibrium with the solid crust. Note that the ocean became enriched in light elements because the first material to crystallize was depleted in light elements. Furthermore, this initial change in composition of the crystallized material will not noticeably change the net composition of the crust because the crust is assumed to be much more massive than the ocean. We find a significant enrichment of oxygen ($Z=8$) in the liquid. Our original composition did not include any carbon ($Z=6$) because Gupta et al. \cite{gupta} found the carbon was burned to oxygen. However, if this is incorrect and carbon is present, it should also be enriched in the liquid because it has a similar atomic number to oxygen. Therefore carbon could be significantly enriched in the ocean compared to its concentration in either the accreting material or in the crust. Alternatively, carbon may burn, either stably or unstably, before it reaches this phase transition region. In this case, because there is no carbon remaining, it will not be enriched in the liquid. Very energetic type I X-ray bursts known as superbursts \cite{superobserve,sbo2} are thought to involve unstable carbon burning. Cumming and Bildsten argue that the mass fraction of carbon must be large, $X_{12}\approx 0.05-0.10$, in order for carbon to burn explosively \cite{superbursts,superbursts3}. Chemical separation, which we find upon crystallization, could possibly change carbon concentrations. In addition, chemical separation could change the thermal conductivity of the crust. This will be discussed in later work, and could impact how the ignition temperature is reached for superbursts. In addition, the release of latent heat and/or gravitational potential energy could change the temperature profile of the star. However, the small latent heat of a pure system, which we found at the beginning of Section \ref{results}, and the small density difference between our liquid and solid phases suggest that both of these heat sources may be small. We find a lower melting temperature for our mixture compared to that for a one component plasma. Table \ref{tablethree} lists $\Gamma=233$ for our liquid phase, compared to a pure one component plasma, which melts near $\Gamma=175$. Presumably this is due to the large range of charges $Z$ that are present in our liquid phase. This change in melting point could significantly increase the thickness of the liquid ocean in accreting neutron stars. If the melting point does occur at $\Gamma = 233$, this implies that, for accreting neutron stars with a typical crust temperature of $\approx 5\times 10^8\;\mathrm{K}$, the density at which crystallization occurs is \begin{equation} \rho = 2.1\times 10^{10}\;\mathrm{g\;cm^{-3}} (T/5\times 10^8\;\mathrm{K})^3 (\Gamma/233)^3\, , \end{equation} for the $\langle Z\rangle$, $\langle Z^{5/3} \rangle$ and $\langle A\rangle$ values in Table \ref{tablethree}. A rather high density of $2.1\times 10^{10}$ g/cm$^3$ for crystallization may be an order of magnitude higher than the density where $^{12}$C fuses. Note that the frequency drifts of oscillations observed during X-ray bursts may be a way to test the depth of the crust/ocean interface \cite{piro}.
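As a check on this estimate, the quoted density also follows directly from the simulation reference point ($\rho=1.04\times 10^{13}$ g cm$^{-3}$ at $T=0.3436$ MeV, where the liquid has $\Gamma=233$) together with the $\hat n/\hat T^3$ scaling of Section~\ref{mdsimulation}. A minimal Python sketch (the function name is ours):
\begin{verbatim}
KB = 8.617e-11     # Boltzmann constant, MeV/K

# Reference point of the run (two-phase coexistence, Gamma_liq = 233)
RHO_RUN = 1.04e13  # g/cm^3
T_RUN = 0.3436     # MeV

def crystallization_density(t_kelvin, gamma=233.0):
    """Freezing density from the n/T^3 scaling.

    Scaling n and T at fixed n/T^3 leaves Gamma unchanged, so the
    melting density scales as T^3; the residual Gamma^3 factor
    follows from Gamma ~ n^(1/3)/T at fixed composition.
    """
    return RHO_RUN * (KB * t_kelvin / T_RUN) ** 3 * (gamma / 233.0) ** 3

print(crystallization_density(5e8))  # ~2.1e10 g/cm^3, as quoted above
\end{verbatim}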
However, we caution that the melting point could change if our simulation is not fully in thermodynamic equilibrium. The phase diagram for multi-component systems can be very complicated. For example, an additional new solid phase could form with a lower $Z$ composition after most of the high $Z$ ions have solidified. Therefore, it is important to study further the melting point of these complex mixtures. In future work we will also study phase separation for superburst ashes. Our initial composition was not determined in a way that is consistent with chemical separation. Gupta et al. \cite{gupta} calculated how electron capture and light particle reactions change the composition of rp process ash as it is compressed to higher densities. However, they assumed the composition does not change upon crystallization. We now find the composition changes significantly. Therefore, one should recalculate electron capture and light particle reactions consistently with chemical separation. We have calculated results for only one initial composition. We expect our general result, that the liquid is greatly enriched in low $Z$ elements, to hold for a variety of different compositions. Nevertheless, it is important to study chemical separation for other compositions. Itoh and Kohyama find the thermal conductivity of an impure crystal to be proportional to $1/Q$, where the impurity parameter $Q$ is the square of the dispersion in the ion charges~\cite{thermalcond}, \begin{equation} Q=(\Delta Z)^2 = \langle Z^2\rangle - \langle Z\rangle^2\, . \label{eqq} \end{equation} We find that chemical separation reduces $Q$ from 38.9 in the original mixture to 22.3 in the solid phase, see Table \ref{tablethree}. This is because the solid contains far fewer low $Z$ ions. {\it Therefore, chemical separation may significantly change the thermal conductivity of the crust.} Note that $Q$ for the liquid phase is also greatly changed. In future work we will present molecular dynamics simulation results for the static structure factor of both the liquid and solid phases and calculations of the thermal conductivity. The assumptions that the composition of the crust is uniform and that the system is in steady state equilibrium are likely to be oversimplified. Instead, the crystallization rate and composition may be time dependent. Chemical separation could lead to the formation of layers in the crust. There may be bands of high $Z$ material above or below bands of low $Z$ material. This will increase the complexity of the crust and will likely impact many crust properties. For example, these layers could decrease the net thermal conductivity and change the temperature profile. Alternatively, if the layers are position dependent and dynamically stable, they could change the mass quadrupole moment of the star and enhance the radiation of continuous gravitational waves. The possibility of layers should be studied in future work. In conclusion, nucleosynthesis on the surface of accreting neutron stars likely produces a range of chemical elements. We have performed molecular dynamics simulations of crystallization to see how this complex material forms new neutron star crust. We find chemical separation, with the liquid ocean phase greatly enriched in low atomic number elements compared to the solid crust. This change in composition can change many crust properties such as the thermal conductivity or shear modulus.
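As a final practical note, the charge moments used throughout are elementary to reproduce: the following minimal Python sketch (the helper name is ours) recovers the mixture entries of Table~\ref{tablethree} directly from the Table~\ref{tableone} abundances.
\begin{verbatim}
import numpy as np

# (Z, abundance) of the initial mixture, from Table I
MIX = [(8, 0.0301), (10, 0.0116), (12, 0.0023), (14, 0.0023),
       (15, 0.0023), (20, 0.0046), (22, 0.0810), (24, 0.0718),
       (26, 0.1019), (27, 0.0023), (28, 0.0764), (30, 0.0856),
       (32, 0.0116), (33, 0.1250), (34, 0.3866), (36, 0.0023),
       (47, 0.0023)]

def moments(comp):
    """Return <Z>, impurity parameter Q, and <Z^(5/3)>."""
    z = np.array([zi for zi, _ in comp], dtype=float)
    y = np.array([yi for _, yi in comp])
    y = y / y.sum()                    # normalize the abundances
    z_avg = (y * z).sum()
    q = (y * z**2).sum() - z_avg**2    # Q = <Z^2> - <Z>^2
    return z_avg, q, (y * z**(5.0 / 3)).sum()

print(moments(MIX))   # ~(29.30, 38.9, 285.8), matching Table III
\end{verbatim}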
\section{Acknowledgments} We thank Dobrin Bossev, Jeremy Heyl, Gerardo Ortiz, Joerg Rottler, and Andrew Steiner for helpful discussions and acknowledge the hospitality of the Pacific Institute of Theoretical Physics where this work was started. This work was supported in part by DOE grant DE-FG02-87ER40365 and by Shared University Research grants from IBM, Inc. to Indiana University. Support for this work was also provided by the National Aeronautics and Space Administration through Chandra Award Number TM7-8003X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060.
\section{Introduction} The Haldane--Shastry (HS) spin chain describes $N$ spins equally spaced on a circle with an interaction inversely proportional to the square of their chord distance~\cite{Ha88,Sh88}. The original motivation for studying this model is the fact that it possesses an exact Jastrow-product ground state, which coincides with the $U\to\infty$ limit of Gutzwiller's variational wave function for the Hubbard model~\cite{Gu63,GV87,GJR87}, and also with the one-dimensional version of the resonating valence bond state introduced by Anderson~\cite{ABZH87}. Since its very introduction, the HS spin chain has been extensively studied as a completely integrable model~\cite{FM93} solvable by the asymptotic Bethe ansatz~\cite{Ha91,Ka92,HH93}, whose spinon excitations provide a simple example of a system obeying fractional statistics~\cite{Ha91b}. The energy spectrum of the HS Hamiltonian with spin $1/2$ was partially computed in the original papers of Haldane and Shastry. In a subsequent publication~\cite{HHTBP92}, Haldane et al.~empirically found a complete description of the spectrum for arbitrary spin, and explained its highly degenerate character by the symmetry of the model under the Yangian algebra ${\mathcal Y}(\mathrm{sl}_m)$. These results were rigorously established in Ref.~\cite{BGHP93} by explicitly constructing a transfer matrix in terms of the Dunkl operators~\cite{Du89,Po92} of the trigonometric Sutherland dynamical model~\cite{Su71,Su72}. In this approach, the spectrum is obtained by considering all possible \emph{motifs} $\delta\equiv(0\mspace{1mu}\delta_1\dots\delta_{N-1}\mspace{1mu} 0)$, where each $\delta_j$ is either $0$ or $1$ and the maximum number of consecutive $1$'s is $m-1$. Indeed, the energy associated with a motif $\delta$ is given by the compact formula \begin{equation}\label{Ep} E_{\mathrm{HS}}(\delta)=\sum_{j=1}^{N-1}\delta_j\mspace{1mu} j(j-N)\,. \end{equation} The degeneracy of a level $E_\mathrm{HS}$ is obtained by summing the degeneracies corresponding to all the motifs $\delta$ such that $E_\mathrm{HS}(\delta)=E_\mathrm{HS}$. Although there is a well-defined algorithm for computing the degeneracy of each motif, in practice the computation becomes quite involved except for $m=2$. It is therefore difficult to derive in this way an exact expression for the partition function valid for arbitrary values of $N$ and $m$. Perhaps as a consequence of this fact, little attention has been paid in the literature to the global properties of the spectrum of the HS chain. Some authors~\cite{HB00,HW03} have suggested that the main obstacle in computing the partition function of the HS chain in closed form is the fact that the dispersion relation~\eqref{Ep} is nonlinear in $j$, in contrast with the Polychronakos rational chain~\cite{Fr93,Po93}. In a recent paper~\cite{EFGR05}, however, the partition function of the trigonometric HS spin chain of $BC_N$ type has been exactly computed applying what is known as Polychronakos's \emph{freezing trick}~\cite{Po94}, notwithstanding the fact that these chains have a nonlinear dispersion relation similar to~\eqref{Ep}. In fact, we shall prove in what follows that the partition function of the chain~\eqref{HS} can also be computed using the freezing trick. {}From the partition function it is straightforward to generate the spectrum of the HS chain for a wide range of values of $N$ and $m$, and thus study global properties thereof such as the level density or the distribution of the spacing between consecutive levels. 
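As a concrete illustration of the motif description, the following short Python sketch (ours) enumerates the admissible motifs by brute force and lists the distinct energies~\eqref{Ep} for small $N$ and $m$; the degeneracies, as noted above, require a separate counting algorithm and are not produced here.
\begin{verbatim}
from itertools import product

def hs_levels(N, m):
    """Distinct levels of the HS chain from the motif rule.

    A motif (0 d_1 ... d_{N-1} 0) is admissible when it has at most
    m-1 consecutive 1's; its energy is sum_j d_j * j * (j - N).
    """
    levels = set()
    for d in product((0, 1), repeat=N - 1):
        if '1' * m in ''.join(map(str, d)):   # a run of m ones: forbidden
            continue
        levels.add(sum(dj * j * (j - N) for j, dj in enumerate(d, 1)))
    return sorted(levels)

print(hs_levels(4, 2))   # [-6, -4, -3, 0] for the spin-1/2 chain, N=4
\end{verbatim}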
\section{Partition function} For convenience, we shall take the Hamiltonian of the (antiferromagnetic) Haldane--Shastry spin chain as \begin{equation}\label{HS} H=\frac12\sum_{i<j}\sin(\xi_i-\xi_j)^{-2}(1+S_{ij})\,, \end{equation} where $\xi_i=i\pi/N$ and $S_{ij}$ is the spin permutation operator of particles $i$ and $j$. Here and throughout the paper all sums and products run from $1$ to $N$ unless otherwise specified. The Hamiltonian of the original HS spin chain is given by $H_{\mathrm{HS}}=H-E_{\mathrm{max}}$, where \begin{equation}\label{Emaxdef} E_{\mathrm{max}}\equiv\sum_{i<j}\sin(\xi_i-\xi_j)^{-2} \end{equation} is the highest energy of $H$. In order to apply the freezing trick, we need to introduce the Sutherland spin model \begin{equation} \label{Hstar} H^*=-\sum_i \partial_{x_i}^2+ a\,\sum_{i\neq j}\sin(x_i-x_j)^{-2}\,(a+S_{ij})\,, \end{equation} and its scalar version \[ H_0=-\sum_i \partial_{x_i}^2+ a(a-1)\,\sum_{i\neq j}\sin(x_i-x_j)^{-2}\,. \] We thus have \begin{equation} \label{h} H^*=H_0+4a\mathsf{H}\,, \end{equation} where $\mathsf{H}$ is obtained from $H$ by the replacement $\xi_i\to x_i$. The freezing trick is based on the fact that for $a\to\infty$ the particles ``freeze'' at the equilibrium positions of the scalar part of the potential in $H^*$, which are simply the lattice points of the chain~\eqref{HS}. In this limit, the spin degrees of freedom decouple from the dynamical ones, so that by Eq.~\eqref{h} the energies of the dynamical spin model are approximately given by~\cite{EFGR05} \begin{equation}\label{Eij} E^*_{ij}\simeq E_{0,i}+4a\mspace{1mu} E_j\,, \end{equation} where $E_{0,i}$ and $E_j$ are \emph{any} two levels of $H_0$ and $H$. Hence the partition functions $Z$, $Z^*$, and $Z_0$ of $H$, $H^*$, and $H_0$, respectively, satisfy the approximate equality \[ Z^*(T)\simeq Z_0(T)Z\big({\textstyle\frac T{4a}}\big)\,, \qquad a\gg1\,. \] The latter equation leads to the \emph{exact} formula \begin{equation} \label{Z} Z(T)=\lim_{a\to\infty}\frac{Z^*(4aT)}{Z_0(4aT)}\,, \end{equation} which we will use to compute the partition function of the chain~\eqref{HS} in closed form. In order to evaluate the RHS of~\eqref{Z}, we need to compute the spectra of $H^*$ and of its scalar limit $H_0$. These spectra can be obtained in a unified way by considering the scalar differential-difference operator \begin{equation}\label{BH} \,\overline{\!H}{}=-\sum_i \partial_{x_i}^2+ a\,\sum_{i\neq j}\sin(x_i-x_j)^{-2}\,(a-P_{ij})\,, \end{equation} where $P_{ij}$ permutes the coordinates $i$ and $j$. The operator $\,\overline{\!H}{}$ is represented by an upper triangular matrix in a (non-orthonormal) basis whose elements are of the form \begin{equation}\label{phis} \phi_{\mathbf{p}}(\mathbf{x})=\mathrm{e}^{2\mathrm{i}\mathbf{p}\cdot\mathbf{x}}\prod_{i<j}\sin^a(x_i-x_j)\,, \end{equation} where the vector $\mathbf{p}=(p_1,\dots,p_N)\in{\mathbb R}^N$ is such that the differences $p_i-p_{i+1}$, $1\leq i\leq N-1$, are integers. The basis elements~\eqref{phis} should be ordered in a suitable way that we shall now describe. We shall say that a vector $\hat\mathbf{p}=(\hat{p}_1,\dots,\hat{p}_N)$ is \emph{nonincreasing} if $\hat{p}_{i+1}\leq\hat{p}_i$ for $i=1,\dots,N-1$. Given two nonincreasing vectors $\hat\mathbf{p}$ and $\hat\mathbf{p}'$, we shall write $\hat\mathbf{p}\prec\hat\mathbf{p}'$ if $\hat{p}_1-\hat{p}_1'=\cdots=\hat{p}_{i-1}-\hat{p}_{i-1}'=0$ and $\hat{p}_i<\hat{p}_i'$. 
Finally, we say that the basis element $\phi_{\mathbf{p}}$ precedes $\phi_{\mathbf{p}'}$ if $\hat\mathbf{p}\prec\hat\mathbf{p}'$, where $\hat\mathbf{p}$ and $\hat\mathbf{p}'$ are the unique nonincreasing vectors obtained from $\mathbf{p}$ and $\mathbf{p}'$ by reordering their components. It can then be shown that the matrix of $\,\overline{\!H}{}$ in the basis $\{\phi_\mathbf{p}\}$ with the order just defined is indeed upper triangular, with diagonal elements~\cite{BGHP93,Ba96} \begin{equation}\label{BE} \,\overline{\!E}{}(\mathbf{p})=\sum_i \big(2\hat{p}_i+a(N+1-2i)\big)^2\,. \end{equation} We shall now see how the spectrum of $H^*$ follows easily from that of $\,\overline{\!H}{}$. To this end, let us introduce the total antisymmetrizer $\Lambda$ with respect to simultaneous permutations of the spatial and spin coordinates. We can construct a (non-orthonormal) basis of the Hilbert space of the Hamiltonian $H^*$ with states of the form \begin{equation}\label{psis} \psi_{\mathbf{p},\mathbf{s}}(\mathbf{x})=\Lambda\big(\phi_\mathbf{p}(\mathbf{x})\ket\mathbf{s}\big)\,, \end{equation} where $\ket\mathbf{s}\equiv\ket{s_1,\dots,s_N}$ is an element of the spin basis and the vector $\mathbf{p}$ satisfies the following conditions:\smallskip {\leftskip.8cm\parindent=0pt% \cond The differences $n_i\equiv p_i-p_{i+1}$, $1\leq i\leq N-1$, are nonnegative integers. \cond At most $m$ components of $\mathbf{p}$ can be equal.\par \cond The total momentum vanishes, i.e., $\sum_i p_i=0$.\smallskip } \ni The first two conditions are a direct consequence of the antisymmetric nature of the states~\eqref{psis}. The last condition reflects the fact that, since $H^*$ is translationally invariant, we can work in the center of mass frame. The basis states $\psi_{\mathbf{p},\mathbf{s}}$ should be ordered in such a way that $\psi_{\mathbf{p},\mathbf{s}}$ precedes $\psi_{\mathbf{p}'\!,\mathbf{s}'}$ if $\mathbf{p}\prec\mathbf{p}'$ (note that the vectors $\mathbf{p}$ and $\mathbf{p}'$ are nonincreasing by condition i)). {}From the elementary relation $P_{ij}\Lambda=-S_{ij}\Lambda$ and the fact that $\,\overline{\!H}{}$ clearly commutes with $\Lambda$, it follows that \begin{align*} H^*\psi_{\mathbf{p},\mathbf{s}}&=\,\overline{\!H}{}\psi_{\mathbf{p},\mathbf{s}} =\Lambda\big((\,\overline{\!H}{}\phi_\mathbf{p})\ket\mathbf{s}\big)\\[1mm] &=\Lambda\Big(\,\overline{\!E}{}(\mathbf{p})\phi_\mathbf{p}\ket\mathbf{s}+\sum_{\mathbf{p}'\prec\mathbf{p}}c_{\mathbf{p}\bp'}\phi_{\mathbf{p}'}\ket\mathbf{s}\Big)\\ &=\,\overline{\!E}{}(\mathbf{p})\psi_{\mathbf{p},\mathbf{s}}+\sum_{\mathbf{p}'\prec\mathbf{p}}c_{\mathbf{p}\bp'}\psi_{\mathbf{p}'\!,\mathbf{s}}\,. \end{align*} Hence the Hamiltonian $H^*$ of the Sutherland spin model is upper triangular in the basis $\{\psi_{\mathbf{p},\mathbf{s}}\}$, with diagonal elements \begin{equation}\label{E*} E^*(\mathbf{p},\mathbf{s})=\sum_i \big(2p_i+a(N+1-2i)\big)^2\,, \end{equation} where $\mathbf{p}$ satisfies conditions i)--iii) above. The spectrum of $H_0$ can be derived by a similar argument, noting that $H_0=\,\overline{\!H}{}$ on scalar symmetric states of the form $\psi_\mathbf{p}=\Lambda_{\mathrm s}\phi_\mathbf{p}$, where $\Lambda_{\mathrm s}$ is the symmetrizer with respect to the spatial coordinates and $\mathbf{p}$ satisfies only conditions i) and iii) above. Hence~\cite{Su72} the eigenvalues $E_0(\mathbf{p})$ of $H_0$ are also given by the RHS of~\eqref{E*}, where now $\mathbf{p}$ is not restricted by condition ii).
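For small systems these spectra can be generated directly. The following Python sketch (ours; the truncation parameter \texttt{nmax} is for illustration only) enumerates the vectors $\mathbf{p}$ satisfying conditions i)--iii) through the nonnegative differences $n_i$ and evaluates the diagonal energies~\eqref{E*}:
\begin{verbatim}
import numpy as np
from itertools import product

def spin_sutherland_levels(N, m, a, nmax=2):
    """Distinct diagonal energies E*(p) for small quantum numbers.

    p is parametrized by the nonnegative integer differences
    n_i = p_i - p_{i+1}; condition iii) fixes the overall shift,
    and condition ii) limits equal components to at most m.
    """
    levels = set()
    for n in product(range(nmax + 1), repeat=N - 1):
        tails = [sum(n[i:]) for i in range(N - 1)] + [0]  # nonincreasing
        if max(tails.count(t) for t in set(tails)) > m:   # condition ii)
            continue
        p = np.array(tails, float) - np.mean(tails)       # condition iii)
        i = np.arange(1, N + 1)
        levels.add(round(float(np.sum((2*p + a*(N + 1 - 2*i))**2)), 6))
    return sorted(levels)

print(spin_sutherland_levels(N=3, m=2, a=10.0)[:5])
\end{verbatim}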
{}From the above results it is easy to compute the partition functions $Z_0(4aT)$ and $Z^*(4aT)$ in the limit $a\to\infty$. For the computation of $Z_0(4aT)$, we start by expanding the eigenvalues of $H_0$ in powers of $a$ as \begin{equation}\label{E0a} E_0(\mathbf{p})=a^2E^0+4a\sum_i(N+1-2i)p_i+\mathrm{O}(1)\,, \end{equation} where \[ E^0=\sum_i(N+1-2i)^2=\frac13\,N(N^2-1). \] Since $E^0$ does not depend on $\mathbf{p}$, and therefore contributes the same overall constant factor to both $Z_0$ and $Z^*$, we shall henceforth drop the first term in Eq.~\eqref{E0a}. With this convention, for $a\gg1$ the denominator in Eq.~\eqref{Z} is given by \[ Z_0(4aT)\simeq \sum_\mathbf{p} q^{\sum_i p_i(N+1-2i)}\,, \] where $q=\mathrm{e}^{-1/(k_{\mathrm{B}}T)}$ and the outer sum runs over all vectors $\mathbf{p}$ satisfying conditions i) and iii) above. Setting $n_N\equiv p_N$ we have \[ \sum_i p_i(N+1-2i)=\sum_{j\geq i}n_j(N+1-2i)=\sum_{j=1}^{N-1}j(N-j)n_j\,. \] Taking into account that $n_N$ is determined by the remaining $n_i$'s by condition iii), we finally obtain \begin{multline}\label{Z0} Z_0(4aT)\simeq \sum_{n_1,\dots,n_{N-1}\geq0} \,\prod_{j=1}^{N-1}q^{j(N-j)n_j}\\ =\prod_{j=1}^{N-1}\Big(1-q^{j(N-j)}\Big)^{-1}\,. \end{multline} In order to compute the partition function $Z^*(4aT)$ for $a\gg 1$, it is convenient to represent the vector $\mathbf{p}$ labeling the energies~\eqref{E*} of $H^*$ as \begin{equation} \label{nm} \mathbf{p} = \big(\overbrace{\vphantom{1}\rho_1,\dots,\rho_1}^{k_1},\dots, \overbrace{\vphantom{1}\rho_r,\dots,\rho_r}^{k_r}\big). \end{equation} Note that $\sum\limits_{i=1}^r k_i=N$, so that $\mathbf{k}=(k_1,\dots,k_r)$ belongs to the set ${\mathcal P}_N$ of partitions of $N$ (taking order into account). Calling \begin{equation}\label{Ki} K_i=\sum\limits_{j=1}^i k_j, \end{equation} and dropping again the term $a^2E^0$, in the large $a$ limit Eq.~\eqref{E*} becomes \[ E^*(\mathbf{p},\mathbf{s})\simeq 4a\sum_{i=1}^r \rho_i\sum_{j=K_{i-1}+1}^{K_i}(N+1-2j)=4a\sum_{i=1}^r \rho_il_i\,, \] where \begin{equation}\label{li} l_i=k_i(N-2K_i+k_i). \end{equation} Since $E^*(\mathbf{p},\mathbf{s})$ does not depend on the spin coordinates~$\mathbf{s}$, the degeneracy associated with this eigenvalue is given by \[ d(\mathbf{k})=\prod\limits_{i=1}^r\binom m{k_i}, \] so that $d(\mathbf{k})=0$ if $k_i>m$ for some $i$, in accordance with condition ii). Hence \begin{equation}\label{Z*1} Z^*(4aT)\simeq \sum_{\mathbf{k}\in{\mathcal P}_N}d(\mathbf{k})\! \sum_{\substack{\rho_1>\cdots>\rho_r\\[2pt]k_1\rho_1+\cdots+k_r\rho_r=0}}\! q^{\,\sum\limits_{i=1}^r\rho_il_i}. \end{equation} Calling $\nu_i=\rho_{i}-\rho_{i+1}\in{\mathbb N}$, $i=1,\dots,r-1$, and $\nu_r=\rho_r$, we have \begin{equation}\label{nuili} \sum_{i=1}^r\rho_i l_i=\sum_{1\leq i\leq j\leq r}l_i\nu_j=\sum_{j=1}^r\nu_jN_j\,, \end{equation} where \[ N_j=\sum_{i=1}^jl_i=K_j(N-K_j)\,, \] by Eq.~\eqref{li}. Note, in particular, that the numbers $N_j$ depend on $\mathbf{k}$ through the partial sums~\eqref{Ki}. Substituting~\eqref{nuili} into~\eqref{Z*1}, and taking into account that $K_r=N$ implies $N_r=0$, we obtain \begin{multline}\label{Z*} Z^*(4aT)\simeq \sum_{\mathbf{k}\in{\mathcal P}_N}d(\mathbf{k}) \sum_{\nu_1,\dots,\nu_{r-1}>0}\prod_{j=1}^{r-1} q^{N_j\nu_j}\\ =\sum_{\mathbf{k}\in{\mathcal P}_N}d(\mathbf{k})\prod_{j=1}^{r-1}\frac{q^{N_j}}{1-q^{N_j}}\,. 
\end{multline} Combining Eqs.~\eqref{Z0} and~\eqref{Z*}, the partition function $Z$ can be expressed in closed form as \begin{equation}\label{Z1} Z(T)=\prod_{j=1}^{N-1}\Big(1-q^{j(N-j)}\Big) \sum_{\mathbf{k}\in{\mathcal P}_N}d(\mathbf{k}) \prod_{i=1}^{r-1}\frac{q^{N_i}}{1-q^{N_i}}\,. \end{equation} Note that, by definition, the partial sums $K_i$ are natural numbers satisfying $1\leq K_1<\cdots<K_{r-1}\leq N-1$. Denoting by $K'_1<\dots<K'_{N-r}$ the elements of the set \[ \{1,\dots,N-1\}-\{K_1,\dots,K_{r-1}\}\,, \] and setting \[ N'_i=K'_i(N-K'_i)\,, \] we have \[ \prod_{j=1}^{N-1}\Big(1-q^{j(N-j)}\Big) =\prod_{i=1}^{r-1}\big(1-q^{N_i}\big)\prod_{i=1}^{N-r}\big(1-q^{N'_i}\big)\,. \] This identity and Eq.~\eqref{Z1} yield the following remarkable formula for the partition function of the spin chain~\eqref{HS}: \begin{equation}\label{Zfinal} Z(T)=\sum_{\mathbf{k}\in{\mathcal P}_N}\prod_{i=1}^r\binom{m}{k_i}\, q^{\,\sum\limits_{i=1}^{r-1}N_i} \prod_{i=1}^{N-r}\big(1-q^{N'_i}\big). \end{equation} {}From the previous formula it follows that the energy levels of $H$ are of the form \begin{equation}\label{Edep} E(\delta')=\sum_{j=1}^{N-1}\delta'_{\!j}\, j(N-j)\,, \end{equation} where $\delta'_{\!j}=1$ if $j$ is one of the partial sums $\widetilde{K}_i$ corresponding to a partition $(\tilde{k}_1,\dots,\tilde{k}_r)\in{\mathcal P}_N$ with $\tilde{k}_l\leq m$ for all $l$ (by condition ii)), and $\delta'_{\!j}=0$ otherwise. In order to relate Eq.~\eqref{Edep} with the known expression~\eqref{Ep} for the energies of the original HS Hamiltonian, we need to evaluate the maximum energy $E_\mathrm{max}$. {}From Eq.~\eqref{Emaxdef} we have \begin{align*} E_\mathrm{max}&=\sum_{j=1}^{N-1}(N-j)\csc^2\Big(\frac{j\pi}N\Big)\\ &=\sum_{j=1}^{N-1}j\,\csc^2\bigg(\frac{(N-j)\pi}N\bigg) =\sum_{j=1}^{N-1}j\,\csc^2\Big(\frac{j\pi}N\Big). \end{align*} Hence \begin{equation}\label{Emax} E_\mathrm{max}=\frac N2\sum_{j=1}^{N-1}\csc^2\Big(\frac{j\pi}N\Big)=\frac N6(N^2-1)\,, \end{equation} where the last sum is evaluated in Ref.~\cite{CP78}. Since the RHS of~\eqref{Emax} coincides with the sum $\sum_{j=1}^{N-1}j(N-j)$, Eq.~\eqref{Edep} implies Eq.~\eqref{Ep} with $\delta_j=1-\delta'_{\!j}$. In particular, from the latter relation between $\delta$ and $\delta'$ it follows that $\delta$ is a motif with no more than $m-1$ consecutive $1$'s. \section{Level density and spacings distribution} The RHS of Eq.~\eqref{Zfinal} is a polynomial in $q$ whose evaluation with a symbolic algebra package is straightforward once $N$ and $m$ are fixed. In this way we have been able to compute the spectrum of the chain~\eqref{HS} for relatively large values of $N$ and $m$, for which the usual motif approach becomes inefficient due to the difficulty of computing the degeneracies. {}From the analysis of the spectral data thus obtained one can infer several global properties of the spectrum that we shall now discuss. In the first place, it is apparent that for $N\gg 1$ the level density is Gaussian to a very high degree of accuracy, as in the HS spin chain of $\mathrm{BC}_N$ type studied in Ref.~\cite{EFGR05}. In other words, for large $N$ the cumulative level density \[ F(E)=m^{-N}\sum\limits_{i;E_i\leq E}d_i \] is approximately given by \[ G(E)=\frac12\bigg[1+\operatorname{erf}\bigg(\frac{E-\mu}{\sqrt 2\sigma}\,\bigg)\bigg], \] where $d_i$ is the degeneracy of the energy $E_i$, and $\mu$ and $\sigma$ are respectively the mean and the standard deviation of the energy. 
This can already be seen, for instance, in the case $N=15$ and $m=2$ presented in Fig.~\ref{levden}. The agreement between $F$ and $G$ rapidly improves as $N$ and/or $m$ grow: for $m=2$ the mean square error decreases from $5.2\times 10^{-5}$ for $N=15$ to $5.6\times 10^{-6}$ for $N=20$, while for $m=3$ it decreases from $2.6\times 10^{-5}$ for $N=15$ to $2.6\times 10^{-6}$ for $N=20$. \begin{figure}[h] \psfrag{F}[Bc][Bc][1][0]{\begin{footnotesize}$F(E),\,G(E)$\end{footnotesize}} \psfrag{E}{\begin{footnotesize}$E$\end{footnotesize}} \includegraphics[width=8cm]{levden.eps} \caption{Cumulative distribution functions $F(E)$ (at its discontinuity points) and $G(E)$ (continuous line) for $N=15$ and $m=2$.\label{levden}} \end{figure} Since, by the previous discussion, for large $N$ the level density is characterized by $\mu$ and $\sigma$ through the Gaussian law, it is of interest to compute these parameters in closed form as functions of $N$ and $m$. In the first place, using the identity $\operatorname{tr} S_{ij}=m^{N-1}$ and Eqs.~\eqref{Emaxdef} and~\eqref{Emax}, we obtain \[ \mu=\frac{\operatorname{tr} H}{m^N}=\frac{m+1}{2m}\sum_{i<j}\csc^2(\xi_i-\xi_j) =\frac{m+1}{12m}N(N^2-1). \] Similarly, the formula \[ \operatorname{tr}(S_{ij}S_{kl})=m^{N-2+2\delta_{ik}\delta_{jl}+2\delta_{il}\delta_{jk}} \] yields \begin{align*} \sigma^2&=\frac{\operatorname{tr}(H^2)}{m^N}-\frac{(\operatorname{tr} H)^2}{m^{2N}} =\frac{m^2-1}{4m^2}\sum_{i<j}\csc^4(\xi_i-\xi_j)\\ &=\frac{(m^2-1)N}{8m^2}\sum_{j=1}^{N-1}\csc^4\xi_j\\ &=\frac{m^2-1}{360\mspace{1mu} m^2}\,N(N^2-1)(N^2+11) \end{align*} (cf.~Ref.~\cite{CP78}~for the last equality). The level density is also Gaussian as $N\to\infty$ for the so-called ``embedded Gaussian ensemble'' (EGOE)~\cite{MF75} in Random Matrix Theory. Note, however, that in the EGOE this property is valid provided that the number of one-particle states tends to infinity faster than $N$. This additional condition clearly does not hold in our case, since the number of one-particle states (i.e., $m$) is fixed. Another characteristic feature of the EGOE is the fact that the nearest-neighbor spacing distribution $p(s)$ is approximately given by Wigner's law \[ p(s)=(\pi/2)\mspace{1mu} s\exp(-\pi s^2/4), \] as for the classical Gaussian orthogonal ensemble~\cite{Ko01}. On the other hand, since the HS spin chain is integrable, one would expect that its nearest-neighbor spacing distribution obey Poisson's law $p(s)=\mathrm{e}^{-s}$, according to the conjecture of Berry and Tabor for a generic integrable model~\cite{BT77}. This conjecture has been verified for a variety of integrable many-body problems, such as the Heisenberg chain, the $t\mspace{1mu}$-$J$ model, the Hubbard model~\cite{PZBMM93}, and the chiral Potts model~\cite{AMV02}. One of the main results of this paper is the fact that the nearest-neighbor spacing distribution of the HS chain deviates substantially from both Wigner's and Poisson's laws. In order to correctly take into account the effect of the local level density in the study of $p(s)$, one must first apply to the ``raw'' spectrum the so-called \emph{unfolding} mapping~\cite{Ha01}. This mapping is defined by decomposing the cumulative level density $F(E)$ as the sum of a fluctuating part $F_{\mathrm{f{}l}}(E)$ and a continuous part $\xi(E)$, which is then used to transform each energy $E_i$, $i=1,\dots,n$, into an unfolded energy $\xi_i=\xi(E_i)$.
The function $p(s)$ is defined as the density of the normalized spacings $s_i=(\xi_{i+1}-\xi_i)/\Delta$, where $\Delta=(\xi_{n}-\xi_1)/(n-1)$ is the mean spacing of the unfolded energies. By the previous discussion, in our case we can take the unfolding mapping $\xi(E)$ as the cumulative Gaussian distribution $G(E)$ with parameters $\mu$ and $\sigma$ given by the previous formulas. As for the level density, to compare the discrete distribution function $p(s)$ with a continuous distribution it is more convenient to work with the cumulative spacing distribution $P(s)=\int_0^s p(x)\mspace{1mu}\mathrm{d} x$. Our computations for a wide range of values of $N$ and $m$ show that $P(s)$ is essentially different from either Poisson's or Wigner's law, since its slope tends to infinity both as $s\to 0$ and $s\to s_{\mathrm{max}}$, where $s_{\mathrm{max}}$ is the largest spacing. In fact, it turns out that in all cases $P(s)$ is well approximated by a cumulative distribution of the simple form \begin{equation}\label{tP} \widetilde{P}(s)=t^\alpha\big[1-\gamma(1-t)^\beta\big], \end{equation} where $t=s/s_{\mathrm{max}}$ and $0<\alpha,\beta<1$. The parameter $\gamma$ is fixed by requiring that the average spacing be equal to~$1$, with the result \begin{equation}\label{ga} \gamma=\Big(\frac 1{s_{\mathrm{max}}}-\frac\alpha{\alpha+1}\Big)\Big/B(\alpha+1,\beta+1), \end{equation} where $B$ is Euler's Beta function. For instance, for $N=26$ and $m=2$ the largest spacing is $s_{\mathrm{max}}=3.06$, and the best least-squares fit parameters $\alpha$ and $\beta$ are respectively $0.31$ and $0.23$, with a mean square error of $4.1\times 10^{-4}$ (see Fig.~\ref{spacings}). \begin{figure}[h] \psfrag{P}[Bc][Bc][1][0]{\begin{footnotesize}$P(s)$\end{footnotesize}} \psfrag{s}{\begin{footnotesize}$s$\end{footnotesize}} \includegraphics[width=8cm]{spacings.eps} \caption{Cumulative spacing distribution $P(s)$ and its approximation $\widetilde{P}(s)$ (grey line) for $N=26$ and $m=2$. For convenience, we have also represented Poisson's (long dashes) and Wigner's (short dashes) cumulative distributions.\label{spacings}} \end{figure} For a fixed value of $m$, the parameters $\alpha$, $\beta$ and $s_{\mathrm{max}}$ vary smoothly with $N\gtrsim 15$, provided that $N$ has a fixed parity\footnote{% Our computations show that the number of levels, and hence of different spacings, increases monotonically with $N$ of a fixed parity, but decreases when $N$ jumps from $2j$ to $2j+1$.}. For instance, in Fig.~\ref{albesmax} we plot these parameters for $m=2$ and odd $N$ running from $15$ to $27$ (the plot for even $N$ is very similar). In all cases, the fit of the distribution \eqref{tP} to the data is quite good, the mean square error never exceeding $7.4\times 10^{-4}$. We have performed a similar analysis for $m=3$ and $15\leq N\leq22$, obtaining totally analogous results. \begin{figure}[h] \psfrag{N}{\begin{footnotesize}$N$\end{footnotesize}} \includegraphics[width=8cm]{albesmax.eps} \caption{Values of $\alpha$ (box), $\beta$ (rhombus), and $s_{\mathrm{max}}/10$ (cross) for $m=2$ and odd $N$.\label{albesmax}} \end{figure} The divergence of the nearest-neighbor spacing distribution $p(s)$ for small $s$ is probably related to the flatness of the tail of the Gaussian distribution. It could also be argued that, since the Haldane--Shastry chain is completely integrable, the full spectrum is a superposition of the spectra of the Hamiltonian restricted to subspaces of common eigenfunctions of a suitable family of commuting first integrals. 
It is well known, in this respect, that a superposition of a large number of unrelated spectra leads to a sharp increase in the number of very small spacings~\cite{RP60}. On the other hand, we do not have a clear explanation of the fact that $p(s)$ also diverges when $s$ approaches the largest spacing $s_{\mathrm{max}}$. This fact, which certainly deserves further study, could be a characteristic property of all spin chains of Haldane--Shastry type. Our results also imply that Berry and Tabor's conjecture does not hold for the HS spin chain, even if we restrict ourselves to a subspace of the whole Hilbert space with well-defined quantum numbers. Indeed, the nearest-neighbor spacing distribution of the superposition of even a small number of spectra with Poisson-distributed spacings must also be of Poisson type~\cite{RP60}. As an illustration of these assertions, we present in Fig.~\ref{fixspin} a plot of the cumulative spacing distribution corresponding to the restriction of the Hamiltonian~\eqref{HS} to the subspace with zero total spin and odd parity for $N=13$ and $m=2$, obtained by a numerical computation of the spectrum of $H$ restricted to this subspace. It is apparent from this plot that $P(s)$ is neither Poissonian nor of Wigner type, and that it is well approximated by a function of the form~\eqref{tP} for spacings $s\gtrsim0.25$. It is also clearly noticeable that $p(s)$ tends to infinity as $s$ approaches the maximum spacing $s_{\mathrm{max}}\simeq1.73$. \begin{figure}[t] \psfrag{P}[Bc][Bc][1][0]{\begin{footnotesize}$P(s)$\end{footnotesize}} \psfrag{s}{\begin{footnotesize}$s$\end{footnotesize}} \includegraphics[width=8cm]{fixspin.eps} \caption{Cumulative spacing distribution $P(s)$ (solid dots) for states with zero total spin and odd parity when $N=13$ and $m=2$. For comparison purposes, we have represented Poisson's (long dashes) and Wigner's (short dashes) cumulative distributions.% \label{fixspin}} \end{figure} The non-Poissonian behavior of the spacing distribution could in principle be due to finite-size effects \cite{KD05}. Although this possibility should be explored in more detail, our data clearly show that the cumulative spacing distribution $P(s)$ is of the form~\eqref{tP} for a wide range of values of $N\leq27$. Note, finally, that an interesting integrable model not obeying the Berry--Tabor conjecture has been recently constructed in Ref.~\cite{RDGR04}. In contrast with the HS spin chain, the latter model is a non-generic element of a class depending on a large number of parameters, and involves many-body interactions. \medskip \begin{acknowledgments} This work was partially supported by the DGI under grant No.~BFM2002--02646. The authors would like to thank J.~Retamosa for several helpful discussions. \vspace*{.5cm} \end{acknowledgments}
\section{Introduction} \label{sec:intro} \input{sections/intro.tex} \section{Why Now? The Rise of Full Stack Bottlenecks in ML} \label{sec:why-now} \input{sections/why-now.tex} \section{SysML: Building a New Conference at the Intersection of Systems + Machine Learning} \label{sec:sysml} \input{sections/sysml.tex} \section{Conclusion} \label{sec:conclusion} \input{sections/conclusion.tex}
\section{Introduction} \label{introduction} Topology is the study of shape apart from angles and distance. For example, a triangle is topologically equivalent to a circle, but distinct from a figure-eight. Combinatorial topology is the study of combinations of simple pieces; for example, collections of line segments connected at their endpoints, surfaces constructed by connecting triangles along their edges, or higher dimensional manifolds created by connecting tetrahedra or their higher dimensional analogs, called simplexes, along their faces. In a similar way, neural networks are constructed of simple components (neurons), and it is the global structure encoded in the weights connecting the neurons that defines the learned information. In this paper we compute the set separating the classes learned by a neural network as a set of line segments connected at their endpoints, and explicitly compute topological invariants for this set called the homology groups. The example is simple for the sake of exposition, but the general principle that the weights and architecture of a neural network can be used to compute topological invariants is very interesting. An important use of topology that motivated its development is the ability to understand behavior in dynamical systems (for example, flows or solutions to differential equations of a vector field) even when the computation of analytic solutions is unreasonably complicated or impossible. There is a rich variety of mathematical machinery that has been developed along this generally calculus-based approach. Morse theory describes the topology of a manifold based on a gradient function, and index theory gives relationships between the topology of manifolds and vector fields. For example, the 'hairy ball' theorem says that any vector field on a 2-dimensional sphere will have locations where the vector length is zero, and in addition the sum of the indexes of these locations will equal the Euler characteristic of the sphere, which is 2. The hairy ball theorem is an example of the Poincaré–Hopf index theorem (the sum of the indexes of a vector field on a manifold equals the Euler characteristic of the manifold), and more generally of Conley Index Theory and The Fundamental Theorem of Dynamical Systems~\cite{norton1995fundamental}. In this paper we consider the vector field on the input space $X$ for a neural network defined by the gradient of the probability function for a class $G$, specifically $\nabla P_G(x)$ for $x\in X$, where $P_G(x)$ is the probability for class $G$ assigned to input $x$. One can also study the gradient field $\nabla P(x)$, where $P(x)=\textrm{max}\{P_G(x)\}$ and the max is taken over all classes $G$; this vector field is defined everywhere except on the separating manifolds. We construct a neural network trained on the standard MNIST dataset, explicitly compute the gradient vector field from the weights in the network, and follow the orbits of the MNIST data under the flow of this gradient vector field. It appears there are ten sinks (maxima of the probability function), and the basin of attraction for each is the classification region for its class. Interestingly, the digit image at each sink bears no resemblance to the digit it represents, an interesting connection to adversarial networks.
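To make this construction reproducible, the following minimal Python sketch (the function and parameter names are ours, and a quadratic toy probability stands in for a trained network) follows an orbit of the gradient field $\nabla P_G$ by forward Euler steps; for an actual network one would replace the finite-difference gradient with automatic differentiation and take the initial point to be an MNIST image.
\begin{verbatim}
import numpy as np

def gradient_ascent_orbit(prob, x0, steps=1000, lr=0.1, eps=1e-5):
    """Follow the flow of grad P_G(x) from x0 by forward Euler steps.

    `prob` is any callable returning the class-G probability of an
    input; the gradient is estimated by central finite differences,
    so the sketch applies to an arbitrary trained classifier.
    """
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = eps
            g[i] = (prob(x + dx) - prob(x - dx)) / (2 * eps)
        if np.linalg.norm(g) < 1e-9:   # reached a critical point (sink)
            break
        x += lr * g
    return x

# Toy example: a 2-d "probability" with a single sink at the origin.
print(gradient_ascent_orbit(lambda x: np.exp(-np.sum(x**2)), [1.0, -2.0]))
\end{verbatim}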
We suggest that, in general, a neural network learns weights, but at the same time what is being learned is probability density functions on the (often high-dimensional) input space, and the topology of these pdfs can be studied by looking at the gradient functions, with a rich mathematical theory available for connecting topology with vector fields. Topological methods have been used to study neural networks recently. For example, it has been shown that, viewing the data passing through successive layers, the topology of the classes tends to become simpler from each layer to the next, as measured through decreasing Betti numbers, computational homology, and visual inspection~\cite{naitzat2020topology}. Neural networks have been used to learn topological signatures or features in images, with layers to exploit these features for learning tasks~\cite{hofer2017deep}. Manifold geometry and topology have been used to study manifolds that approximate data clouds~\cite{hofer2017deep}. In this paper, our emphasis is on using topology and dynamical systems theory to understand the classification structures, separating manifolds, and probability density functions learned by a network and used for further prediction. While beyond the scope of this paper, it is worth noting that our two approaches, combinatorial topology built from discrete pieces of a network/manifold and vector fields studied with calculus, are connected in a deeply mathematical sense. The homology groups we compute are computed from combinations of simplices. There are theories of cohomology groups defined using integrals of vector fields and more generally differential forms on manifolds. De Rham's Theorem says that, under certain definitions for these groups and suitable assumptions, the discrete-based homology groups are isomorphic to the calculus-based cohomology groups. Hopefully this gives some sense of coherence between our two topological approaches to studying `what is learned' by a neural network. \section{Homology of Separating Manifold: A Simple Example} \label{homology} We constructed a neural network with two classes, a single hidden layer of 3 ReLU neurons, and a final layer with a single neuron for classification. The network with trained weights on a simple 2-class problem is shown in Figure~\ref{simpleNN}. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{simpleNNvert.png}} \caption{A simple neural network with trained weights.} \label{simpleNN} \end{center} \vskip -0.2in \end{figure} It is not hard then to show that the boundary between the classes, the set where $f_1^2(x_1,x_2)=0$, consists of the line segments labeled with their formulas as shown in Figure~\ref{boundaryHomology}, where $f_i^1(x_1,x_2)$ is the output of the ReLU function with the inputs and biases shown in Figure~\ref{simpleNN}. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{boundaryHomology.png}} \caption{The separating surface where the class probabilities are equal is a set of line segments which make up the hexagon. The vertices of the hexagon are labeled $a,b,c,d,e,f$. The output of the neural network is positive inside the hexagon and zero outside. For each $f_i^1$, the line where $f_i^1=0$ is shown as a bold dashed line, and the shaded side indicates the side where the output of the ReLU $f_i^1$ is increasing. The arrows show the direction of increase in output of the neural network.
Inside the center triangle, $f_1=f_2=f_3=0$ and thus the output of the network is $b_2\cdot W^2=3\times3.4=11.4$.} \label{boundaryHomology} \end{center} \vskip -0.2in \end{figure} For a definition of homology and background material, see~\cite{Hatcher2000}, but some definitions are as follows. An $n-$dimensional simplex (or just \textit{$n-$simplex}) $(n\geq 0)$ is the convex hull of $n+1$ affinely independent points (called vertices); for example a point, a line segment, a triangle, a tetrahedron, etc. A \textit{simplicial complex} is a set of simplexes such that if any two simplexes intersect, they do so along a common face (a lower dimensional sub-simplex). The separating set $S=\{(x_1,x_2)\,|\,f_1^2(x_1,x_2)=0\}$ is a simplicial complex consisting of six 1-dimensional simplexes and six 0-dimensional simplexes (points). It is a continuous (but not smooth) 1-dimensional manifold. For a simplicial complex, the group of $n-$chains is \[ C_n(X)=\{\sum_i m_i \sigma_i \,|\,m_i\in\mathbb{Z}\} \] where each $\sigma_i$ is an $n-$dimensional simplex in the complex. The boundary $\partial$ of an $n-$simplex is the sum of its $(n-1)-$dimensional faces (each the convex hull of $n$ of the vertices, which is of course itself an $(n-1)$-dimensional simplex) with the orientation inherited from the simplex. Then $\partial_k:C_k(X)\to C_{k-1}(X)$, and the $k^\textrm{th}$ homology group of $X$ is defined to be \[ H_k(X)=\text{ker}(\partial_k)/\textrm{Im}(\partial_{k+1}). \] Clearly, for the separating manifold $S$, \[ H_n(S)=0,\textrm{ for }n>1 \] since there are no $n-$chains with $n>1$. The group $C_1$ has the basis $\{\overline{ab},\overline{bc},\overline{cd},\overline{de},\overline{ef},\overline{fa}\}$ and $C_0$ has the basis $\{a,b,c,d,e,f\}$. Clearly $\partial_1(m\overline{ab}+m\overline{bc}+m\overline{cd}+m\overline{de}+m\overline{ef}+m\overline{fa})=0$ for any integer $m$, and this defines $\textrm{ker}(\partial_1)$. Then \[ H_1(S)= \text{ker}(\partial_1)/\textrm{Im}(\partial_2) \cong \mathbb{Z}/0 \cong \mathbb{Z} \] and \[ H_0(S)= C_0/\textrm{Im}(\partial_1) \cong \mathbb{Z}. \] This shows that the separating manifold (curve) has the homology groups of a circle, and thus is (homeomorphic to) a circle. While this example is trivial, it suggests the tools of computational topology (an excellent text is~\cite{Edelsbrunner2010}, while introductory topics are covered in~\cite{Basener2007}; also see~\cite{Kaczynski2004}) may be useful in determining homology for more complex situations. At minimum, it shows that the homology groups can be computed from just knowing the weights and architecture. \section{Dynamical Systems and Index Theory} A central theorem in dynamical systems and index theory is the Poincar\'e-Hopf Index theorem, which says that for a vector field $v:M\to\mathbb{R}^n$ on an $n-$dimensional manifold $M$, the sum of the indices of the fixed points (aka zeros, equilibria, or singularities) is equal to the Euler characteristic of the manifold, \[ \sum_i I(x_i) = \chi (M), \] where the index $I$ of a fixed point (or zero) of the vector field is defined by taking an isolating sphere $S$ around the fixed point; the index is equal to the degree of the map $S \to S$ defined by \[ x\mapsto v(x)/\|v(x)\|. \] Poincar\'e proved this in two dimensions, and Hopf extended it to arbitrary dimensions. Examples are shown in Figure~\ref{Index}, which is in two dimensions where the index is equal to the number of counterclockwise rotations of the vector as you travel counterclockwise around a circle isolating the fixed point.
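This rotation count is easy to verify numerically. The following short Python sketch (a minimal illustration, not code from our experiments; the three example fields are standard textbook ones) integrates the angle of the field around an isolating circle:
\begin{verbatim}
import numpy as np

def index_2d(v, center, r=0.5, n=4000):
    # Number of counterclockwise turns of v along a circle around `center`.
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x = center[0] + r * np.cos(t)
    y = center[1] + r * np.sin(t)
    vx, vy = v(x, y)
    angle = np.unwrap(np.arctan2(vy, vx))   # continuous angle of the vector
    return round((angle[-1] - angle[0]) / (2.0 * np.pi))

source = lambda x, y: (x, y)       # radial field, index +1
saddle = lambda x, y: (x, -y)      # saddle field, index -1
rotate = lambda x, y: (-y, x)      # rotation field, index +1

print(index_2d(source, (0.0, 0.0)),
      index_2d(saddle, (0.0, 0.0)),
      index_2d(rotate, (0.0, 0.0)))        # prints: 1 -1 1
\end{verbatim}
Applying the same count to a large curve enclosing several fixed points returns the sum of their indices, which is the content of the theorem.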
\begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{Index.jpg}} \caption{The index for a few different vector fields in 2-dimensions, in which case this index is equal to the number of counterclockwise rotations of the vector as you travel counterclockwise around a circle isolating the fixed point. (Figure is from~\cite{Basener2007}, used with permission.)} \label{Index} \end{center} \vskip -0.2in \end{figure} There are numerous good sources on index theory for vector fields and applications, such as~\cite{zhang2006vector, josevichpoincare, reininghaus2011combinatorial, libgober2012euler}, and work on computing and visualizing the topology of vector fields~\cite{Scheuermann1998, Reininghaus2011}. This theory was extended by Cayley and James Clerk Maxwell to study topography, and generalized to gradients of functions on manifolds by Marston Morse in what is now called Morse theory~\cite{milnor2016morse}. This line of theory culminated in Conley index theory, published by Conley in~\cite{conley1978isolated}, focusing on isolated invariant sets (not just fixed points); also see~\cite{mischaikow2002conley}. It is connected to Floer homology~\cite{salamon1990morse}. One of Conley's theorems, proving that all dynamical systems are composed of (chain recurrent) invariant sets and gradient-like sets, is known as the Fundamental Theorem of Dynamical Systems~\cite{norton1995fundamental}. The triangle in the center of Figure~\ref{boundaryHomology} is an isolated invariant set under the flow moving in the direction of increasing probability. A deep and profound concept throughout these topics, evident even in the 2-dimensional vector fields, is that discrete objects like the index of a fixed point or the Euler characteristic of a manifold, which are stable under continuous perturbations, are connected to integrals and differential constructions. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{PHThmDL.png}} \caption{Computations of the degrees of isolated invariant sets by counting rotations of the vector field around an isolating curve (topological circle), where the vector field is the gradient of the probability output of the NN for the class shown in blue. Observe that the index around the outer curve equals the sum of the indices of the smaller circles. This is the Poincar\'e Hopf Theorem applied to the probability gradient of a NN. (This is a notional depiction of a neural network for the sake of illustrating the theorem, created by taking the output of a NN and modifying it so the probabilities are close to zero around the edges of the region $[-6,6]\times[-6,6]$; arrows are added visually to illustrate the theorem.)} \label{PHThmDL} \end{center} \vskip -0.2in \end{figure} We suggest that the concepts of index theory and related topological tools can be used to study the structure of `what is learned' by a neural network. Specifically, we can apply index theory and the general theory of dynamical systems to the gradient vector field of the probability functions on the state space that are learned by the network. We do this by training a neural network on the MNIST dataset and solving explicitly for the gradient of the probability functions for the classes. The input images are $28$ by $28$ pixels (so our vector field is in $28\times28=784$-dimensional space) and we have a single hidden layer of 256 ReLU neurons and an output layer of 10 softmax classification neurons (one for each of the 0 through 9 digit classes).
The network has a $97\%$ validation accuracy, which is reasonable but well below what can be achieved by deep CNNs, which can achieve $99.77\%$ accuracy or better~\cite{ciregan2012multi}. Our input space is $\mathbb{R}^{784}$ (restricted to the unit cube with all values in $[0,1]$), and we denote the weight from input $i$ to neuron $j$ in the first layer by $W_{ij}^1$, with these weights making up the $784\times 256$ matrix $W^1$. The weight from hidden layer neuron $j$ to softmax neuron $k$ is denoted $W_{jk}^2$, and these weights make up the $256\times 10$ matrix $W^2$. The biases are denoted by $b_j^1$ for the $j$-th neuron in the first layer and $b_k^2$ for the $k$-th neuron in the classification layer. We denote the $j$-th neuron in the hidden layer by $f_j$ and the $k$-th neuron function in the softmax classification layer by $g_k$. The gradient is a simple application of the chain rule, with \[ \nabla g_k(x_1,\ldots,x_{784}) = \left( \frac{\partial g_k}{\partial x_1},\ldots,\frac{\partial g_k}{\partial x_{784}}\right) \] where \[ \frac{\partial g_k}{\partial x_i} = \sum_{j=1}^{256} \frac{\partial g_k}{\partial f_j} \frac{\partial f_j}{\partial x_i}. \] In Figure~\ref{MINSTpca} we show the projection of the MNIST data onto the first two PCs, colored by class. Figure~\ref{Iteration_9} shows this data in the same PCA projection after 9 iterations of Euler's method approximation to the gradient flow $x'=\nabla g_k(x)$, stepping by $x \mapsto x + 0.05\nabla g_k(x)$. Figure~\ref{Iteration_99} shows the data after 99 iterations, projected onto the first two PCs computed from that data. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{Iteration_0.png}} \caption{The MNIST data shown with points colored by class membership. This data is shown projected onto its first two PCs.} \label{MINSTpca} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{Iteration_9.png}} \caption{The MNIST data after 9 iterations of the gradient flow Euler's approximation. This data is shown projected onto the first two PCs computed from the original data.} \label{Iteration_9} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{Iteration_99.png}} \caption{The MNIST data after 99 iterations of the gradient flow Euler's approximation. This data is shown projected onto its first two PCs.} \label{Iteration_99} \end{center} \vskip -0.2in \end{figure} In Figure~\ref{digitIterations}, we show the digit images for 10 iterations each of a 4 (top two rows) and a 0 (bottom two rows). In each iteration the probability is increasing. However, observe that the visual look of the image is becoming less like a digit. This same phenomenon is consistent across all digits observed. Clearly, this is not desirable behavior, as the non-digit-like images are being classified with very high probability into a single class. It seems there is a strong attracting stable manifold for the probability maximum of each class; that is, there is a submanifold in the data space that is far from the actual data, and under the gradient flow orbits are attracted to this manifold and approach the maximum along this manifold. The behavior is depicted notionally in Figure~\ref{notionalAttractingDirection} using the linear differential equation $x'=-4x+6y$, $y'=x-2y$.
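The chain-rule gradient above can be evaluated directly from the weight matrices. The Python sketch below is a minimal illustration, not our experimental code: the weights are random stand-ins for the trained $W^1$, $W^2$, $b^1$, $b^2$, and clipping the orbit to the unit cube is an assumption we add to keep iterates in the input space. It implements the closed form $\nabla g_k = g_k\,\widetilde{W}^1\,(W^2_{\cdot k} - W^2 g)$ derived below, and runs the Euler iteration $x \mapsto x + 0.05\,\nabla g_k(x)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the trained parameters (the real ones come from training).
W1, b1 = rng.normal(0.0, 0.05, (784, 256)), np.zeros(256)
W2, b2 = rng.normal(0.0, 0.05, (256, 10)), np.zeros(10)

def forward(x):
    pre = x @ W1 + b1                  # first-layer pre-activations
    f = np.maximum(pre, 0.0)           # ReLU outputs f_j
    z = f @ W2 + b2
    g = np.exp(z - z.max())
    return pre, f, g / g.sum()         # softmax probabilities g_k

def grad_g(x, k):
    # Gradient of the class-k probability with respect to the input x.
    pre, f, g = forward(x)
    delta = (pre > 0).astype(float)    # delta_j(x): ReLU active/inactive
    dg_df = g[k] * (W2[:, k] - W2 @ g) # dg_k/df_j for all j at once
    return (W1 * delta) @ dg_df        # sum_j delta_j(x) W^1_{ij} dg_k/df_j

x = rng.uniform(0.0, 1.0, 784)         # a point of the input space
k = int(np.argmax(forward(x)[2]))      # most probable class at x
for _ in range(99):                    # Euler steps of x' = grad g_k(x)
    x = np.clip(x + 0.05 * grad_g(x, k), 0.0, 1.0)
\end{verbatim}
Running the same loop from actual digit images, with the trained weights in place of the random stand-ins, produces the orbits shown in the figures above.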
While our network architecture is known to be sub-optimal, observed phenomena like this could be used to improve optimization strategies and cost functions. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{digitIterations.png}} \caption{The first 10 iterations of a handwritten 4 digit are shown in the top 2 rows, and the first 10 iterations of a handwritten 0 digit are shown in the bottom 2 rows. Note that with each successive iteration the probability increases but the image looks less like a number.} \label{digitIterations} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{notionalAttractingDirection.png}} \caption{A vector field and phase plane for a 2D linear differential equation that has a strong attracting direction. This is a notional depiction of what seems to be happening in the probability gradient flow for the NN on the MNIST data, where the circles in the figure notionally represent a distribution of data that is off the strong attracting manifold, and the 1d strong attracting direction notionally represents a higher dimensional strong stable manifold.} \label{notionalAttractingDirection} \end{center} \vskip -0.2in \end{figure} The function $g_k(f_1,\ldots,f_{256})$ is defined by \[ g_k(f_1,\ldots,f_{256}) = \frac{e^{\sum_j W_{jk}^2 f_{j}+b_k^2}}{ \sum_{k'}e^{\sum_j W_{jk'}^2 f_{j}+b_{k'}^2}} \] which leads to \[ \frac{\partial g_k}{\partial f_j} = g_k \left[ W_{jk}^2 - \sum_{k'} W_{jk'}^2 g_{k'} \right] \] where the sum runs over all classes $k'$. Define $\delta_j(x)$ to be the function \[ \delta_j(x)= \begin{cases} 1 & \text{if }\sum_i W_{ij}^1 x_i + b_j^1 > 0\\ \textrm{undefined} & \text{if }\sum_i W_{ij}^1 x_i + b_j^1 = 0\\ 0 & \text{otherwise} \end{cases} \] (For numerical stability we use $\delta_j(x)=0$ when $\sum_i W_{ij}^1 x_i + b_j^1 = 0$ in our simulations.) The partial derivatives of $f_j$ are given by \[ \nabla f_j = \left( \frac{\partial f_j}{\partial x_1},\ldots,\frac{\partial f_j}{\partial x_{784}}\right) = \delta_j(x) (W_{1j}^1,\ldots,W_{784\,j}^1) \] So \[ \frac{\partial g_k}{\partial x_i} = \sum_{j=1}^{256} \delta_j(x)\, g_k \left[ W_{jk}^2 - \sum_{k'} W_{jk'}^2 g_{k'} \right]W_{ij}^1. \] More concisely, \[ \nabla g_k(x) = g_k(x) \widetilde{W}^1(x) \left[\Phi_k(x) - \Gamma(x) \right] \] where $\Phi_k$ is the $256\times 1$ column vector whose $j$-th value is $W_{jk}^2$; $\Gamma$ is the $256\times 1$ column vector $W^2g(x)$; and $\widetilde{W}^1(x)$ is the $784\times 256$ matrix \[ \widetilde{W}^1_{ij}(x)=\delta_j(x)W_{ij}^1. \] We now consider the zeros and/or singularities of $\nabla g_k(x)$. It is clear that $g_k(x)>0$ for all $x$. The vector $\Phi_k$ is nonzero (except in the degenerate case where $W_{jk}^2=0$ for all $j=1,\ldots,256$). The vector $\Gamma$ is also nonzero. Thus, the attracting sets (possibly simplexes) of the gradient flow are the points where $\widetilde{W}^1_{ij}(x)$ is undefined or zero. \section{Conclusions} In this paper we presented how the homology of a separating manifold can be computed from the weights in a simple neural network. We suggest that this may be helpful in understanding the complexity of the separating surfaces/manifolds and the complexity of learned information.
Our example used ReLU activation functions so the results were combinatorial and readily understood with combinatorial homology, but other activation functions would lead to smooth gradients and differential topology tools that mirror the combinatorial ones (cohomology, Jacobians at equilibrium points, etc.). We then showed how dynamical systems and index theory can be applied to give insight into what is learned by a neural network. We showed an expository example in 2 dimensions where the NN and theorem can be understood and computed visually. We then gave an example in 784 dimensions where numerical approximations suggest that there are 10 attracting invariant sets. (Notably there are 10 classes, so the gradient at each point is the gradient of the class probability for the most probable class.) We showed that the analytic formula for the gradient can be computed explicitly for this example. We also showed that this dynamical systems approach uncovers an apparent weakness in the neural network; although the accuracy is over 97\% on validation data, the attracting sets where each class probability is maximized correspond to images that look nothing like the original data of handwritten digits. This work suggests a number of lines for future research. Can topology be used to better understand the probability functions? For example, are the basins of attraction for the attractors simply connected, in our case for MNIST or under certain circumstances? The examples shown in Figures~\ref{Index} and~\ref{PHThmDL} show the obvious fact that basins of attraction are neither always connected nor always comprised of simply connected components. The differential equations for the gradient flow are exact and the probability is a Lyapunov function, constraining the type of dynamics (no periodic orbits or chaos, for example). The most general question seems to be what topological information can generally be computed for neural networks, and what are the connections between this topology and the weights and architecture of the network. Computing topology is interesting, but is not only an end in itself. It is likely that some topological / dynamical system / geometric information is desirable, and thus can be used either to understand current optimization methods or to explain heuristically observed improvements. For example, rapidly fluctuating topology during optimization may be an indicator of instability and a need for regularization. In general, perhaps simple topology is preferable for robustness. Our observation that data flowing under the gradient system is attracted to regions far from the original data, as shown in Figure~\ref{digitIterations}, may inspire new methods. It appears that the attracting set has a strongly attracting direction/manifold that is far from the data; data moves quickly to this manifold and then follows this manifold slowly to the attracting set. It may be preferable to have this attracting manifold approximate the original data, and it may be that the strong manifold is connected to the weights, for example a submanifold where some of the neuron outputs are zero. Perhaps these observations are useful for deriving improved optimization methods or for better understanding currently observed useful methods.
\chapter{Market Microstructure Knowledge Needed for Controlling an Intra-Day Trading Process} \begin{abstract} A \rred{great deal} of academic and theoretical work \rred{has} been dedicated to optimal liquidation of large orders these last twenty years. The optimal split of an order through time (`optimal trade scheduling') and space (`smart order routing') is of high interest \rred{to} practitioners because of the increasing complexity of the market microstructure \rred{following the recent evolution} of regulations and liquidity worldwide. This chapter \rred{translates into} quantitative terms these regulatory issues and, more broadly, current market design. It \rred{relates} the recent advances in optimal trading, order-book simulation and optimal \rred{liquidity to} the reality of trading in an emerging global network of liquidity. \end{abstract} \tableofcontents \clearpage \section{Market Microstructure Modeling and Payoff Understanding are Key Elements of Quantitative Trading}\label{lehalle_sec1} As \rred{is well} known, optimal (or quantitative) trading is about finding the proper balance between providing liquidity \rred{in order} to minimize the impact of the trades, and consuming liquidity \rred{in order} to minimize the market risk exposure, while taking profit \rred{through potentially} instantaneous trading signals, supposed to be triggered by liquidity inefficiencies. The mathematical framework required to solve this kind of optimization \rred{problem} needs: \begin{itemize} \item a model of the consequences of the different ways \rred{of interacting} with liquidity (\rred{such as the} market impact model \citep{citeulike:4325901, Bouchaud06, citeulike:5177397}); \item a proxy for the `market risk' (the most natural of them being the high frequency volatility \citep{AITJAC07,scales05,citeulike:8317402}); \item and a model \rred{for quantifying} the likelihood of the liquidity state of the market \citep{citeulike:7344893,citeulike:8318790}. \end{itemize} A utility function \rred{then allows these different effects to be consolidated} with respect to the goal of the trader: \begin{itemize} \item minimizing the impact of large trades under price, duration and volume constraints (typical for brokerage trading \citep{OPTEXECAC00}); \item providing as \rred{much} liquidity as possible under inventory constraints (typical for market-makers \cite{avst08} or \rred{\cite{GLFT}}); \item or following a belief \rred{about} the trajectory of the market (typical of arbitrageurs \citep{citeulike:5094012}). \end{itemize} Once these key elements \rred{have been} defined, rigorous mathematical optimization methods can be used to derive \rred{the} optimal behavior \citep{citeulike:5797837,citeulike:8531791}. Since the optimality of the result \rred{is strongly dependent} on the \rred{phenomenon being modeled}, \rred{some} understanding of the market microstructure is a prerequisite \rred{for ensuring} the applicability of a given theoretical framework. The \emph{market microstructure} is the ecosystem \rred{in which buying and selling interests meet}, giving birth to trades. Seen from outside the microstructure, the prices of the traded shares are often uniformly sampled to build time series that are modeled via martingales \citep{citeulike:1681881} or studied using econometrics. Seen from the inside of electronic markets, buy and sell open interests (i.e.
passive limit orders) form \emph{limit order books}, where an impatient trader can find two different prices: the highest of the resting buy orders if he needs to sell, and the lowest of the selling ones if he needs to buy (\rred{see} Figure \ref{fig:LOB}). The \rred{buying and selling} prices are thus not equal. Moreover, the price will monotonically increase (for impatient buy orders) or decrease (for impatient sell orders) with the quantity to trade, following a concave function \citep{farmer03a}: the more you trade, the \rred{worse the price you will get}. \begin{figure}[!h] \input LOB.tex \caption{\rred{Idealized} order-book} \label{fig:LOB} \end{figure} The market microstructure is \rred{strongly} conditioned by the \emph{market design}: \begin{itemize} \item the set of explicit rules governing the \emph{price formation process} (PFP); \item the type of auction (fixing or continuous ones); \item the tick size (i.e. the minimum allowed difference between two consecutive prices); \item the interactions between trading platforms (such as `trade-through rules', pegged orders, interactions between visible and hidden orders, etc.); \end{itemize} are typical elements of the market design. The market microstructure of an asset class is a mix of the market design, the trading behaviors of trading agents, the regulatory environment, and the availability of correlated instruments (such as Equity Traded Funds, Futures or any kind of derivative products). Formally, the microstructure of a market can be seen as several sequences of auction mechanisms taking place in parallel, each of them having its \rred{own particular characteristics}. For instance the German market place is mainly composed (as of 2011) of the Deutsche B\"orse regulated market, the Xetra mid-point, the Chi-X visible order book, Chi-delta (the Chi-X hidden mid-point), Turquoise Lit and Dark Pools, BATS pools. The regulated market implements a sequence of fixing auctions and continuous auctions (one open fixing, one continuous session, one mid-auction and one closing auction); others implement only continuous auctions, and Turquoise mid-point implements optional random fixing auctions. To optimize his behavior, a trader has to choose an abstract description of the microstructure of the markets he will interact with: this will be his model of market microstructure. It can be a statistical `macroscopic' one as in the widely-used Almgren--Chriss framework \citep{OPTEXECAC00}, in which the time is sliced \rred{into intervals of 5 or 10 minutes duration} during which the interactions with the market \rred{combine} two statistical phenomena: \begin{itemize} \item the market impact as a function of the `participation rate' of the trader; \item and the volatility as a proxy of the market risk. \end{itemize} It can also be a microscopic description of the order book behavior as in the Alfonsi--Schied proposal \citep{citeulike:6615020} in which the shape of the order book and its resilience to liquidity-consuming orders is modeled. This chapter will thus \rred{describe} some relationships between the market design and the market microstructure using \rred{European and American examples} since they have seen regulatory changes (in 2007 for Europe with the MiFI Directive, and in 2005 for the USA with the \rred{NMS regulation}) as much as behavioral changes (with the financial crisis of 2008).
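As a concrete illustration of the concave price/quantity relationship described above, the short Python sketch below walks a buy order through an idealized ask side of an order book such as the one of Figure \ref{fig:LOB}; the price levels and quantities are invented for illustration only:
\begin{verbatim}
# Toy snapshot of the ask side: (price, quantity) pairs, best ask first.
asks = [(10.00, 200), (10.01, 300), (10.02, 500), (10.05, 1000)]

def buy_vwap(q):
    """Volume-weighted average price paid for an impatient buy of q shares."""
    remaining, cost = q, 0.0
    for price, qty in asks:
        take = min(remaining, qty)
        cost += take * price
        remaining -= take
        if remaining == 0:
            return cost / q
    raise ValueError("order larger than displayed liquidity")

for q in (100, 500, 1500):
    print(q, round(buy_vwap(q), 4))   # the average price worsens with size
\end{verbatim}
The average price paid increases with the quantity demanded, which is exactly the concavity of instantaneous trading costs mentioned above.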
A detailed description of some important elements of the market microstructure will be conducted: \begin{itemize} \item dark pools; \item impact of fragmentation on the price formation process; \item tick size; \item auctions, etc. \end{itemize} Key events like the 6 May 2010 flash crash in the US market and some European market outages will \rred{also receive attention}. To obtain an optimal trading trajectory, a trader needs to define his payoff. Here also, choices have to be made, from a mean-variance \rred{criterion} \citep{OPTEXECAC00} to stochastic impulse control \citep{citeulike:5797837}, going through stochastic algorithms \citep{citeulike:5177512}. This chapter \rred{describes the} statistical viewpoint of the Almgren--Chriss framework, showing how practitioners can use it to take into account a large variety of effects. It ends with comments on an order-flow oriented view of optimal execution, dedicated to smaller time-scale problems, \rred{such as} `\emph{Smart Order Routing}' (SOR). \section{From Market Design to Market Microstructure: Practical Examples}\label{lehalle_sec2} The recent history of the French equity market is archetypal in the sense that it went from a highly concentrated design with only one electronic platform \rred{hosted} in Paris \citep{MUN03} to a fragmented pan-European one with four visible trading pools and more than twelve `dark ones', located in London, in less than four years. \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{prefrag.png} \caption{\rred{Idealized} pre-fragmentation market microstructures} \label{fig:MMS} \end{figure} Seen by economists and from \rred{outside the microstructure}, the equity market is a place where \emph{listed firms} raise capital offering shares \rred{for sale}. Once shares are available \rred{for buying and selling} in the market place, the mechanism of balance between offer and demand (in terms of intentions to buy and intentions to sell) forms a \emph{fair price}. At the microstructure scale, the market place is more sophisticated. Market participants are no \rred{longer} just listed firms and investors making rational investment decisions; microstructure focuses on the process \rred{that allows investors to buy from, or sell to, one another}, putting emphasis on the \emph{Price Formation Process}, also \rred{known as} \emph{Price Discovery}. Moreover, recent regulations promote the use of electronic markets, \rred{since they are compatible} with the recording and traceability levels \rred{such markets provide}, leading to \emph{fragmented markets}. It is worthwhile \rred{differentiating} between two states of the microstructure: \rred{pre- and post-fragmentation}, see Figures~\ref{fig:MMS} and~\ref{fig:MMSfrag}: \begin{itemize} \item \emph{Pre-fragmented microstructure}: before Reg NMS in the US and MiFID in Europe, the microstructure \rred{can} be \rred{pictured as} three distinct layers: \begin{itemize} \item investors, taking buy or sell decisions; \item intermediaries, giving \rred{unbiased advice} (through financial analysts or strategists) and providing access to trading pools they are members of; low frequency market makers (or maker-dealers) can be considered to be part of this layer; \item market operators: hosting the trading platforms, NYSE Euronext, NASDAQ, BATS, Chi-X, belong to this layer. They provide matching engines to other market participants, hosting the \emph{Price Formation Process}.
\end{itemize} These three layers are simply connected: intermediaries concentrate a fraction of the buying and selling flows in a (small) \emph{Over the Counter} (OTC) market, the remaining open interests are \rred{placed} in the order books of the market operators. Facilitators (i.e. low frequency market makers or specialists), localized in the same layer \rred{as} the intermediaries, provide liquidity, thus minimizing the \emph{Market Impact} of \rred{orders from under-coordinated} investors (i.e. when a large buyer comes \rred{to} the market two hours after a large seller, any liquidity provider \rred{that is} able to sell to the first one and buy \rred{from the latter} will prevent a price oscillation; on the one hand he will be `rewarded' for this service through the bid--ask spread he will demand \rred{of} the two investors; on the other hand he will take the risk of a large change \rred{in} the \emph{fair price} \rred{that is} in between the two transactions \citep{citeulike:7360166}, see Figure \ref{fig:syncorders}). \begin{figure}[!h] \input async_MM.tex \caption{\rred{Idealized} kinematics of market impact caused by bad synchronization (A1--A2--A3 sequence) and preservation of the market depth thanks to a market maker agreeing to support market risk (B1--B2--B3 sequence).} \label{fig:syncorders} \end{figure} \item \emph{Post-fragmented markets}: regulations \rred{have} evolved \rred{with the aim of} implementing more competition across each layer of \rred{Figure \ref{fig:MMS}} (especially across market operators) and \rred{increasing} transparency: \begin{itemize} \item in the US, Reg NMS decided to keep the competition inside the layer of market operators: it \rred{requires} an Exchange or an Electronic Communication Network (ECN) to route an order to the platform that offers the best match (this is called the \emph{trade-through rule}). For instance, if a trader sends a buy order at \$10.00 to BATS where the best ask price is \$9.75 and if the best ask for this stock is \$9.50 on NYSE, BATS has to re-route the order to NYSE. This regulation needs two important elements: \begin{enumerate} \item[(1)] a way of pushing to all market operators the best bid and ask of any available market with accuracy (it raises concerns linked to the latency of market data); \item[(2)] that buying at \$9.50 on NYSE is always better for a trader than buying at \$9.75 on BATS, meaning that the other trading costs (especially clearing and settlement costs) are the same. \end{enumerate} The data conveying all the best bid and asks is called the \emph{consolidated pre-trade tape} and its best bid and offer is called the \emph{National Best-Bid and Offer} (NBBO). \item in Europe, mainly because of the diversity of the clearing and settlement channels, MiFID \rred{allows the competition to be extended} to the intermediaries: they are in charge of defining their \emph{Execution Policies} describing how and why they will route and split orders across market operators. The European Commission thus relies on competition between execution policies \rred{as the means of selecting} the best way of splitting orders, taking into account all trading costs. As a consequence, Europe does not have any officially consolidated pre-trade tape.
\end{itemize} \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{postfrag.png} \caption{\rred{Idealized} post-fragmentation market microstructure.} \label{fig:MMSfrag} \end{figure} Despite these differences, European and US electronic markets have a lot in common: their microstructures evolved similarly to a state where latency is crucial and \emph{High Frequency Market-Makers} (also called \emph{High Frequency Traders}) became the main liquidity providers of the market. Figure \ref{fig:MMSfrag} gives \rred{an idealized} view of this fragmented microstructure: \begin{itemize} \item A specific class of investors: the \emph{High Frequency Traders} (HFT) \rred{are} an essential part of the market; \rred{by} investing more than other market participants in technology, thus reducing their latency to markets, they \rred{have} succeeded in: \begin{itemize} \item implementing market-making-like behaviors at high frequency; \item providing liquidity at the bid and ask prices when the market has \rred{a low probability} of moving (thanks to statistical models); \item \rred{being} able to cancel resting orders very \rred{quickly} \rred{in order} to minimize the market risk exposure of their inventory; \end{itemize} they are said to \rred{feature in} 70\% of the transactions in US Equity markets, 40\% in Europe and 30\% in Japan in 2010. Their interactions with the market have been intensively studied by \cite{citeulike:8423311}. \item Because they are the main customers of market operators, HFTs \rred{were offered} new features \rred{making it easier to conduct their business}: low latency access to matching engines (better quality of service and \emph{co-hosting}; i.e. the ability to locate their computers physically close to the ones of the matching engines), and even \emph{flash orders} (\rred{knowing} before other market participants that an order is being inserted in the order-book). \item Market participants \rred{that were} not proprietary high-frequency traders \rred{also} \rred{sought} specific features of the order books, mainly to hide their interests \rred{from} high frequency traders: \emph{Dark Pools}, implementing anonymous auctions (i.e. partially observable), are part of this offer. \item \rred{The} number of market operators as firms does not increase that much when a market goes from non-fragmented to fragmented, because of high technological costs linked to a fragmented microstructure. On the other hand, each operator offers more products (order books) to clients when fragmentation increases. BATS and Chi-X Europe merged, and the London Stock Exchange--Milan Stock Market--Turquoise trading \rred{also formed a single group}. Looking at the European order-books offered by NYSE-Euronext \rred{in 2011 only}, we have: \begin{itemize} \item several visible (i.e. \emph{Lit}) \emph{order books}: one for Paris--Amsterdam--Brussels stocks, another (NYSE--Arca Europe) for other European names; \item \emph{Mid-points}: an order book with only one queue \emph{pegged} at the mid-price of a reference market (SmartPool); \item \emph{Dark pools}: an anonymous order book (i.e. market participants can send orders \rred{as} in a Lit book, but no-one can read the state of the book); \item \emph{Fixing auctions}, opening and closing the continuous auctions on visible books.
\end{itemize} \end{itemize} The result is an interconnected network of liquidity in which each market participant is no \rred{longer} located in one layer only: HFTs are \rred{simultaneously} investors and also very close to market operators, intermediaries are offering \emph{Smart Order Routers} to split orders optimally across all available trading pools \rred{whilst} taking into account the specific liquidity needs of each investor. \rred{Thus, market operators are close to technology providers}. \end{itemize} \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{FC_quotes_SP} \caption{The `Flash Crash': 6 May 2010, when the US market's rapid down-and-up move of almost 10\% was due only to market microstructure effects.} \label{fig:6may} \end{figure} The regulatory task is thus more sophisticated in a fragmented market than in a concentrated one: \begin{itemize} \item the \emph{Flash Crash} of 6 May 2010 \rred{in} US markets raised concerns about the stability of such a microstructure (see Figure~\ref{fig:6may}); \item the cost of surveillance of trading flows across a complex network is higher than in a concentrated one. \end{itemize} Moreover, elements of the market design play \rred{many} different roles: the \emph{tick size}, for instance, is not only the minimum difference between two consecutive prices, \rred{i.e.,} a constraint on the bid-ask spread; it is \rred{also} a key in the competition between market operators. In June 2009, European market operators tried to gain market share \rred{by} reducing the tick size on their order books. Each time one of them offered a lower tick than the others, it gained around 10\% of market share (see Figure \ref{fig:tickw}). After \rred{a} few weeks of competition on the tick, they limited this kind of infinitesimal \rred{decimation} of the tick thanks to a gentleman's agreement obtained under the umbrella of the FESE (Federation of European Security Exchanges): such a \rred{decimation had been expensive} in CPU and memory demand for their matching engines. \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{tick_war_MS} \caption{The `Tick war' in June 2009, in Europe. The increase of market share of Turquoise (a European Multilateral Trading Facility; MTF) on five Stocks listed on the London Stock Exchange following a decrease of the tick size. When other MTFs lowered the tick size, the market share \rred{returned} to the previous level.} \label{fig:tickw} \end{figure} \paragraph{\rred{An idealized} view of the `Flash Crash'.} The flash crash \rred{was accurately} described in \cite{citeulike:8676220}. The sequence of events that \rred{led} to a negative jump \rred{in} price and a huge increase \rred{in} traded volumes in a few minutes, followed by a return to normal in less than 20 minutes, can be \rred{pictured as follows}: \begin{enumerate} \item[(1)] A final investor decided to sell a large amount $v^*$ of shares of the E-Mini future contracts, \rred{asking} a broker to take care of this sell by electronic means on his behalf. \item[(2)] The broker decided to use a \emph{PVOL} (i.e. Percentage of Volume) algo, with the instruction to follow almost uniformly 9\% of the market volume without regard \rred{to} price or time. This participation rate is not uncommon (it is usual to see PVOL algos with the instruction to follow 20\% of the market volume).
\item[(3)] The trading algorithm \rred{could} be seen as a trade scheduler splitting the order into slices of \rred{one-minute intervals}, expecting to see a traded volume $V_t$ during the $t$th slice (meaning that $\mathbb{E}(V_t)\simeq \overline{V}/500$, where $\overline{V}$ is the expected daily traded volume). \item[(4)] For its first step, the algo began to sell on the future market around $v_0=\mathbb{E}(V_0) \times 9/(100-9)\simeq \overline{V}/500\times 0.09$ shares. \item[(5)] The main buyers of these shares had been intra-day market makers; say that they bought $(1-q)$ of them. \item[(6)] Because the volatility was quite high on 6 May 2010, the market makers did not feel comfortable with such an imbalanced inventory, \rred{and so} decided to hedge it on the cash market, selling $(1-q)\times v_t$ shares of a properly weighted basket of equities. \item[(7)] Unfortunately the buyers of most of these shares (say $(1-q)$ of them again) were intra-day market makers themselves, who decided \rred{in} their turn to hedge their risk on the future market. \item[(8)] It immediately increased the traded volume on the future market by $(1-q)^2 v_0$ shares. \item[(9)] Assuming that intra-day market makers could play this \emph{hot potato game} (as it \rred{was called} in the SEC--CFTC report) $N$ times in 1 minute, the volume traded on the future market \rred{became} $\sum_{n\leq N} (1-q)^{2n} v_0$, larger than expected by the brokerage algo. \item[(10)] Back to step (4) at $t+1$, the PVOL algo is now late by $\sum_{n\leq N} (1-q)^{2n} v_0 \times 9/(100-9)$, and has to sell $\overline{V}/500 \times 9/(100-9)$ again; i.e. selling $$v_{t+1}\simeq\left( N\times v_t + \frac{\overline{V}}{500} \right)\times 0.09.$$ \end{enumerate} \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{FC_simulation} \caption{Traded volume of the future market according to the simple \rred{idealized} model with $\overline{V}=100$, $T=10$ and $N=2$.} \label{fig:modFC} \end{figure} Figure \ref{fig:modFC} shows how explosive \rred{the \emph{hot potato game} between intra-day market makers can be}, even with \rred{not that high a} frequency trading rate (here $N=1.1$). Most of this trading flow \rred{was a selling flow, pushing most US prices} to very low levels. For instance Procter and Gamble's quotes went from \$60 to a low of \$39.37 in approximately 3.5 minutes. In reality other effects contributed to the flash crash: \begin{itemize} \item only \rred{a few} trading pools implemented circuit breakers that \rred{ought to have frozen} the matching engines in case of a sudden liquidity event; \item most market participants only looked at the \emph{consolidated tape} for market data, preventing them \rred{from noticing} that \rred{trading was frozen on some pools}; \item in the US, most retail flow is internalized by market makers. \rred{At} one point in the day these intermediaries decided to hedge their positions on the market \rred{in} their turn, \rred{further affecting the prices}. \end{itemize} This glitch in the electronic structure of markets is not an isolated case, even if it \rred{was} the largest one. The \rred{combination} of a failure in each layer of the market (an issuer of large institutional trades, a broker, HF market-makers, market operators) with a highly uncertain market context is \rred{surely} a crucial element of this crash. It has moreover shown that most orders do \rred{indeed} reach the order books only through electronic means.
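The recursion of steps (4)--(10) is straightforward to simulate. The minimal Python sketch below uses the illustrative parameters of Figure~\ref{fig:modFC}; the participation rate and $N$ are assumptions of the idealized model, not calibrated values:
\begin{verbatim}
# Idealized hot-potato volume recursion, steps (4)-(10) above.
Vbar = 100.0   # expected daily traded volume (illustrative units)
T = 10         # number of one-minute slices simulated
N = 2          # hot-potato passes per slice, as in the figure
rate = 0.09    # 9% participation, i.e. 9/(100-9) rounded down

v = Vbar / 500 * rate              # step (4): first slice of the PVOL algo
volumes = [v]
for t in range(1, T):
    # step (10): the algo chases the volume inflated by market makers
    v = (N * v + Vbar / 500) * rate
    volumes.append(v)
print([round(u, 4) for u in volumes])
\end{verbatim}
The point of the sketch is the feedback loop: the algorithm's own flow, recycled $N$ times per slice by intermediaries, re-enters the market volume that the algorithm is instructed to follow.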
European markets did not suffer from such \emph{flash crashes}, but they have not seen many months in 2011 without an outage of a matching engine. \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{mrks_outage_prices} \caption{Examples of outages in European equity markets on 25 February 2011. The price (top) and the volumes (bottom) when the primary market \rred{opened only} after 12:15 (London time). The price did not move \rred{much}.} \label{fig:Eoutages} \end{figure} \paragraph{European outages.} Outages are `simply' bugs in matching engines. In such cases, the matching engines of one or more trading facilities can be frozen, or just stop \rred{publishing} market data, becoming true \emph{Dark Pools}. From a scientific viewpoint, and because in Europe there is no \emph{consolidated pre-trade tape} (i.e. each member of the trading facilities needs to build by himself his \emph{consolidated view} of the current European best bid and offer), they \rred{can} provide examples of the behavior of market participants when they do not all share the same level of information \rred{about} the state of the offer and demand. For instance: \begin{itemize} \item when no information is available on primary markets but trading remains open: two price formation processes can take place in parallel, one for market participants having access to other pools, and the other for participants who just look at the primary market; \item (Figure \ref{fig:Eoutages}) when the primary market does not start trading at the very beginning of the day: the price does not really move on alternative markets; no `real' price formation process takes place during such European outages. \end{itemize} The flash crash in the US and the European outages emphasize the \emph{role of information in the price formation process}. When market participants are confident that they have access to a reliable source of information (during the flash crash or during some European outages), they continue to \emph{mimic} a price formation process whose output can be far from efficient. \rred{By contrast}, if they do not believe in the information they have, they just freeze their price, \rred{observe} behavior and trade at the \emph{last confident price}, \rred{while} waiting for reliable updates. \section{Forward and Backward Components of the Price Formation Process}\label{lehalle_sec3} The literature on market microstructure can be split into two generic subsets: \begin{itemize} \item papers with a \emph{Price Discovery} viewpoint, in which the market participants are injecting into the order book their views on a fair price. In these papers (see for instance \cite{RePEc:ide:wpaper:825,RePEc:eee:jfinec:v:9:y:1981:i:1:p:47-73,citeulike:7604491}), the \emph{fair price} is assumed to exist for fundamental reasons (at least in the mind of investors) and the order books are implementing a Brownian-bridge-like trajectory targeting this evolving fair price. This is a \emph{backward} view of the price dynamics: the investors are updating assumptions on the future value of tradeable instruments, and send orders in the electronic order books according to the distance between the current state of the offer and demand and this value, driving the quoted price to \rred{some average of what they expect}. Figure \ref{fig:pdiscovery} shows a price discovery pattern: the price of the stock changes for fundamental reasons, and the order book dynamics react accordingly, generating more volume, more volatility, and a price jump.
\end{itemize} \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{CAGR_PA} \caption{A typical \emph{Price Discovery} exercise: the 30th of November, 2011 on the Cr\'edit Agricole share price (French market). The two stable states of the price are materialized using two dark dotted lines, one before and the other after the announcement by major European central banks of a coordinated action to provide liquidity.} \label{fig:pdiscovery} \end{figure} \begin{itemize} \item Other papers rely on a \emph{Price Formation Process} viewpoint. For their authors (most of them econophysicists, see for instance \cite{farmer03a, citeulike:1618840} or \cite{citeulike:5823204} for a review of agent based models of order books) the order books are \emph{building the price} in a forward way. The market participants take decisions with respect to the current orders in the books, making assumptions about the future value of their inventory; it is a \emph{forward} process. \end{itemize} Following \cite{citeulike:7621540}, \rred{one can try to crudely model} these two dynamics simultaneously. In a framework with an infinity of agents (using a Mean Field Game approach, see \cite{citeulike:3614137} for more details), the order book at the bid (respectively at the ask) is a density $m_B(t,p)$ (resp. $m_A(t,p)$) of agents agreeing at time $t$ to buy (resp. sell) at price $p$. In such a continuous framework, there is no bid--ask spread and the \emph{trading price} $p^*(t)$ is such that there is no offer at a price lower than $p^*(t)$ (and no demand at a price greater than $p^*(t)$). \rred{Assuming diffusivity}, the two sides of the order book are \rred{subject to the following simple partial differential equations}, in which matched interests are re-injected at a distance $a$ from the trading price: \begin{eqnarray*} \partial_{t}m_B\left(t,p\right)-\frac{\varepsilon^{2}}{2}\partial_{pp}^{2}m_B(t,p)&=&\lambda(t)\delta_{p=p^*(t)-a}\\ \partial_{t}m_A\left(t,p\right)-\frac{\varepsilon^{2}}{2}\partial_{pp}^{2}m_A(t,p)&=&\lambda(t)\delta_{p=p^*(t)+a}. \end{eqnarray*} \rred{Moreover}, the trading flow at $p^*(t)$ is clearly defined as $$\lambda(t) = -\frac{\varepsilon^{2}}{2}\partial_{p}m_B\left(t,p^*(t)\right) = \frac{\varepsilon^{2}}{2}\partial_{p}m_A\left(t,p^*(t)\right). $$ It is then possible to define a regular order book $m$ joining the \rred{bid and ask sides} by $$m(t,p) = \left\lbrace \begin{array}{r c l} m_B(t,p) &, \quad& \mbox{\rm if} \ p \le p^*(t) \\ -m_A(t,p) & , \quad & \mbox{\rm if} \ p > p^*(t) \\ \end{array} \right.$$ which satisfies a \rred{single} parabolic equation: \begin{equation} \label{eq:macro:3} \partial_{t}m\left(t,p\right)-\frac{\varepsilon^{2}}{2}\partial_{pp}^{2}m(t,p)= -\frac{\varepsilon^{2}}{2}\partial_{p}m\left(t,p^*(t)\right) \left(\delta_{p=p^*(t)-a}-\delta_{p=p^*(t)+a}\right) \end{equation} with an initial \rred{condition} $m(0,\cdot)$ given on the domain $[p_{\min},p_{\max}]$ and, for instance, Neumann conditions at $p_{\min}$ and $p_{\max}$. Such a \emph{forward process} describes the order book dynamics without any impact \rred{from} investors' fundamental views (it is a \emph{price formation process} model). \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{result20min} \caption{Simulation of the dynamics \rred{modeling an order book using} a forward--backward approach: the `fair price' is the continuous grey line and the realized price is the stepwise dark one.} \label{fig:mfg} \end{figure} \rred{Lehalle et al.} then introduce a more complex source to re-inject the orders in \rred{books containing market participants' forward views} on the price.
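Before turning to that richer re-injection rule, the basic forward dynamic (\ref{eq:macro:3}) can be illustrated with a crude explicit finite-difference scheme; the Python sketch below is only a numerical illustration (the grid, $\varepsilon$, the initial book shape, and the re-injection distance $a$ are assumptions, not values from the cited work):
\begin{verbatim}
import numpy as np

P = 200                               # price grid points
p = np.linspace(0.0, 1.0, P)
dp = p[1] - p[0]
dt, eps, a = 1e-5, 1.0, 5             # time step, diffusion, shift (grid pts)

# Initial book: buying interest (m>0) below mid, selling interest (m<0) above.
m = np.where(p < 0.5, 1.0, -1.0) * np.exp(-((p - 0.5) / 0.2) ** 2)

for _ in range(5000):
    lap = np.zeros_like(m)
    lap[1:-1] = (m[2:] - 2.0 * m[1:-1] + m[:-2]) / dp**2
    lap[0], lap[-1] = lap[1], lap[-2]     # Neumann boundary conditions
    m += 0.5 * eps**2 * dt * lap          # diffusion of the two sides
    i = int(np.argmax(m < 0.0))           # first ask-side grid point: p*(t)
    lam = -0.5 * eps**2 * (m[i] - m[i - 1]) / dp   # trading flow lambda(t)
    m[i - 1 - a] += dt * lam / dp         # re-inject bid interest at p* - a
    m[i + a] -= dt * lam / dp             # re-inject ask interest at p* + a
\end{verbatim}
Tracking the zero crossing $p^*(t)$ of $m$ over time gives the simulated trading price of this forward process.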
For instance, a trend follower with a time horizon of $h$ buying at price $p^*(t)$ at time $t$ \rred{aims} to unwind \rred{his} position at a higher (i.e. `trend targeted') price and thus inserts an order in the book accordingly (around $p^*(t)+(p^*(t)-p^*(t-h))$: see the paper for more details). Figure \ref{fig:mfg} shows an example of such a dynamic. This is a way of introducing investor-driven views \rred{into} the model, which are essentially \emph{backward}: a trend follower \rred{agrees to be} part of a transaction because he believes that the price will continue to move in the same direction \rred{over} his investment time scale. This future price of the share is at the root of his decision. This is an injection of a \emph{price discovery} component in the model. \section{From Statistically Optimal Trade Scheduling to Microscopic Optimization of Order Flows}\label{lehalle_sec4} Modeling the price formation dynamics is of interest for \rred{both regulators and policy makers}. It enables them to understand the potential effects of a regulatory or rule change on the efficiency of the whole market (see for instance \cite{FOU06} for an analysis of the introduction of competition among trading venues on the efficiency of the markets). It thus helps in understanding potential links between market design and systemic risk. In terms of risk management inside a firm hosting trading activities, it is more important to understand the trading cost of a position, which can be understood as its \emph{liquidation risk}. From the viewpoint of one trader \rred{versus} the whole market, three key phenomena have to be controlled: \begin{itemize} \item the \emph{market impact} (see \cite{citeulike:3320208,NAT03,citeulike:4368376,citeulike:4325901,Bouchaud06}), \rred{which} is the market move generated by selling or buying a large amount of shares (all else being equal); it comes from the forward component of the price formation process, and can be temporary if other market participants (\rred{who are} part of the backward component of the price discovery dynamics) provide enough liquidity to the market to bring the price back \rred{to} its previous level; \item \emph{adverse selection}, capturing the fact that providing too much (passive) liquidity via limit orders \rred{enables} the trader \rred{to} maintain the price at an artificial level; not a lot of literature is available \rred{about} this effect, which has nevertheless been identified by practitioners \citep{citeulike:6716078}; \item and the uncertainty on the fair value of the stock, which can move the price during the trading process; it is often referred to as the \emph{intra-day market risk}. \end{itemize} \subsection{Replacing market impact by statistical costs} A \rred{framework now widely used for controlling} the overall costs of the liquidation of a portfolio was proposed by Almgren and Chriss in the late 1990s \cite{OPTEXECAC00}. Applied to \rred{the trade of a} single stock, this framework: \begin{itemize} \item cuts the trading period into an arbitrary number $N$ of intervals of a chosen duration $\delta t$, \item models the \emph{fair price} moves thanks to a Gaussian random walk: \begin{equation} \label{eq:9} S_{n+1}=S_n + \sigma_{n+1}\sqrt{\delta t} \; \xi_{n+1} \end{equation} \item models the \emph{temporary market impact} $\eta_n$ inside each time bin using a power law of the trading rate (i.e.
the ratio of the traded shares $v_n$ by the trader over the market traded volume during the same period $V_n$): \begin{equation} \label{eq:2} \eta(v_n)=a\,\psi_n+\kappa\, \sigma_n\sqrt{\delta t} \left(\frac{v_n}{V_n}\right)^\gamma \end{equation} where $a$, $\kappa$ and $\gamma$ are parameters, and $\psi_n$ is the half bid--ask spread; \item \rred{assumes the \emph{permanent market impact} is linear} in the participation rate; \item uses a mean--variance criterion and minimizes it to obtain the optimal sequence of shares to buy (or sell) through time. \end{itemize} It is \rred{important first} to notice that there is an implicit relationship between the time interval $\delta t$ and the temporary market impact function: without changing $\eta$ and simply by choosing a different \rred{time slice}, the cost of trading \rred{can be} changed. It is in fact not possible to choose $(a,\kappa,\gamma)$ and $\delta t$ independently; they have to be chosen \rred{according} to the decay of the market impact on the stock, provided that most of the impact is kept in a time bin of size $\delta t$. Not all the decay functions are compatible with this view (see \cite{citeulike:10363463} for details about available market impact models and their interactions with trading). Up to now the terms in $\sqrt{\delta t}$ have been ignored. \rred{Note also that the parameters $(a,\kappa,\gamma)$ are relevant at this time scale}. \rred{One should not regard this framework as if it were based on structural model assumptions} (i.e. that the market impact really has this shape, or that the price moves really are Brownian), \rred{but rather as a statistical one}. With such a viewpoint, any practitioner can use the database of his past executed orders and perform an econometric study of his `trading costs' on any interval, $\delta t$, of time (see \cite{citeulike:4368376} for an analysis of this kind on the whole duration of the order). If a given time scale succeeds \rred{in capturing, with enough accuracy, the parameters of a trading cost model, then that model can be used to optimize trading. Formally, the result of such a statistical approach would be the same as that of a structural one, as we will show below. But it is possible to go one step further, and to take into account the statistical properties of the variables (and parameters) of interest}. Going back to the simple case of the liquidation of one stock without any permanent market impact, the value (which is a random variable) of a buy of $v^*$ shares in $N$ bins of size $v_1, v_2,\ldots, v_N$ \rred{is} \begin{eqnarray} \nonumber W(v_1, v_2,\ldots, v_N) &=& \sum_{n=1}^N v_n ( S_n + \eta_n(v_n) )\\ \nonumber &=& S_0 v^* + \underbrace{\sum_{n=1}^N \sigma_n \xi_n x_n}_{\mbox{market move}} \\ \label{eq:1} && \qquad +\underbrace{\sum_{n=1}^N a\, \psi_n (x_n-x_{n+1}) + \kappa \,\frac{\sigma_n}{V_n^\gamma} \, (x_n-x_{n+1})^{\gamma+1}}_{\mbox{market impact}},\qquad~ \end{eqnarray} using the \emph{remaining quantity to buy}: \rred{that is,} $x_n=\sum_{k\geq n} v_k$ instead of the instantaneous volumes $v_n$. To obtain \rred{an answer in as closed a form} as possible, $\gamma$ will be taken equal to 1 (i.e. linear market impact).
To add a practitioner-oriented flavor to the upcoming optimization problems, just introduce a set of independent random variables $(A_n)_{1\leq n\leq N}$ to model the \emph{arbitrage opportunities} during time slices. They reflect the expectation that the trader will be able to buy shares at price $S_n-A_n$ during slice $n$ rather than at price $S_n$. Such an effect can be used to inject a statistical arbitrage approach into optimal trading or to take into account the possibility of crossing orders at mid price in Dark Pools or Broker Crossing Networks (meaning that the expected trading costs should be smaller during given time slices). Now the cost of buying $v^*$ shares is \begin{eqnarray} \label{eq:4} W(\mathbf{v}) &=& S_0 v^* + \sum_{n=1}^N \sigma_n \xi_n x_n + \sum_{n=1}^N (a\, \psi_n - A_n) v_n + \kappa \, \frac{\sigma_n}{V_n} \, v_n^2 . \end{eqnarray} \paragraph{Conditioned expectation optimization.} The expectation of this cost, $$\mathbb{E}(W\vert (V_n,\sigma_n,\psi_n)_{1\leq n\leq N}),$$ given the market state, can be written as \begin{equation} \label{eq:3} C_0= % S_0 v^* + \sum_{n=1}^N (a\, \psi_n - \mathbb{E} A_n) v_n + \kappa \, \frac{\sigma_n}{V_n} \, v_n^2 . \end{equation} A simple optimization under constraint (to ensure $\sum_{n=1}^N v_n=v^*$) gives \begin{equation} \label{eq:5} v_n = w_n \left( v^* + \frac{1}{\kappa}\left( \left(\mathbb{E} A_n -\sum_{\ell=1}^N w_\ell \mathbb{E} A_\ell\right) -a \left( \psi_n -\sum_{\ell=1}^N w_\ell\psi_\ell \right) \right) \right), \end{equation} where the $w_n$ are weights proportional to the inverse of the market impact factor: $$w_n=\frac{V_n}{\sigma_n}\left(\sum_{\ell=1}^N \frac{V_\ell}{\sigma_\ell}\right)^{-1}.$$ Simple effects can be deduced from this first idealization. \begin{enumerate} \item[(1)] Without any arbitrage opportunity and without any bid--ask cost (i.e. $\mathbb{E} A_n=0$ for any $n$ and $a=0$), the optimal trading rate is proportional to the inverse of the market impact coefficient: $v_n=w_n\cdot v^*$. Moreover, when the market impact has no intra-day seasonality, $w_n=1/N$, implying that the optimal trading rate is linear. \item[(2)] Following formula (\ref{eq:5}), it can be seen that the greater the expected arbitrage gain (or the lower the spread cost) on a slice compared to the market-impact-weighted expected arbitrage gain (or spread cost) over the full trading interval, the larger the quantity to trade during this slice. More quantitatively: $$\df{v_n}{\mathbb{E} A_n}=\frac{w_n}{2\kappa} (1-w_n) >0,\; \df{v_n}{\psi_n}=-\frac{a}{2\kappa}(1-w_n)w_n<0.$$ This result gives the adequate weight to apply to the expected arbitrage gain in order to translate it into a trading rate that profits from arbitrage opportunities on average. Note that the expected arbitrage gains usually increase with market volatility, so the $w_n$-weighting is of interest to balance this effect optimally.
\end{enumerate} \paragraph{Conditioned mean--variance optimization.} Going back to a mean--variance optimization of the cost of buying progressively $v^*$ shares, the criterion to minimize (using a risk-aversion parameter $\lambda$) becomes \begin{eqnarray} \label{eq:6} C_\lambda &=& \mathbb{E}(W\vert (V_n,\sigma_n,\psi_n)_{1\leq n\leq N}) + \lambda \mathbb{V}(W\vert (V_n,\sigma_n,\psi_n)_{1\leq n\leq N})\nonumber\\ &=& S_0 v^* + \sum_{n=1}^N (a \psi_n-\mathbb{E} A_n) \X(n) + \left(\kappa\frac{\sigma_n}{V_n}+\lambda \mathbb{V} A_n\right) \X(n)^2 + \lambda \sigma_n^2 x_n^2.\qquad~ \end{eqnarray} To minimize $C_\lambda$ when it is only constrained by terminal conditions on $x$ (i.e. $x_1=v^*$ and $x_{N+1}=0$), it is enough to cancel its derivatives with respect to any $x_n$, leading to the recurrence relation \begin{eqnarray} \nonumber \left( \frac{\sigma_n}{V_n}+\frac{\lambda}{\kappa} \mathbb{V} A_n\right) x_{n+1} &=& % \frac{1}{2\kappa}(a ( \psi_{n-1}-\psi_n) - (\mathbb{E} A_{n-1}-\mathbb{E} A_n) )\\ \nonumber &&\quad + \left(\frac{\lambda}{\kappa} \sigma_n^2+ \left(\frac{\sigma_n}{V_n}+\frac{\lambda}{\kappa} \mathbb{V} A_n + \frac{\sigma_{n-1}}{V_{n-1}}+\frac{\lambda}{\kappa} \mathbb{V} A_{n-1}\right) \right) x_n \\ &&\qquad - \left( \frac{\sigma_{n-1}}{V_{n-1}}+\frac{\lambda}{\kappa} \mathbb{V} A_{n-1}\right) x_{n-1} . \end{eqnarray} This shows that the variance of the arbitrage has an effect similar to that of the market impact (through a risk-aversion rescaling), and that the risk-aversion parameter acts as a multiplicative factor on the market impact: within an arbitrage-free and spread-costs-free framework (i.e. $a=0$ and $\mathbb{E} A_n=0$ for all $n$), multiplying the market impact model by any constant $b$ has no effect on the final result as long as $\lambda$ is replaced by $b\lambda$. Figure \ref{fig:optrate1} compares optimal trajectories coming from different criteria and parameter values. \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{x} \caption{Examples of optimal trading trajectories for mean--variance criteria: the classical (Almgren--Chriss) result is the solid line, the dotted line is for high variance of the variable of interest ($\sigma/V$), and the semi-dotted ones are for an arbitrage opportunity ($A_{11+}$ means after the 11th period; $A_{11+}+V\! A$ means adding expected variance to the arbitrage opportunity).} \label{fig:optrate1} \end{figure}
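Since, for a given market state, (\ref{eq:6}) is a convex quadratic function of the interior remaining quantities, it can also be minimized numerically; the following Python sketch does so (a sketch only, with hypothetical, normalized parameter values of our own choosing). Setting $\lambda$ to zero recovers the expectation-driven schedule of the previous paragraph, while increasing it bends the trajectory towards early trading, as in Figure \ref{fig:optrate1}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

N, vstar = 12, 1.0                       # slices; normalized quantity to buy
sigma = np.full(N, 0.02)                 # per-slice volatility
V = np.full(N, 10.0)                     # market volume, in units of v*
psi = np.full(N, 1e-4)                   # half spread
EA, VA = np.zeros(N), np.zeros(N)        # arbitrage: expectations and variances
a, kappa, lam = 1.0, 0.3, 0.05           # toy parameter values

def C(x_free):
    """Criterion (6) up to the constant S_0 v*; the free variables are the
    interior remaining quantities x_2..x_N (x_1 = v*, x_{N+1} = 0)."""
    x = np.concatenate(([vstar], x_free, [0.0]))
    v = x[:-1] - x[1:]                                  # v_n = x_n - x_{n+1}
    return (np.sum((a * psi - EA) * v)
            + np.sum((kappa * sigma / V + lam * VA) * v**2)
            + lam * np.sum(sigma**2 * x[:-1]**2))       # intra-day risk term

x0 = np.linspace(vstar, 0.0, N + 1)[1:-1]   # linear schedule as starting point
res = minimize(C, x0)                       # res.x: risk-averse, front-loaded
\end{verbatim}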
\paragraph{A statistical viewpoint.} The two previous examples show how easy it is to include effects in this sliced mean--variance framework. The implicit assumptions are: \begin{itemize} \item within one time slice, it is possible to capture the market impact (or \emph{trading costs}) using model (\ref{eq:2}); \item the trader knows the traded volumes and market volatility in advance. \end{itemize} In practical terms, the two assumptions come from statistical modeling: \begin{itemize} \item The market impact parameters $a,\kappa$ and $\gamma$ are estimated on a large database of trades using maximum likelihood or MSE methods; the reality is consequently that the market model has the following shape: \begin{equation} \label{eq:7} \eta(v_n)=a\,\psi_n+\kappa\, \sigma_n\sqrt{\delta t} \left(\frac{v_n}{V_n}\right)^\gamma + \varepsilon, \end{equation} where $\varepsilon$ is an i.i.d. noise. \item Moreover, the market volatility and traded volumes are estimated using historical data and market context assumptions (to take into account at least the scheduled news, such as the impact of the expiry of derivative products on the volume of the cash market; see Figure \ref{fig:vcurves} for typical estimates). \end{itemize} \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{histocurves} \caption{Typical intra-day traded volume (top left) and realized volatility (bottom left) profiles (i.e. intra-day seasonalities of traded volumes and market volatility) with their quantiles of level 25\% and 75\%. The $x$-axis is time. The top right chart contains the quantiles of the ratio of interest $\sigma/V$. The bottom right one shows the difference between the expectation of the ratio (solid line) and the ratio of the expectations (dotted line).} \label{fig:vcurves} \end{figure} Taking these statistical modeling steps into account in the classical mean--variance criterion (\ref{eq:6}) changes that equation into its unconditioned version: \begin{eqnarray} \label{eq:8} \nonumber {\tilde C}_\lambda &=& \mathbb{E}(W) + \lambda \mathbb{V}(W)\\ \nonumber &=& % S_0 v^* + \sum_{n=1}^N (a \mathbb{E}\psi_n-\mathbb{E} A_n) \X(n) \\ \nonumber && % \quad + \left(\kappa\,\mathbb{E}\left(\frac{\sigma_n}{V_n}\right)+\lambda (a^2\mathbb{V} \psi_n + \mathbb{V} A_n + \mathbb{V} \varepsilon)\right) \X(n)^2 \\ && % \qquad + \lambda \sigma_n^2 x_n^2 + \lambda \kappa^2\mathbb{V}\left(\frac{\sigma_n}{V_n}\right)\X(n)^4 . \end{eqnarray} The consequences of using this criterion rather than the conditioned one are clear: \begin{itemize} \item the simple plug-in of empirical averages of volumes and volatility in criterion (\ref{eq:6}), instead of the needed expectation of the overall trading costs, leads us to use $(\mathbb{E}\sigma_n)/(\mathbb{E} V_n)$ instead of $\mathbb{E}(\sigma_n/V_n)$. Figure \ref{fig:vcurves} shows typical differences between the two quantities. \item If the uncertainty on the market impact is huge (i.e. the $\mathbb{V} \varepsilon$ term dominates all others), then the optimal trading strategy is to trade linearly, which is also the solution of a purely expectation-driven minimization with no specific market behavior linked with time. \end{itemize} Within this new statistical trading framework, the inaccuracy of the models and the variability of the market context are taken into account: the obtained optimal trajectories no longer follow sophisticated paths if the models are not realistic enough. Moreover, it is not difficult to solve the optimization program associated with this new criterion; the new recurrence equation is a polynomial of degree 3. Figure \ref{fig:optrate1} gives illustrations of the results obtained. Many other effects can be introduced in the framework, such as auto-correlations of the volume--volatility pair. This statistical framework does not embed recent and worthwhile proposals such as the decay of market impact \citep{citeulike:10363463} or a set of optimal stopping times in place of a uniform, a priori chosen time sampling \citep{citeulike:5797837}. It is nevertheless simple enough that most practitioners can use it to include their views of market conditions and of the efficiency of their interactions with the market at a given time scale; it can be compared to the Markowitz approach for quantitative portfolio allocation \citep{citeulike:571949}.
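Coming back to the first consequence listed above, the size of the gap between the plug-in ratio and the expectation the criterion actually needs is easy to underestimate; a short Python check with hypothetical (independent, lognormal) draws of $\sigma$ and $V$ makes it concrete:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sigma = rng.lognormal(mean=-3.9, sigma=0.3, size=100_000)  # volatility draws
V = rng.lognormal(mean=13.8, sigma=0.6, size=100_000)      # volume draws

plug_in = sigma.mean() / V.mean()        # (E sigma)/(E V): the naive plug-in
needed = (sigma / V).mean()              # E(sigma/V): what criterion (8) uses
print(needed / plug_in)                  # > 1 here, since 1/V is convex (Jensen)
\end{verbatim}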
\subsection{An order-flow oriented view of optimal execution} Though price dynamics in quantitative finance are often modeled using diffusive processes, just looking at the prices of transactions in a limit order book convinces one that a more discrete and event-driven class of models ought to be used; at a time scale of several minutes or more, the diffusivity assumptions used in equation (\ref{eq:9}) to model the price are not that bad, but even at this scale, the `\emph{bid--ask bounce}' has to be taken into account in order to estimate the intra-day volatility with enough accuracy. The effect on volatility estimates of the rounding of a diffusion process was first studied in \cite{VQ96}; since then other effects have been taken into account, such as an additive \emph{microstructure noise} \citep{scales05}, sampling \citep{AITJAC07} or \emph{liquidity thresholding} -- also known as uncertainty zones -- \citep{citeulike:8317402}. Thanks to all these models, it is now possible to use high frequency data to estimate the volatility of an underlying diffusive process generating the prices without being polluted by the signature plot effect (i.e. an explosion of the usual empirical estimates of volatility when high frequency data are used). Similarly, advances have been made in obtaining accurate estimates of the correlations between two underlying prices, thereby avoiding the drawback of the \emph{Epps effect} (i.e. a collapse of the usual estimates of correlations at small scales \citep{YOSHI05}). To optimize the interactions of trading strategies with the order books, it is necessary to zoom in as much as possible and to model most of the known effects taking place at this time scale (see \cite{Bouchaud06,citeulike:1618840}). Point processes have been successfully used for this purpose, in particular because they can embed short-term memory modeling \citep{citeulike:7012175,citeulike:7012187}. Hawkes-like processes have most of these interesting properties and exhibit diffusive behavior when the time scale is zoomed out \citep{citeulike:7344893}. To model the prices of transactions at the bid $N^b_t$ and at the ask $N^a_t$, two coupled Hawkes processes can be used. Their intensities $\Lambda^b_t$ and $\Lambda^a_t$ are stochastic and are governed by $$\Lambda^{a/b}_t = \mu^{a/b} + c\, \int_{\tau<t} e^{ -k (t-\tau)} \, dN^{b/a}_\tau;$$ here $\mu^b$ and $\mu^a$ are constants. In such a model, the more transactions at the bid (resp. ask), the more likely there will be one at the opposite price in the near future. The next qualitative step is to link the prices with the traded volumes. It has recently been shown that, under some assumptions that are almost always true for very liquid stocks in a calm market context (a constant bid--ask spread and no dramatic change in the dynamics of liquidity-providing orders), there is a correspondence between a two-dimensional point process of the quantities available at the first limits and the price of the corresponding stock \citep{citeulike:8318790,citeulike:8531765}.
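A minimal Python simulation sketch of the pair of coupled Hawkes processes above can be obtained with Ogata's thinning algorithm (the parameter values below are hypothetical choices of ours; stability requires the branching ratio $c/k$ to stay below one):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def coupled_hawkes(mu=0.5, c=0.8, k=1.5, T=200.0):
    """Ogata thinning for the coupled intensities above: each transaction
    at the bid (resp. ask) excites the ask (resp. bid) intensity.
    Branching ratio c/k = 0.53 < 1, so the process is stable."""
    events = {"a": [], "b": []}

    def lam(side, t):
        other = "b" if side == "a" else "a"
        return mu + c * sum(np.exp(-k * (t - s)) for s in events[other])

    t = 0.0
    while True:
        lam_bar = lam("a", t) + lam("b", t)  # valid bound: intensities decay
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return events
        la = lam("a", t)
        u = rng.uniform(0.0, lam_bar)
        if u < la:
            events["a"].append(t)            # transaction at the ask
        elif u < la + lam("b", t):
            events["b"].append(t)            # transaction at the bid
        # otherwise the candidate point is thinned away

ev = coupled_hawkes()
\end{verbatim}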
To understand the mechanism underlying this queue-to-price correspondence, just notice that the set of stopping times defined by the instants when the quantity at the first ask crosses zero (i.e. ${\cal T}^a=\{\tau:Q^a_\tau=0\}$) exactly maps the increases of the price (if the bid--ask spread is constant). Similarly, the set ${\cal T}^b=\{\tau:Q^b_\tau=0\}$ maps the decreases of the price. Despite their value for modeling the dynamics of the order book at small time scales, these proposals have not yet been directly used in an optimal trading framework. The most sophisticated approaches to optimal trading that include order book dynamics are based on continuous and martingale assumptions \citep{citeulike:6572400} or on Poisson-like point processes \citep{GLFT}. On the other hand, focusing on the optimality of very short-term trading strategies (such as \emph{Smart Order Routing}) lets us build optimal tactics under assumptions in accordance with recent high-level views on order book dynamics. Smart Order Routers (SOR) are software devices dedicated to splitting an order across all available trading venues in order to obtain the desired quantity as fast as possible, implementing a so-called \emph{liquidity capturing scheme}. With the rise of fragmented electronic equity markets (see Figure \ref{fig:frag}), it is impossible to access more than 60\% of European liquidity without an SOR. \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{fei_dax_cac_ftse} \caption{Fragmentation of European markets: the market share of the primary market has decreased continuously since the entry into force of the MiFI Directive; the use of an SOR is mandatory for accessing the liquidity of the whole market. This graph monitors the \emph{normalized entropy} of the fragmentation: if the market shares (summing to $1$) over $K$ exchanges are $q_1,\ldots,q_K$, the indicator is $- \sum_k q_k\ln q_k$ renormalized so that its maximum is 100\% (i.e. divided by $\ln(K)$).} \label{fig:frag} \end{figure} Optimal policies for SOR have been proposed in \cite{citeulike:5177512} and \cite{citeulike:7500879}. The latter used censored statistics to estimate the available liquidity and to build an optimization framework on top of it; the former built a stochastic algorithm and proved that it asymptotically converges to a state that minimizes a given criterion. To get a feel for the methodology associated with the stochastic algorithm viewpoint, just consider the following optimization problem. \paragraph{Optimal liquidity seeking: the expected fast end criterion.} To define the criterion to be optimized, first assume that $K$ visible order books are available (for instance BATS, Chi-X, Euronext, Turquoise for a European stock). At time $t$, a buy order of size $V_t$ has to be split over the order books according to a key $(r^1,\ldots,r^K)$, given that on the $k$th order book: \begin{itemize} \item the resting quantity `cheaper' than a given price $S$ is $I^k_t$ (i.e. the quantity posted at the bid at a price higher than or equal to $S$, or at the ask at a lower price); \item the incoming flow of sell orders consuming the resting quantity at prices cheaper than $S$ follows a Poisson process $N_t^k$ of intensity $\lambda^k$, i.e.
$$\mathbb{E}( N_{t+\delta t}^k - N^k_t) = \delta t\cdot \lambda^k;$$ \item the waiting time needed to consume a volume $v$ added at price $S$ at time $t$ on the $k$th trading destination is denoted $\Delta T_t^k(v)$; it is implicitly defined by $$\Delta T_t^k(v)=\min\{\tau \::\: N_{t+\tau}^k - N^k_t \geq I_t^k + v\}.$$ \end{itemize} Assuming that there is no specific toxicity in the available trading platforms, a trader would like to split an incoming order of size $V_t$ at time $t$ according to an allocation key $(r^1,\ldots,r^K)$ chosen to minimize the waiting-time criterion \begin{equation} \label{eq:10} {\cal C}(r^1,\ldots,r^K) = \mathbb{E} \max_k\{ \Delta T_t^k(r^k V_t)\}. \end{equation} This means that the trader aims at optimizing the following process: \begin{enumerate} \item[(1)] an order of size $V_{\tau(u)}$ is to be split at time $\tau(u)$, the set of order arrival times being ${\cal S}=\{\tau(1),\ldots ,\tau(n),\ldots\}$; \item[(2)] it is split over the $K$ available trading venues thanks to an `allocation key', $\mathbf{R}=(r^1,\ldots,r^K)$: a portion $r^kV_{\tau(u)}$ is sent to the $k$th order book (all the quantity is spread, i.e. $\sum_{k\leq K} r^k=1$); \item[(3)] the trader waits the time needed to consume all the sent orders. \end{enumerate} The criterion ${\cal C}(\mathbf{R})$ defined in (\ref{eq:10}) reflects the fact that the faster the allocation key lets us obtain liquidity, the better: the obtained key is well suited for a \emph{liquidity-seeking algorithm}. First denote by $k^*_t(\mathbf{R})$ the last trading destination to consume the order sent at $t$: $$k^*_t(\mathbf{R}) = \arg\max_k \{ \Delta T_t^k(r^k V_t)\}.$$ A gradient approach to minimizing ${\cal C}(\mathbf{R})$ requires computing $\partial \Delta T_t^u(r^u V_t)/\partial r^k$ for any pair $(k,u)$. To respect the constraint, just replace an arbitrary $r^\ell$ by $1-\sum_{u\neq \ell} r^u$. Consequently, $$\df{\Delta T_t^u(r^u V_t)}{r^k}=\df{\Delta T_t^k(r^k)}{r^k} \,{1\!\!1}_{k^*_t(\mathbf{R})=k} + % \df{\Delta T_t^\ell(r^\ell)}{r^k} \,{1\!\!1}_{k^*_t(\mathbf{R})=\ell} $$ where ${1\!\!1}$ is the indicator function. With the notation $\Delta N^k_t = N^k_t - N^k_{t-}$, any allocation key $\mathbf{R}$ satisfying, for any pair $(\ell, k)$, \begin{equation} \label{eq:11} \mathbb{E}\left( V_t\cdot D^k_t(r^k)\cdot {1\!\!1}_{k^*_t(\mathbf{R})=k}\right) = % \mathbb{E}\left( V_t\cdot D^\ell_t(r^\ell)\cdot {1\!\!1}_{k^*_t(\mathbf{R})=\ell}\right), \end{equation} where $$D_t^k(r^k)=\frac{1}{\Delta N^k_{t+\Delta T_t^k(r^k)}}\,{1\!\!1}_{\left(\Delta N^k_{t+\Delta T_t^k(r^k)}>0\right)},$$ is a potential minimizer of the criterion ${\cal C}(\mathbf{R})$ (the proof of this result will not be provided here).
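Under the Poisson assumptions above, the criterion (\ref{eq:10}) is straightforward to estimate by Monte Carlo, since the waiting time for the $m$-th arrival of a Poisson process of intensity $\lambda$ is Gamma$(m,1/\lambda)$ distributed. The following Python sketch (the intensities, queue sizes and keys are hypothetical values of ours) makes the criterion concrete before we turn to its first-order conditions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def fast_end_criterion(r, lam, I, V, n_mc=20_000):
    """Monte Carlo estimate of E max_k DeltaT^k(r^k V): the expected time
    until the slowest venue has consumed its slice r^k V behind queue I^k."""
    r, lam, I = map(np.asarray, (r, lam, I))
    m = np.ceil(I + r * V).astype(int)       # arrivals needed per venue
    T = rng.gamma(shape=m, scale=1.0 / lam, size=(n_mc, len(m)))
    return T.max(axis=1).mean()

lam = np.array([12.0, 6.0, 3.0])             # sell-flow intensities
I = np.array([40.0, 15.0, 10.0])             # resting quantities ahead of us
V = 60.0
print(fast_end_criterion([1/3, 1/3, 1/3], lam, I, V))     # uniform key
print(fast_end_criterion([0.55, 0.28, 0.17], lam, I, V))  # a faster key
\end{verbatim}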
Equation (\ref{eq:11}) can also be written as $$\mathbb{E}\left( V_t\cdot D^k_t(r^k)\cdot {1\!\!1}_{k^*_t(\mathbf{R})=k}\right) = \frac{1}{K}\sum_{\ell=1}^K % \mathbb{E}\left( V_t\cdot D^\ell_t(r^\ell)\cdot {1\!\!1}_{k^*_t(\mathbf{R})=\ell}\right) .$$ It can be shown (see \cite{citeulike:6053468} for generic results of this kind) that the asymptotic solutions of the following stochastic algorithm on the allocation weights through time, \begin{eqnarray} \nonumber \forall k,\,r^k(n+1) &=& r^k(n) - \gamma_{n+1} \,\left( V_{\tau(n)}\cdot D^k_{\tau(n)}(r^k(n))\cdot {1\!\!1}_{k^*_{\tau(n)}(\mathbf{R}(n))=k} - \phantom{\sum_{\ell=1}^K}\right.\\ \label{eq:12} &&\left.\qquad\qquad\qquad \frac{1}{K}\sum_{\ell=1}^K V_{\tau(n)}\cdot D^\ell_{\tau(n)}(r^\ell(n))\cdot {1\!\!1}_{k^*_{\tau(n)}(\mathbf{R}(n))=\ell} \right),\qquad~ \end{eqnarray} minimize the expected fast end criterion ${\cal C}(\mathbf{R})$, provided that strong enough ergodicity assumptions hold for the multidimensional process $(V,(N^k)_{1\leq k\leq K},(I^k)_{1\leq k\leq K})$. Qualitatively, this update rule means that if a trading venue $k$ demands more time to execute the fraction of the volume that it receives (taking into account the combination of $I$ and $N$) than the average waiting time over all venues, then the fraction $r^k$ of the orders sent to $k$ has to be decreased for future use.
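A self-contained Python sketch of update rule (\ref{eq:12}) on simulated Poisson flows is given below. Under the Poisson assumption, exactly one sell order arrives at the completion instant, so $D^k=1$ and the update only needs to identify the slowest venue; the step size, the projection back onto the simplex, and all parameter values are ad hoc illustration choices of ours, not part of the cited convergence result.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

def sor_weights(lam, I_mean, n_orders=5000, V=60.0, g0=1e-3):
    """Stochastic approximation of the allocation key: the venue that
    finishes last is penalized, all venues share the average term."""
    K = len(lam)
    r = np.full(K, 1.0 / K)                    # start from the uniform key
    for n in range(1, n_orders + 1):
        I = rng.poisson(I_mean)                # queues found in front of us
        m = np.ceil(I + r * V).astype(int)     # arrivals needed per venue
        dT = rng.gamma(shape=m, scale=1.0 / lam)
        kstar = int(np.argmax(dT))             # last venue to finish
        grad = np.full(K, -V / K)              # average term of (12)
        grad[kstar] += V                       # the slowest venue k*
        r -= (g0 / n) * grad                   # decreasing step size
        r = np.clip(r, 0.0, None)
        r /= r.sum()                           # stay on the simplex
    return r

lam = np.array([12.0, 6.0, 3.0])
I_mean = np.array([40.0, 15.0, 10.0])
print(sor_weights(lam, I_mean))    # weight drifts away from slow venues
\end{verbatim}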
\section{Perspectives and Future Work}\label{lehalle_sec5} The needs of intra-day trading practitioners are currently focused on optimal execution and trading risk control. Improvements on what is currently available have certainly been proposed by academics, in particular: \begin{itemize} \item providing optimal trading trajectories that take into account \emph{multiple trading destinations} and different types of orders: liquidity-providing (i.e. limit) ones and liquidity-consuming (i.e. market) ones; \item the \emph{analysis of trading performances} is also an important topic; models are needed to understand which parts of the performance and risk are due to the planned scheduling, the interactions with order books, the market impact and the market moves; \item \emph{stress testing}: before executing a trading algorithm in real markets, we must understand its dependence on different market conditions, from volatility or momentum to bid--ask spread or trading frequency. The study of the `Greeks' of the payoff of a trading algorithm is not straightforward since it is inside a closed loop of liquidity: its `psi' should be its derivative with respect to the bid--ask spread, its `phi' with respect to the trading frequency, and its `lambda' with respect to the liquidity available in the order book. For the special case of the portfolio liquidation studied in this chapter (using the payoff $ {\tilde C}_\lambda$ defined by equality~(\ref{eq:8})), these trading Greeks would be: $$\Psi=\left(\frac{\partial {\tilde C}_\lambda}{\partial \psi_\ell}\right)_{1\leq \ell\leq N},\quad % \Phi= \frac{\partial {\tilde C}_\lambda}{\partial N},\quad % \Lambda = \frac{\partial {\tilde C}_\lambda}{\partial \kappa} .$$ \end{itemize} Progress in the above three directions will provide a better understanding of the price formation process and of the whole cycle of asset allocation and hedging, taking into account execution costs, closed loops with the markets, and portfolio trajectories at any scale. \paragraph{Acknowledgments} Most of the data and graphics used here come from the work of the Cr\'edit Agricole Cheuvreux Quantitative Research group.
\section{INTRODUCTION} NGC~1068 is perhaps the best studied Seyfert 2 galaxy, with X-ray observations by {\it EXOSAT}, {\it GINGA}, {\it ROSAT}, {\it BBXRT} and {\it ASCA} (for references and a general description see Marshall {\it et al.}\ 1993). The most striking X-ray feature is the strong (equivalent width EW $\simeq 3$~keV) \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ complex that is split into three components corresponding to ``neutral'' (6.4--6.5 keV), He-like (6.7 keV) and H-like (6.96 keV) iron lines. The large EW is probably the result of the obscured central continuum and the directly viewed line-producing gas (Krolik \& Kallman 1988). This iron line complex was first resolved in {\it BBXRT} data (Marshall {\it et al.}\ 1993) and was later studied by Ueno {\it et al.}\ (1994) and Iwasawa, Fabian and Matt (1997). Marshall {\it et al.}\ (1993) suggested a two-component model to explain the X-ray emitting gas in NGC~1068. The first is a ``warm'' component with a typical temperature of 2$\times 10^5$ K, and the second a more highly ionized ``hot'' component, with T$\sim 4 \times 10^6$ K. Both components are ionized by the central X-ray source and co-exist, in pressure equilibrium, throughout the nucleus. Both components reflect the optical-ultraviolet-X-ray continuum, and the broad Balmer lines, and are thus the ``electron scattering mirror'' in this source. That analysis showed that the iron abundance, as calculated from the EW of the 6.4 keV line, is high, 2--3 times solar. The absence of a detectable O\,{\sc viii}\,653~eV line has been interpreted, within the framework of the model, as an indication of a very small O/H. The implication is that the O/Fe abundance ratio is very small, an order of magnitude below solar. Ueno {\it et al.}\ (1994) measured a very weak O\,{\sc viii}\,653~eV line and confirmed the {\it BBXRT} measurements of the \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ lines. These authors took a different approach and modeled the observed \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ lines, as well as several softer lines, by a thermal plasma. Their underlying assumption is that collisionally ionized gas is producing the observed emission lines. However, such models require abnormally low metallicity since the predicted soft X-ray lines are otherwise much stronger than those observed. Marshall {\it et al.}\ also noted the extreme weakness of the \OIIIb\ ultraviolet line in this galaxy and suggested that the oxygen abundance anomaly extends to the cooler gas in the narrow line region (NLR). The ultraviolet spectrum has been investigated in more detail by Netzer (1997), who reached a similar conclusion about the O/C and O/N abundance ratios. We have undertaken a more detailed study of the spectrum of NGC~1068, aiming at better constraints on the soft X-ray lines and the chemical composition of the X-ray emitting gas. We have used the recent {\it ASCA} data set, as described in \S2, and measured many soft X-ray lines. We have modeled the gas in various ways, as discussed in \S3, and the findings are discussed in \S4. \section{DATA SELECTION AND ANALYSIS} NGC~1068 was observed by {\it ASCA} on 1993 July 25 with an on-source time of $\sim 39$ ksec. The {\it ASCA}\ satellite and instruments are described in Tanaka, Inoue \& Holt (1994). In summary, four co-aligned, grazing-incidence, foil-mirror telescopes direct X-rays simultaneously onto four focal-plane instruments.
There are two CCD detectors -- the Solid-state Imaging Spectrometers (SIS) -- and two gas-scintillation proportional counters -- the Gas Imaging Spectrometers (GIS). NGC~1068 was observed with the SISs in 4-CCD mode, with data accumulated in both 'FAINT' and 'BRIGHT' telemetry modes. As NGC~1068 has a relatively low count rate in the SIS, the superior resolution available in 'FAINT' mode could not be utilized. Therefore the FAINT and BRIGHT mode data were combined for the analysis presented here. We extracted the ``raw'' event files from the {\it ASCA} archive and used these as a starting point for our analysis. These files were created from the original telemetry data and have been corrected to produce linearized detector coordinates, gain-corrected pulse-height values and sky coordinates determined from the spacecraft aspect solution. We applied data selection and cleaning algorithms using FTOOLS/XSELECT v3.5. Data were rejected by removing 'hot' and 'flickering' pixels in the SISs; removing data accumulated during passages through the South Atlantic Anomaly; imposing a minimum geomagnetic rigidity of 6 GeV/$c$; removing data accumulated when the angle from the Earth's limb was less than 20$^{\circ}$ during orbit-day and less than $10^{\circ}$ (SIS) or $5^{\circ}$ (GIS) during orbit-night; restricting SIS data to event 'GRADES' 0, 2, 3 and 4; and rejecting data taken within 200 seconds after crossing the day/night terminator. We combined the {\tt LOW}, {\tt MEDIUM} and {\tt HIGH} bit-rate data. Application of these screening criteria gave a mean effective exposure time of $\sim$39 ks in the GIS instruments and $\sim 10$ ks in the SIS instruments. The SIS exposures were significantly lower because of necessary differences in the selection criteria for the two instruments: SIS data in 4-CCD mode are particularly prone to telemetry saturation during periods of LOW bit-rate, which caused data dropouts and hence a reduction in useful exposure time. Images were extracted from the screened and cleaned data from all instruments, and region descriptors were defined for the extraction of light curves and spectra. For the two SIS instruments, we used a 3~arcmin circle centered on NGC~1068 with the background taken towards the edge of the same CCD chip. For the two GIS instruments, we used a circular extraction cell of 5~arcmin radius centered on NGC~1068 with the background taken in a nearby source-free region. The {\it ASCA}\ light curves reveal no significant flux variability across the observation, and thus we consider only the mean spectrum in this paper. Data from both pairs of SIS and GIS instruments were analyzed together, but with the normalization of each data set allowed to vary relative to the others (since there are small discrepancies in the absolute flux calibrations of the detectors). We used the SIS response matrices released in November 1994, and the GIS response matrices released in March 1995. \section{MEASUREMENT AND ANALYSIS} \subsection{Empirical Fits} Inspection of the {\it ASCA}\ spectrum clearly shows a hard continuum source, strong \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ lines, several lower energy emission features and a soft 0.5--3 keV continuum component (Marshall {\it et al.}\ 1993; Ueno {\it et al.}\ 1994). As argued by Wilson {\it et al.}\ (1992), the soft 0.5--3 keV emission is from an extended source, which contributes at least half the flux at those energies. The likely explanation is an extra-nuclear starburst region.
This would imply that the soft X-ray central source is weaker than the one assumed in Marshall {\it et al.}\ (see the extensive discussion by Pier {\it et al.}\ 1996). Our approach in this work is to first measure the soft X-ray lines in a way which is independent of any model, and then to compare those measurements with the predictions of several specific models. The first step is to fit the 0.6--10 keV continuum and all significant emission features that do not correspond to known detector features. The procedure assumes that the observed continuum can be approximated by two smooth functions, such as two power-laws or an absorbed power-law and a thermal continuum. There are several combinations that produce reasonable fits and we do not attach great significance to the chosen functions since they do not represent physical entities. We accept any combination that fits the overall underlying spectrum well. The soft component was allowed to vary freely in the fit, while the hard component slope was allowed to range between 1.5 and 1.7. Fig. 1 shows an example where the high energies are fitted by a power-law whose soft X-ray part is absorbed, and the low energies are fitted by a second, unabsorbed power-law. In this example, the hard component photon slope is $\Gamma=1.6$ ($N(E) \propto E^{-\Gamma}$) and the soft component slope is $\Gamma=3.4$. Other combinations give equally good fits, where in all cases the hard X-ray photon slope is between 1.5 and 1.7. \begin{figure} \epsfxsize\hsize \epsfbox{figure1_X.ps} \caption{ {\it ASCA}\ X-ray spectrum and fitted model for NGC~1068. The various model components are shown at the top and the residuals (data/model) at the bottom.} \end{figure} Next, we have added a large number of narrow gaussian lines as free parameters. These include the strongest H-like and He-like lines of all elements in ION and about 20 Fe-L lines. A large number of those are crowded in a small energy range, especially over the ranges 0.7--0.9 and 1--1.2 keV, which contain a plethora of iron-L lines. The {\it ASCA}\ resolution at those energies is not sufficient to model individual lines and we have fitted these regions by broad gaussians, where the widths and centroid energies were chosen based on the predicted shapes of the line blends. Some line pairs (e.g. the H-like and He-like magnesium lines) are also close in energy and while we attempt to fit them individually, in the error analysis (see below) we consider only the combined strength. The total number of broad and narrow gaussians, including the 3 iron \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ lines, is 15. Fig. 1 shows one of the best fits and Table 1 lists the line intensities corresponding to it.
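The fits themselves were performed in {\sc XSPEC} with the instrument response matrices; purely as a schematic illustration of this ``continuum plus narrow gaussians'' parameterization (on synthetic data, with hypothetical values -- not our actual fitting procedure), the idea can be written in a few lines of Python:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def model(E, norm, gamma, *lines):
    """Power-law continuum norm * E**(-gamma) plus narrow gaussian lines;
    `lines` packs (flux, centroid_keV, width_keV) triples."""
    y = norm * E ** (-gamma)
    it = iter(lines)
    for flux, E0, w in zip(it, it, it):
        y += flux / (w * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((E - E0) / w) ** 2)
    return y

# synthetic spectrum: continuum plus the three iron K-alpha components
E = np.linspace(5.0, 8.0, 300)
truth = (1.0, 1.6, 0.9, 6.4, 0.05, 0.5, 6.7, 0.05, 0.3, 6.96, 0.05)
rng = np.random.default_rng(4)
data = model(E, *truth) * (1 + 0.03 * rng.standard_normal(E.size))

p0 = (1.0, 1.5, 0.5, 6.4, 0.05, 0.5, 6.7, 0.05, 0.5, 6.96, 0.05)
popt, pcov = curve_fit(model, E, data, p0=p0)
# confidence intervals then follow from stepping chi^2, as described below
\end{verbatim}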
\begin{table}[t] \caption{Observed and calculated X-ray line intensities} \renewcommand{\arraystretch}{0.75} \begin{tabular}{lcccc} \hline \hline Line & Observed flux\footnote{One flux unit corresponds to $7.4 \times 10^{-13}$ ergs cm$^{-2}$ s$^{-1}$.} & Model 1\footnote{Low oxygen, break at 1 keV.} & Model 2\footnote{Solar oxygen, break at 0.5 keV.} & 90\% confidence interval \\ \hline \OVII & & 0.47 & 0.54 & \\ O\,{\sc viii} 653 eV & $<0.4$ & 0.30\footnote{The calculated EW, relative to the observed continuum, is 23 eV.} & 0.26 & $<0.4$ \\ \FeLa & 0.98 & 0.42 & 0.13 & \\ \NeIX & 0.61 & 0.36 & 0.09 & \\ total 0.72--0.92 keV & 1.11 & 0.78 & 0.22 & 1.08--2.1 \\ \NeX & 0.32 & 0.08 & 0.05 & \\ \FeLb & 0.35 & 0.18 & 0.18 & \\ total 1.0--1.2 keV & 0.67 & 0.26 & 0.23 & 0.47--0.86 \\ \MgXI & 0.09 & 0.08 & 0.04 & \\ \MgXII & 0.0 & 0.03 & 0.03 & \\ total 1.34--1.47 keV & 0.12 & 0.11 & 0.07 & 0.07--0.17 \\ \SiXIII & 0.16 & 0.07 & 0.02 & \\ \SiXIV & 0.03 & 0.05 & 0.05 & \\ total 1.85--2.01 keV & 0.19 & 0.12 & 0.07 & 0.11--0.28 \\ S\,{\sc i}--S\,{\sc x} 2.35 keV & 0.07 & 0.02 & 0.02 & \\ \SXV & 0.07 & 0.02 & 0.01 & \\ \SXVI & 0.02 & 0.04 & 0.04 & \\ total 2.35--2.62 keV & 0.16 & 0.08 & 0.07 & 0.02--0.32 \\ \ArXVII & 0.04 & 0.01 & 0.01 & \\ \ArXVIII & 0.06 & 0.02 & 0.02 & \\ total 3.1--3.31 keV & 0.10 & 0.03 & 0.03 & 0.01--0.19 \\ Fe\,{\sc i}--Fe\,{\sc xvi} 6.4 keV & & 0.5 & 0.5 & \\ Fe\,{\sc xvii}--Fe\,{\sc xxiii} 6.5 keV & & 0.36 & 0.15 & \\ Fe\,{\sc i}--Fe\,{\sc xxiii} 6.4--6.5 keV & 0.86 & 0.86 & 0.65 & 0.63--1.14 \\ \FeXXV & 0.54\footnote{The EW of the 6.7 keV line is 1.05 keV.} & 0.50 & 0.5 & 0.29--0.83 \\ \FeXXVI & 0.29 & 0.30 & 0.29 & 0.09--0.52 \\ \hline \end{tabular} \end{table} Error estimates are carried out separately for the lines and the continuum. The continuum uncertainties are not very important since the acceptable range of slopes is rather small (less than 0.2 dex). As for the lines, we have estimated the 90\% confidence interval for each line. The $\chi^2$ step which corresponds to the 90\% error range for any particular line depends upon the number of free parameters which are interdependent with that line's normalization. This in turn depends on the relative proximity of other lines, and on where the line falls in the spectrum. The line and line-blend estimates are listed in Table 1. Next we introduce various physical models that represent some combinations of photoionized and collisionally ionized plasmas. \subsection{Models Involving Photoionized Gas} We have investigated the possibility that the observed soft X-ray lines are due to photoionized gas in the nucleus of NGC~1068. We have tried this idea in two different ways: by looking at line intensities in a specific, two-component photoionization model, and by scanning a grid of models in a search for the best combination of ionization parameter and column density that fits the {\it ASCA} spectrum. The specific photoionization model assumes that all emission lines originate in two distinct photoionized gas components, a highly ionized component (hereafter the ``hot'' component) and a moderate ionization component (hereafter the ``warm'' component). Regarding the continuum, the assumption is that there is a ``hard'' (1--10 keV), obscured nuclear part, typical of Seyfert galaxies, and a ``soft'' ($0.5<E<3$ keV) extended part which is of unknown origin and plays no part in exciting the nuclear gas. Thus, the observed continuum is made out of three components: 1. The reflection of the nuclear continuum by the hot photoionized gas. 2.
The reflection of the nuclear continuum by the ``warm'' photoionized gas, and 3. The extended, directly observed continuum. The soft component contributes a negligible amount above 5 keV. Thus, the normalization is such that the two reflected components (in about equal amounts, see below) fit the observed {\it ASCA} and {\it GINGA} high energy continuum, and that the combination of all three gives a good representation of the overall continuum energy distribution. The spectral energy distribution (SED) of the central continuum is similar to the one used by Marshall {\it et al.}\ (1993), except for the soft X-ray part. It is made of an infrared-optical-UV broken power-law that fits the overall energy budget of NGC 1068 and an X-ray power-law of photon slope $\Gamma=1.5$ that fits the 1--50 keV continuum. The X-ray part is joined smoothly to the UV part below 1 keV. The resulting \aox, without taking into account the soft extended continuum, is 1.6. The nuclear continuum produces about 15\% of the total flux at 1 keV and the rest is assumed to be extended (i.e. the 15\% is the fraction of the reflected nuclear source out of the total, reflected plus directly observed extended flux). We have also tried to extend the nuclear component, with the same slope, to lower energies (0.5 keV). As shown below, this alleviates the oxygen abundance anomaly but introduces other difficulties. The exact optical-UV continuum properties are not very important for the purpose of the present discussion but do influence the observed ultraviolet lines (see Netzer 1997). We have used the photoionization code ION (Netzer 1996 and references therein) to model the X-ray ionized gas in NGC~1068. The model inputs are the assumed central continuum, the density and column density, the X-ray ionization parameter (as defined by the 0.1--10 keV photon flux, see Netzer 1996) and the abundances. We have tested two possible compositions. The first is motivated by the suspected unusual oxygen and iron abundances and assumes the following dust-free composition:\\ H:He:C:N:O:Ne:Mg:Si:S:Ar:Fe= $10^{4}$:10$^3$:3.4:1.1:1.5:1:0.35:0.35:0.16:0.07:1.2, i.e. solar except for iron, which is three times solar, and oxygen, which is 0.2 solar. As shown below, this is in good agreement with the observations. The second is the same except that O/H is solar (8$\times 10^{-4}$). The photoionization model which is compared with our line measurements is similar to the one presented by Marshall {\it et al.}\ (1993). The warm component column density is 5$\times 10^{22}$ \hbox{cm$^{-2}$}\ with an illuminated-face density of N$_{\rm H}=700$ \hbox{cm$^{-3}$}, and the hot component column is 1.9$\times 10^{22}$ \hbox{cm$^{-2}$}, with an illuminated-face density of 3100 \hbox{cm$^{-3}$}. In both models ${\rm N_H \propto R^{-1.5}}$, which is required to explain the large spatial extent of the gas (the optical ``mirror''). The illuminated-face X-ray ionization parameters are 3.5 and 310, for the warm and hot components respectively (for the physical dimensions see Marshall {\it et al.}\ 1993). We have not introduced any expansion motion but assumed local microturbulence corresponding to the local sound speed. This has some minor effects on the observed spectrum since, in this reflection-only geometry, continuum fluorescence can influence some observed line intensities (Krolik and Kriss 1995; Netzer 1996). Having defined the gas density, location and abundances, we have calculated the expected X-ray spectrum of the two components.
The normalization of the reflected and emitted spectra of the hot and warm components (i.e. the flux unit in Table 1) is such that the calculated \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ lines match the observed values (Table 1, Model 1). The two separate components, and the resulting composite spectrum, are shown in Fig. 2. Also shown is a comparison of the model (reflected nuclear continuum) with the observations (reflected plus the directly observed extended continuum). \begin{figure} \epsfxsize\hsize \epsfbox{figure2.ps} \caption{ Calculated warm (bottom), hot (middle) and combined (top) photoionized gas spectra for NGC~1068 assuming low O/H and a continuum break at 1 keV. The strongest emission lines are marked and the observed continuum is shown at the top as a dashed line. The difference between the observed and calculated continua is attributed to the extended component. } \end{figure} As evident in Fig. 2, the warm gas component exhibits a rich soft X-ray spectrum, composed of numerous lines and edges, and strong ``neutral'' \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ lines at 6.4--6.5 keV. The strongest soft X-ray features are \NeIX, \MgXI, \SiXIII\ and several Fe-L lines. There is a noticeable broad absorption feature, centered around 1.5 keV, which is due to the combined opacity of neon and iron. Oxygen opacity is not important because of the low assumed oxygen abundance. The same is true for the strength of the O\,{\sc viii}\,653~eV line which, with the assumed composition, is consistent with the measured upper limit. The volume-averaged gas temperature in this component is about 1.1$\times 10^5$ K and the highest (illuminated-face) temperature is 1.5$\times 10^5$ K. The hot component spectrum is of much higher ionization and temperature (1.5--3.9$\times 10^6$ K). This gas produces strong He-like and H-like iron lines, and the strongest soft X-ray lines are \NeX, \MgXII, \SiXIV, \SXVI\ and a few Fe-L lines. Table 1 gives the calculated line intensities, normalized as explained, and compares them with the measurements. It shows that the assumed continuum shape, ionization parameters and metallicity produce a good fit to the observed spectrum. We have searched for ways to eliminate the need for the abnormally low O/H and found that it depends strongly on the 0.5--1 keV incident flux. We have therefore tried a fit assuming a nuclear source with a $\Gamma=1.5$ slope covering the entire 0.5--50 keV range, i.e. much weaker than the previously guessed continuum below 1 keV. In this case, oxygen is less ionized and the O\,{\sc viii}\,653~eV equivalent width is consistent with the observed upper limit for solar O/H. However, the model suffers from several inconsistencies. First, the calculated 0.7--0.8 keV Fe-L line fluxes are well below their observed intensity, despite the large assumed Fe/H. Second, some other lines, most notably \NeIX, \MgXI\ and \SiXIII, are much below their observed flux. Thus, in this case, the abundances of neon, magnesium and silicon are all problematic. As seen in Table 1, the overall fit in this case is much less satisfactory. The second approach is to calculate a grid of hot and warm photoionization models, to combine them with an additional, soft continuum component, and to use a minimization technique to search for the combination that best fits the observed spectrum.
This three-component fit is chosen from a grid of models (presented as {\it atables} in the fitting routine {\it XSPEC}) covering a range of column densities and X-ray ionization parameters, and assuming a single-slope 0.5--50 keV continuum. The soft excess is modeled as thermal bremsstrahlung emission (to avoid emission lines, see below) with the temperature as the only free parameter. The photoionized gas composition is identical to the one shown above. Using these components, we find a reasonably good fit with a kT=0.44 keV (T=5.1$\times 10^6\,$K) plasma, column densities of 10$^{22.4}$ \hbox{cm$^{-2}$}\ (warm) and 10$^{22.6}$ \hbox{cm$^{-2}$}\ (hot), and X-ray ionization parameters of 1.3 (warm) and 95 (hot). These ionization parameters are measured at the illuminated face and are thus slightly above the volume-averaged ionization parameters of the ${\rm N_H \propto R^{-1.5}}$ models. The reduced $\chi ^2$ is 1.37 for 509 degrees of freedom. We do not show this fit since it is similar in quality, and in its major features, to the one shown in Fig. 1. We also note that allowing a little freedom in the assumed composition can significantly improve the fit. We avoid this additional complication since, with the limited {\it ASCA} capability, it adds nothing to the understanding of the source. \subsection{Models Involving Collisionally Ionized Gas} Our next model is meant to test the hypothesis that most of the soft {\it line and continuum} flux is due to the extended, starburst region. The purpose is to test the required composition and the overall 0.6--3 keV continuum shape, not to model individual lines. We have modeled the soft spectrum by a hot plasma ``meka'' model. The meka model describes an emission spectrum from hot diffuse gas based on the model calculations of Mewe \& Gronenschild (1985), Mewe, Lemen, \& van den Oord (1986), and Kaastra (1992). The model inputs are the plasma temperature, hydrogen density and composition. Meka includes line emission from C, N, O, Ne, Na, Mg, Al, Si, S, Ar, Ca, Fe and Ni. We assumed a gas density of 10$^3$ \hbox{cm$^{-3}$}\ and allowed the plasma temperature and composition to vary. We added a $\Gamma=1.5$ continuum and a broad Gaussian 6.5 keV line to fit the hard X-ray emission and to enable a reasonable parameterization of the overall X-ray spectrum, and hence achieve a meaningful $\chi ^2$ minimization. The best fit is obtained with kT=0.59 keV and a metallicity of 0.043 solar. We have measured the line emission between 0.6 and 1 keV, which represents most of the line flux in our best fit model. The measured equivalent width (EW) of the line blend, relative to the 0.59 keV continuum, is about 400 eV, i.e. similar to the total emission line EW measured by our two photoionization models. This explains why models with higher metallicity gave unacceptable fits: they produce too much line emission. This is of great importance to NGC 1068, and to starbursts in general, as discussed in the following section. \section{DISCUSSION} Our data analysis and model fitting show clear evidence for several strong soft X-ray lines in the spectrum of NGC~1068. We have measured those with a typical uncertainty of a factor of two. The EWs are large compared with those predicted for Seyfert 1 galaxies (Netzer 1996). This indicates either collisionally ionized plasma or else photoionized gas seen against a reflected continuum. Below we examine the consequences of both, as well as the new model by Iwasawa {\it et al.}\ (1997).
\subsection{Photoionization models} The two-component photoionized gas model, with small O/H, large Fe/H and a continuum break at 1 keV, nicely reproduces the observed line intensities in NGC 1068, given the uncertainties. The Marshall {\it et al.}\ (1993) result of large Fe/H is confirmed by our fitting of the \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ complex as well as by the measured intensity of the Fe-L lines. Given this continuum, the observed upper limit on the O\,{\sc viii}\,653~eV line intensity suggests that O/H is smaller than about 0.25 solar, again in agreement with Marshall {\it et al.}\ The abundances of all other metals are within a factor of two of solar. Reducing the nuclear flux by moving the break to 0.5 keV considerably changes the conclusion about O/H. This is now consistent with solar, but other lines are in bad agreement with the observations. In particular, we could not find a satisfactory fit with solar Ne/H, Mg/H and Si/H. The key future observation is the \OVII\ line, which is predicted to be strong in both cases (see Table 1). For example, the line ratios \OVII/\NeIX\ and O\,{\sc viii}\,653~eV/\NeIX\ could be used to confront the two possibilities. AXAF has the capability to observe these ratios. The unusual gas composition required by our photoionized gas models is a severe theoretical problem, since a large enhancement of all metals, including oxygen, is expected in the central regions of evolved galaxies. It is possible that the poor calibration of the {\it ASCA}\ detectors below 0.6 keV, and the low signal-to-noise of {\it BBXRT}, are the origins of the discrepancy. Alternatively, the models shown here do not represent the physical conditions in NGC~1068. However, the presence of a rich spectrum of other metal lines, all consistent with each other, hints at some anomaly in the O/Ne, O/Mg, O/Si and O/S ratios, and in particular in O/Fe. We note again the unusually weak \OIIIb\ line in this galaxy, compared with the intensity of the semi-forbidden lines of nitrogen and carbon. As explained in Netzer (1997), this is not observed in four other Seyfert galaxies with measurable ultraviolet narrow lines, and is consistent with low O/N and O/C. Millimeter observations of this galaxy also hint at an unusual composition (Sternberg, Genzel, and Tacconi 1995). The warm photoionized gas component in NGC 1068 is similar in column density and ionization parameter to many ``warm absorber'' systems observed in Seyfert 1 galaxies (see George {\it et al.}\ 1997 for references and a review). It is interesting to examine the spectrum of the present warm component when observed against a bare central continuum, since the Seyfert 1 systems are normally assumed to originate in gas which is much closer to the central source. We have examined this possibility and found some differences as well as various dynamical implications. The issue has been addressed by Krolik and Kriss (1995) and Netzer (1996) and is beyond the scope of the present paper. Our photoionization model requires a third, pure continuum component to explain the 0.5--3 keV excess. As discussed below, the origin of this component is unclear. \subsection{Collisionally Ionized Plasma} We have attempted to fit the soft X-ray flux by hot plasma models with variable composition. Our single-temperature solution requires an extremely low (0.04 solar) metallicity. A similar difficulty was encountered by Ueno {\it et al.}\ (1994), who attempted a multiple-component fit to the spectrum.
Their best solution is composed of two hot plasma components, one of which has 0.03 solar abundance. The origin of this problem is the large EW of the lines associated with such a hot plasma. We do not consider such models adequate for NGC 1068, or for other evolved regions in galactic nuclei: no optical observations support such a low metallicity, and we know of no theoretical model that would explain it. We note that Pier {\it et al.}\ (1996) suggested that much of the soft excess in NGC 1068 is due to bremsstrahlung emission from the optical ``mirror''. This radiation is specifically included in our calculation and, as seen in Fig. 2, cannot explain the soft excess. The apparent composition anomaly is common to other starburst galaxies. As shown by Serlemitsos, Ptak and Yaqoob (1995), several other sources show soft X-ray lines which, when fitted with multi-component hot plasma models, indicate very low metallicity. Interesting examples are shown by Ptak {\it et al.}\ (1997), who studied M82 and NGC 253. Their deduced metallicities are 10--30\% solar. We have remeasured the M82 {\it ASCA} data used by Ptak {\it et al.}\ and found several soft X-ray lines with typical EWs in the range of 50--100 eV, i.e. similar to those observed in NGC 1068. In the context of hot plasma models, the lines indicate very low metallicity, similar to the extended X-ray source in NGC 1068. The most likely explanation is that another, pure continuum component is contributing at those energies. Ptak {\it et al.}\ suggested inverse Compton emission or accretion-driven point sources, possibly X-ray binaries. They also suggested depletion by dust, which would help explain the low iron abundance. Given all the unknowns, we cannot be sure that most of the soft X-ray lines in NGC 1068 are due to photoionized gas. NGC 1068 differs significantly from starburst galaxies by showing large EW low-ionization, H-like and He-like iron lines. The latter two are most probably due to high temperature photoionized gas, and one must consider the consequences for the production of other lines. Given solar metallicity, such gas must also produce strong argon, sulphur and silicon lines as well as some magnesium, neon and Fe-L emission (see Fig. 2). Most of these are in the 0.6--3 keV range, where a contribution from the extended component is suspected. This makes the simple hot plasma explanation even more questionable. Another important difference is the intensity of the Fe-L lines. The Ptak {\it et al.}\ (1997) observations of M82 and NGC~253 clearly show these lines to be weaker than computed in solar metallicity hot plasma models. We find the lines, which in our case are produced by recombination rather than collisions, consistent with higher-than-solar metallicity. \subsection{Reflection by the central torus} An alternative explanation to the warm photoionized component was recently proposed by Iwasawa {\it et al.}\ (1997). These authors suggested that the 6.4 keV iron line originates in the Compton thick ``walls'' of the central obscuring torus. The model is different from ours in two major ways. First, because of the Compton thick gas, there is more absorption, and hence almost no reflection below about 3 keV (see their Fig. 3). All the X-ray flux below this energy is either due to the hot photoionized gas or to the extended emission. Additional manifestations of the Compton thick medium are an extended low energy wing on the 6.4 keV line and very strong 7.1 keV absorption.
While we clearly identify some flux excess at energies below 6.4 keV, with an EW of about 100 eV, our fit does not require any 7.1 keV absorption. However, the poor signal-to-noise prevents us from reaching a firm conclusion on this point. As for the low-energy wing, its EW is of the same order as expected from a relativistically broadened nuclear line. If the continuum and line are both scattered, the relativistic disk component would have the same equivalent width with respect to that continuum, with $\sim 100$~eV EW in the line wing itself (cf. Nandra {\it et al.}\ 1997), as it does when it is directly observed. Thus an alternative explanation is that the X-ray mirror reflects both the central line and the continuum. Finally, this explanation would require an even larger iron abundance, since many of the produced 6.4 keV photons are absorbed by the Compton thick gas. The second major difference between our model and Iwasawa {\it et al.}\ is the origin of the nuclear mirror. This mirror is known to be extended and covers a sizable fraction of the NLR. In our model, the scattering of the optical broad lines, and of the optical-UV continuum, is due to both the warm and hot components. The amount of reflection is consistent with the required column density and covering fraction (see Marshall {\it et al.}\ 1993 for details). In the Iwasawa {\it et al.}\ (1997) model, only the hot component is extended, and contributes to the reflection, while the warm X-ray gas occupies a very small ($\sim 1$ pc) region. Since very hot gas would broaden the reflected BLR Balmer lines beyond recognition, the model requires another, yet unknown, medium to explain the extended mirror. Iwasawa {\it et al.}\ (1997) also suggested a small redshift of the He-like and H-like iron \ifmmode {\rm K}\alpha \else K$\alpha$\fi\ lines. We find a satisfactory fit for those lines with narrow Gaussians at the systemic velocity. \subsection{Low-Z fluorescence lines} A comment on the intensity of some soft X-ray metal lines is in order. The fluorescence yield of low ionization species is a strongly increasing function of the atomic number. Thus, fluorescence excitation of low-Z metals, such as oxygen, neon and magnesium, is usually thought to be negligible. Our calculations clearly show that this is not the case. The reason is the steep high energy continuum of AGN in general, and of the NGC~1068 continuum in particular. While the yield is indeed small, the K-shell excitation energy of low-Z metals is much lower than in high-Z species, and the much larger photon flux more than compensates for the lower yield. We predict that future X-ray experiments of sufficiently high resolution, such as the grating instruments on AXAF and XMM, will discover relatively strong fluorescence lines of low ionization (i.e. lower than Li-like) oxygen, neon, magnesium, silicon and sulphur. Acknowledgements: We are grateful to Ian George, Richard Mushotzky, Paul Nandra, Amiel Sternberg and Andy Ptak for useful comments and discussion. This research is supported by a grant of the Israel Science Foundation. We acknowledge the financial support of the Universities Space Research Association (TJT). The analysis was performed using {\sc XSELECT} (version 1.3) and {\sc XSPEC} (version 9). This research made use of data obtained through the HEASARC on-line service, provided by NASA/GSFC. \newpage
\section{Introduction} \label{sec:introduction} In this note we consider the fractional Sobolev inequality \begin{equation} \label{eq:1} \|u\|_{s/2}^2 \geq \cS \left(\int_{\mathbb{R}^{N}}|u|^{q}dx\right)^{\frac{2}{q}} \qquad \text{for all $u\in \mathring{H}^{\frac{s}{2}}(\mathbb{R}^{N})$,} \end{equation} where $0<s<N$, $q=\frac{2N}{N-s}$, and $\mathring{H}^{\frac{s}{2}}(\mathbb{R}^{N})$ is the space of all tempered distributions $u$ such that $$ \hat u \in L^1_{loc}(\mathbb{R}^N) \qquad \text{and} \qquad \|u \|_{s/2}^2 := \int_{\mathbb{R}^{N}}|\xi|^s|\hat u|^2 \,d\xi < \infty. $$ Here, as usual, $\hat u$ denotes the (distributional) Fourier transform of $u$. The best Sobolev constant \begin{equation} \label{eq:21} \cS=\cS(N,s)= 2^{s} \pi^{\frac{s}{2}} \frac{\Gamma(\frac{N+s}{2})}{ \Gamma(\frac{N-s}{2})} \Bigl(\frac{\Gamma(\frac{N}{2})}{\Gamma(N)}\Bigr)^{s/N}, \end{equation} i.e., the largest possible constant in (\ref{eq:1}), was first computed in the special case $s=2$, $N=3$ by Rosen \cite{Rosen} and then independently by Aubin \cite{Aubin} and Talenti \cite{Talenti} for $s=2$ and all dimensions $N$. For general $s \in (0,N)$, the best constant has been given by Lieb \cite{L} for an equivalent reformulation of inequality (\ref{eq:1}), the (diagonal) Hardy-Littlewood-Sobolev inequality. In order to discuss this equivalence in some more detail, we note that \begin{equation} \label{eq:2} \|u \|_{s/2}^2 = \int_{\mathbb{R}^{N}}u(-\Delta)^{s/2}u \,dx \end{equation} for every Schwartz function $u$, where the operator $(-\Delta)^{s/2}$ is defined by $$ \widehat {(-\Delta)^{s/2} u}(\xi)=|\xi|^{s} \widehat u (\xi) \qquad \text{for a.e. $\xi \in \mathbb{R}^N$.} $$ Moreover, $\mathring{H}^{\frac{s}{2}}(\mathbb{R}^{N})$ is also given as the completion of the smooth functions with compact support under the norm $\|\cdot \|_{s/2}$. The (diagonal) Hardy-Littlewood-Sobolev inequality states that \begin{equation} \label{eq:4} \Bigl|\int_{\mathbb{R}^N} \int_{\mathbb{R}^N} \frac{f(x) g(y)}{|x-y|^\lambda}\,dx\,dy\Bigr| \le \pi^{\lambda/2} \frac{\Gamma(\frac{N-\lambda}{2})}{\Gamma(N-\frac{\lambda}{2})} \Bigl(\frac{\Gamma(N)}{\Gamma(N/2)}\Bigr)^{1-\frac{\lambda}{N}} |f|_p |g|_p \end{equation} for all $f,g \in L^p(\mathbb{R}^N)$, where $0<\lambda<N$ and $p=\frac{2N}{2N-\lambda}$. Here and in the following, we let $|\cdot|_r$ denote the usual $L^r$-norm for $1 \le r\le \infty$. The equivalence of (\ref{eq:1}) and (\ref{eq:4}) follows -- by a duality argument -- from the fact that for every $f \in L^{\frac{q}{q-1}}(\mathbb{R}^N)$ there exists a unique solution $(-\Delta)^{-s/2}f \in \mathring{H}^{\frac{s}{2}}(\mathbb{R}^{N})$ of the equation $(-\Delta)^{s/2} u = f$ given by convolution with the Riesz potential, i.e., by \begin{equation} \label{eq:5} [(-\Delta)^{-s/2}f](x) = 2^{-s}\pi^{-\frac{N}{2}} \frac{\Gamma(\frac{N-s}{2})}{\Gamma(s/2)}\int_{\mathbb{R}^N} \frac{1}{|x-y|^{N-s}}f(y)\,dy \qquad \text{for a.e. $x \in \mathbb{R}^N$.} \end{equation} In \cite{L}, Lieb identified the extremal functions for (\ref{eq:4}), and his results imply that equality holds in (\ref{eq:1}) for nontrivial $u$ if and only if $u$ is contained in an $(N+2)$-dimensional submanifold $\cM$ of $\H$ given as the set of functions which, up to translation, dilation and multiplication by a nonzero constant, coincide with \begin{equation} \label{eq:6} U \in \H, \qquad U(x)=(1+|x|^2)^{-\frac{N-s}{2}}. 
\end{equation} For the special case $s=2$, i.e., the first order Sobolev inequality, Brezis and Lieb \cite{BL} asked the question whether a remainder term -- proportional to the quadratic distance of the function $u$ to the manifold $\cM$ -- can be added to the right hand side of (\ref{eq:1}). This question was answered affirmatively in the case $s=2$ by Bianchi and Egnell \cite{BE}, and their result was extended later to the case $s=4$ in \cite{LW} and to the case of an arbitrary even positive integer $s<N$ in \cite{BWW}. The main purpose of the present note is to obtain a corresponding remainder term inequality for all (real) values $s \in (0,N)$. Our main result is the following. \begin{theorem} \label{maintheorem} Let \begin{equation} \label{eq:7} \cM:= \Bigl\{ c\, U\bigl(\frac{\cdot - x_0}{\varepsilon}\bigr) \,:\, c \in \mathbb{R} \setminus \{0\}, x_0 \in \mathbb{R}^N, \varepsilon>0\Bigr \} \: \subset \: \H, \end{equation} where $U$ is defined in (\ref{eq:6}). Then there exists a positive constant $\alpha$ depending only on the dimension $N$ and $s\in(0,N)$ such that \begin{equation} \label{eq:10} {d}^{2}(u, {\cM})\geq\int_{\mathbb{R}^{N}}u(-\Delta)^{s/2}(u)dx- \cS \left(\int_{\mathbb{R}^{N}}|u|^{q}dx\right)^{\frac{2}{q}}\geq \alpha\, {d}^{2}(u, \cM) \end{equation} for all $u\in \H$, where ${d}(u, {\cM})=\min\{\|u-\varphi\|_{s/2}\::\: \varphi\in{\cM} \}.$ \end{theorem} We briefly explain the strategy to prove this remainder term inequality, which goes back to Bianchi and Egnell \cite{BE} in the case $s=2$. First, the inequality is proved in a small neighborhood of the optimizer $U \in \cM$ defined in (\ref{eq:6}). Considering a second order Taylor expansion of the difference functional $$ u \mapsto \Phi(u):=\|u\|_{s/2}^2 - \cS \left(\int_{\mathbb{R}^{N}}|u|^{q}dx\right)^{\frac{2}{q}}, $$ at $U$, it is not difficult to see that (\ref{eq:10}) holds in a neighborhood of $U$ with some $\alpha>0$ if and only if the second derivative $\Phi''(U)$ is positive definite on the $(N+2)$-codimensional normal space to the manifold $\cM$ at $U$. This normal non-degeneracy property is the crucial step in the argument. Once inequality (\ref{eq:10}) is established in a neighborhood of $U$, it extends to a neighborhood of the whole manifold $\cM$ as a consequence of the conformal invariance of all terms in (\ref{eq:10}). We will recall this conformal invariance in detail in Section~\ref{sec:preliminaries} below. Finally, to obtain the global version of (\ref{eq:10}), a concentration compactness type argument is applied to show that normalized sequences $(u_n)_n$ in $\H$ with $\Phi(u_n) \to 0$ as $n \to \infty$ satisfy $d(u_n, \cM) \to 0$ as $n \to \infty$.\\ The general idea described here had already been used in \cite{BE,LW,BWW}, but the proofs of the normal non-degeneracy property in these papers strongly rely on the assumption that $s$ is an even positive integer, in which case the eigenvalue problem for $\Phi''(U)$ can be written as a differential equation. In particular, ODE arguments are used to study the radial part of the corresponding eigenvalue problem. This method does not apply for general $s \in (0,N)$. On the other hand, one may observe that the eigenvalue problem has a much simpler form once inequality (\ref{eq:10}) is pulled back on the unit sphere $\mathbb{S}^N \subset \mathbb{R}^{N+1}$ via stereographic projection. The equivalent version of Theorem~\ref{maintheorem} on $\mathbb{S}^N$ is given in Theorem~\ref{maintheorem-reform} below. 
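For concreteness, we note that the sharp constant (\ref{eq:21}) is elementary to evaluate numerically. The following minimal Python sketch (ours, for illustration only; it assumes SciPy is available) evaluates $\cS(N,s)$ directly from the Gamma-function formula and checks it against the classical first order value $\cS(3,2)=3(\pi/2)^{4/3}$ of Rosen, Aubin and Talenti, which follows from (\ref{eq:21}) by elementary Gamma-function identities.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def sobolev_constant(N, s):
    # Sharp constant S(N, s) of the fractional Sobolev inequality,
    # evaluated from the explicit Gamma-function formula.
    assert 0 < s < N
    return (2**s * np.pi**(s / 2) * gamma((N + s) / 2) / gamma((N - s) / 2)
            * (gamma(N / 2) / gamma(N))**(s / N))

# Classical case s = 2, N = 3: both lines print ~5.4779
print(sobolev_constant(3, 2))
print(3 * (np.pi / 2)**(4 / 3))
\end{verbatim}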
The idea of studying (\ref{eq:1}) in its equivalent form on ${\mathbb{S}}^N$ also goes back to Lieb's paper \cite{L}, where the (equivalent) Hardy-Littlewood-Sobolev inequality was considered. Afterwards it has been applied in many related problems dealing with Sobolev type inequalities and corresponding Euler-Lagrange equations, see e.g. \cite{BSW,D,M,Beckner:93} and the references therein. To our knowledge, its usefulness for identifying remainder terms has not been noted so far.\\ About twenty years after the seminal work of Bianchi and Egnell \cite{BE}, the topic of remainder terms in first order Sobolev inequalities (and isoperimetric inequalities) has again attracted a lot of attention in recent years. The recent works use techniques from symmetrization (see, e.g., \cite{CFMP,FMP}), optimal transportation (see, e.g., \cite{FiMaPr}), and fast diffusion (see, e.g., \cite{Do,DoTo,JiXi}); see also \cite{CaFi} for a recent application of remainder terms. However, while these new methods lead to explicit constants and allow one to treat non-Hilbertian Sobolev norms, the estimates for the remainder terms are typically weaker than in the result of Bianchi and Egnell. It is not clear to us to which extent the symmetrization and the optimal transportation approach can be extended to give remainder terms in the higher order case or in the case of arbitrary real powers of the Laplacian (see \cite{JiXi} for a fast diffusion approach in the fractional case). We therefore think it is remarkable that the original strategy of Bianchi--Egnell can be generalized to the full family of conformally invariant Hilbertian Sobolev inequalities.\\ As a corollary of Theorem~\ref{maintheorem}, we also derive a remainder term inequality for the function space $\mathring{H}^{\frac{s}{2}}(\Omega)$ which -- for a subdomain $\Omega \subset \mathbb{R}^N$ -- is defined as the completion of $\cC_0^\infty(\Omega)$ with respect to the norm $\|\cdot\|_{s/2}$. In the case where $\Omega$ has a continuous boundary, we have $$ \mathring{H}^{\frac{s}{2}}(\Omega)= \{u \in H^{\frac{s}{2}}(\Omega)\::\: \text{$\tilde u \in \H$}\}, $$ where $\tilde u$ denotes the trivial extension of a function $u \in H^{\frac{s}{2}}(\Omega)$ to $\mathbb{R}^N$. We also recall that, for $1 < r < \infty$, the weak $L^r$-norm of a measurable function $u$ on $\Omega$ is given by $$ |u|_{w,r,\Omega}= \sup_{\stackrel{A \subset \Omega}{|A|>0}}|A|^{\frac{1}{r}-1} \int_A |u|\,dx, $$ see e.g. \cite{hunt}. \begin{theorem} \label{maincorollary} Let, as before, $q=\frac{2N}{N-s}$. Then there exists a constant $C>0$ depending only on $N$ and $s \in (0,N)$ such that for every domain $\Omega \subset \mathbb{R}^N$ with $|\Omega|<\infty$ and every $u \in \mathring{H}^{\frac{s}{2}}(\Omega)$ we have \begin{equation} \label{eq:32} \|u\|_{s/2}^2 - \cS \left(\int_{\Omega}|u|^{q}dx\right)^{\frac{2}{q}} \ge C |\Omega |^{-\frac{2}{q}} |u|_{w,q/\! 2,\Omega}^2. \end{equation} \end{theorem} For fixed bounded domains $\Omega \subset \mathbb{R}^N$, the existence of a weak $L^{q/2}$-remainder term is due to Brezis and Lieb \cite{BL} in the case $s=2$ and to Gazzola and Grunau \cite{GG} in the case of an arbitrary even positive integer $s<N$. Bianchi and Egnell \cite{BE} gave an alternative proof in the case $s=2$ using the corresponding special case of inequality (\ref{eq:10}). We will follow similar ideas in our proof of Theorem~\ref{maincorollary}, using Theorem~\ref{maintheorem} in full generality. 
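For orientation we remark (this observation is not needed in the sequel) that the weak norm is dominated by the corresponding strong norm: by H\"older's inequality, for every measurable $A \subset \Omega$ with $|A|>0$,
$$
|A|^{\frac{1}{r}-1}\int_A |u|\,dx \:\le\: |A|^{\frac{1}{r}-1}\, |A|^{1-\frac{1}{r}} \Bigl(\int_A |u|^r\,dx\Bigr)^{\frac{1}{r}} \:\le\: |u|_{L^r(\Omega)},
$$
so that $|u|_{w,r,\Omega} \le |u|_{L^r(\Omega)}$. The right hand side of (\ref{eq:32}) therefore controls a quantity weaker than the full $L^{q/2}(\Omega)$-norm.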
We note that some additional care is needed to get a remainder term which only depends on $|\Omega|$ and not on $\Omega$ itself. The paper is organized as follows. In Section~\ref{sec:preliminaries} we recall the conformal invariance of the problem, and we discuss the framework for an equivalent version of Theorem~\ref{maintheorem} on the sphere $\mathbb{S}^N \subset \mathbb{R}^{N+1}$, see Theorem~\ref{maintheorem-reform}. In Section~\ref{sec:proof-remainder-term} we prove this theorem, thus completing the proof of Theorem~\ref{maintheorem}. In Section~\ref{sec:weak-lq2-remainder} we give the proof of Theorem~\ref{maincorollary}. We conclude by pointing out the open problem of finding an explicit constant $\alpha>0$ in (\ref{eq:10}) via a constructive proof of Theorem~\ref{maintheorem}. For a local version of Theorem~\ref{maintheorem}, where the right hand side of (\ref{eq:10}) is replaced by $\alpha d^2(u,\cM) + o(d^2(u,\cM))$ and only $u \in \H$ with $d(u,\cM)<\|u\|_{s/2}$ is considered, the best constant is $\alpha= \frac{2s}{N+s+2}$. This follows from Proposition~\ref{sec:proof-remainder-term-1} below. \section{Preliminaries} \label{sec:preliminaries} In the following, we will denote the scalar product in $\H$ by $$ \langle u,v \rangle_{s/2} = \int_{\mathbb{R}^N} |\xi|^s \hat u(\xi) \overline{\hat v(\xi)}\,d\xi, $$ so that $\|u\|_{s/2}^2 = \langle u,u \rangle_{s/2}$ for $u \in \H$. In the remainder of this section, $0<s<N$ is fixed and we abbreviate $q= 2N/(N-s)$. We recall that the group of conformal transformations on $\mathbb{R}^N$ is generated by translations, rotations, dilations and the inversion $x \mapsto \frac{x}{|x|^2}$. If $h$ is one of these transformations with Jacobian determinant $J_h$, then for any functions $u,v \in \mathring{H}^{\frac{s}{2}}(\mathbb{R}^{N})$ we have $J_h^{\frac{1}{q}} u \circ h, J_h^{\frac{1}{q}} v \circ h \in \mathring{H}^{\frac{s}{2}}(\mathbb{R}^{N})$ and \begin{equation} \label{eq:8} \langle J_h^{\frac{1}{q}} u \circ h , J_h^{\frac{1}{q}} v \circ h \rangle_{s/2}=\langle u,v \rangle_{s/2}. \end{equation} This property is a consequence of the conformal covariance of the operator $(-\Delta)^{s/2}$, i.e. of the equality \begin{equation} \label{eq:11} (-\Delta)^{s/2} (J_h^{\frac{1}{q}} u \circ h) = J_h^{\frac{N+s}{2N}} [(-\Delta)^{s/2} u] \circ h \end{equation} for all conformal transformations $h$ on $\mathbb{R}^N$ and all Schwartz functions $u$. As stated in \cite[Proposition 2.1]{M}, (\ref{eq:11}) is most easily derived by considering the inverse operator $(-\Delta)^{-s/2}$ given in (\ref{eq:5}). Indeed, the identity \begin{equation} \label{eq:12} (-\Delta)^{-s/2} (J_h^{\frac{N+s}{2N}} u \circ h) = J_h^{\frac{1}{q}} [(-\Delta)^{-s/2} u] \circ h \end{equation} is equivalent to (\ref{eq:11}), and it can be verified case by case for dilations, rotations, translations and the inversion. For instance, for a dilation $h(x)=\lambda x$ with $\lambda>0$ one has $J_h \equiv \lambda^N$ and $\widehat{u \circ h}(\xi)=\lambda^{-N}\hat u(\xi/\lambda)$, and a direct computation shows that both sides of (\ref{eq:12}) have Fourier transform $\lambda^{\frac{s-N}{2}}|\xi|^{-s}\hat u(\xi/\lambda)$. In the latter form related to the Riesz potential, the conformal covariance had already been used by Lieb in \cite{L}. Note that, if $h$ is a conformal transformation on $\mathbb{R}^N$, it follows from (\ref{eq:8}) that the map $u \mapsto J_h^{\frac{1}{q}} u \circ h$ preserves distances with respect to the norm $\|\cdot\|_{s/2}$, i.e. 
we have \begin{equation} \label{eq:9} \|J_h^{\frac{1}{q}} u \circ h - J_h^{\frac{1}{q}} v \circ h\|_{s/2}=\| u -v \|_{s/2} \qquad \text{for all $u,v \in \H$.} \end{equation} Since the set $\cM$ is also invariant under the transformations $u \mapsto J_h^{\frac{1}{q}} u \circ h$, we conclude that $d(J_h^{\frac{1}{q}} u \circ h, \cM)=d(u,\cM)$ for all $u \in \H$. We also note that \begin{equation} \label{eq:3} |J_h^{\frac{1}{q}} u \circ h|_{q} = |u|_{q} \qquad \text{for any $u \in L^q(\mathbb{R}^N)$} \end{equation} and any conformal transformation $h$ on $\mathbb{R}^N$, which follows by an easy computation. In the following, we consider the inverse stereographic projection $$ \pi: \mathbb{R}^N \to \mathbb{S}^{N} \subset \mathbb{R}^{N+1}, \qquad \pi(x)=(\frac{2x}{1+|x|^2},\frac{1-|x|^2}{1+|x|^2}). $$ We recall that $\pi$ is a conformal diffeomorphism. More precisely, if $g_{\mathbb{R}^N}$ denotes the flat euclidean metric on $\mathbb{R}^N$ and $g_{\mathbb{S}^{N}}$ denotes the metric induced by the embedding $\mathbb{S}^{N} \subset \mathbb{R}^{N+1}$, then the pullback of $g_{\mathbb{S}^{N}}$ to $\mathbb{R}^N$ satisfies \begin{equation} \label{conformfactor} {\pi}^*g_{\mathbb{S}^{N}}=\frac{4}{(1+|\cdot|^2)^2}g_{\mathbb{R}^N}. \end{equation} Moreover, the corresponding volume element is given by \begin{equation} \label{eq:13} J_\pi(x) dx = \Bigl(\frac{2}{1+|x|^2}\Bigr)^N dx. \end{equation} For a function $v:{\mathbb{S}}^N \to \mathbb{R}$, we may now define $$ \cP v: \mathbb{R}^N \to \mathbb{R}, \qquad [\cP v](x)= J_\pi(x)^{\frac{1}{q}} v(\pi(x)) = \Bigl(\frac{2}{1+|x|^2}\Bigr)^{\frac{N-s}{2}} v(\pi(x)). $$ From (\ref{eq:13}), it is easy to see that $\cP$ defines an isometric isomorphism between $L^q(\mathbb{S}^N)$ and $L^q(\mathbb{R}^N)$. We also note that \begin{equation} \label{eq:15} \cP\, 1 = 2^{(N-s)/2} U, \end{equation} where $1$ stands for the unit function on $\mathbb{S}^N$ and $U$ is defined in (\ref{eq:6}). Moreover, $H^{\frac{s}{2}}(\mathbb{S}^{N})$ is the completion of the space of smooth functions on ${\mathbb{S}}^N$ under the norm $\|\cdot \|_{*}$ induced by the scalar product $$ (u,v) \mapsto \langle u,v \rangle_*= \langle \cP u, \cP v \rangle_{s/2}. $$ We will always consider $H^{\frac{s}{2}}(\mathbb{S}^{N})$ with the norm $\|\cdot\|_*$ induced by this scalar product (for matters of convenience, we suppress the dependence on $s$ at this point). Hence, by construction, $$ \text{$\cP$ is also an isometric isomorphism $(H^{\frac{s}{2}}(\mathbb{S}^{N}),\|\cdot\|_*) \to (\H,\|\cdot\|_{s/2})$.} $$ Next we note that $\langle \cdot,\cdot \rangle_*$ is the quadratic form of a unique positive self-adjoint operator in $L^2(\mathbb{S}^N)$ which is commonly denoted by $A_s$ in the literature. This operator is formally given by $$ [A_s w] \circ \pi = J_\pi^{-\frac{N+s}{2N}} (-\Delta)^{s/2} (\cP w). $$ A key ingredient of the proof of Theorem~\ref{maintheorem} is the following representation of $A_s$ as a function of the Laplace-Beltrami operator $\Delta_{{\mathbb{S}}^N}$ on ${\mathbb{S}}^N$: \begin{equation} \label{eq:14} A_s= \frac{\Gamma(B+\frac{1+s}{2})}{\Gamma(B+\frac{1-s}{2})} \qquad \text{with $B= \sqrt{-\Delta_{{\mathbb{S}}^N} + \bigl(\frac{N-1}{2}\bigr)^2}$.} \end{equation} This formula is most easily derived by considering the inverse of $A_s$ and using the Funk-Hecke formula, see \cite{Beckner:93} and also \cite{M}. It also shows that the domain of $A_s$ coincides with $H^s({\mathbb{S}}^N)$. The following statement is a mere reformulation of (\ref{eq:14}). 
\begin{proposition} \label{lemma1} The operator $A_{s}$ is self-adjoint and has compact resolvent. Its spectrum is given as the sequence of eigenvalues $$ \lambda_k(s)= \frac{\Gamma(\frac{N+s}{2}+k)}{\Gamma(\frac{N-s}{2}+k)}, \qquad k \in \mathbb{N}_0, $$ and the eigenspace corresponding to the eigenvalue $\lambda_k(s)$ is spanned by the spherical harmonics $Y_{k,j},\, j= 1,\dots, {k+N \choose N} - {k+N-2 \choose N}$, of degree $k$. \end{proposition} Next, we note that, via the isometric isomorphism $\cP$, inequality (\ref{eq:1}) is equivalent to \begin{equation} \label{eq:16} \|u\|_*^2 \geq \cS |u|_q^2 \qquad \text{for all $u\in H^{\frac{s}{2}}(\mathbb{S}^{N})$,} \end{equation} with $q=\frac{2N}{N-s}$. Here, in accordance with the previous notation, we also write $|\cdot|_r$ for the $L^r$-norm of a function in $L^r(\mathbb{S}^N)$, $1 \le r \le \infty$. Equality is attained in (\ref{eq:16}) for nontrivial $u$ if and only if $u \in \cM_*$, where $$ \cM_*:= \cP^{-1}(\cM)= \{v \in H^{\frac{s}{2}}(\mathbb{S}^{N})\::\: \cP v \in \cM\}. $$ Moreover, the remainder term inequality (\ref{eq:10}) is equivalent to \begin{equation} \label{eq:17} {d}^{2}(u, {\cM_*})\geq \|u\|_*^2- \cS|u|_q^{2}\geq \alpha\, {d}^{2}(u, \cM_*)\quad \text{for $u\in {H}^{s/2}(\mathbb{S}^{N})$}, \end{equation} where ${d}(u, {\cM}_*)=\min\{\|u-\varphi\|_{*}\::\:\varphi\in{\cM_*} \}.$ We may therefore reformulate Theorem~\ref{maintheorem} as follows. \begin{theorem} \label{maintheorem-reform} There exists a positive constant $\alpha$ depending only on the dimension $N$ and $s\in(0,N)$ such that (\ref{eq:17}) holds. \end{theorem} We will prove Theorem~\ref{maintheorem-reform} in Section~\ref{sec:proof-remainder-term} below, thus completing the proof of Theorem~\ref{maintheorem}. We close this section with some comments on the conformal invariance of the reformulated problem and the geometry of $\cM_*$. Via stereographic projection, the conformal transformations on ${\mathbb{S}}^N$ are in one-to-one correspondence with the conformal transformations on $\mathbb{R}^N$. So, if $\tau$ is an element of the conformal group of ${\mathbb{S}}^{N}$ with Jacobian determinant $J_\tau$, then (\ref{eq:3}) and (\ref{eq:8}) imply that \begin{equation} \label{eq:3_*} \langle J_\tau^{\frac{1}{q}} u \circ \tau, J_\tau^{\frac{1}{q}} v \circ \tau \rangle_{*} = \langle u,v \rangle_* \qquad \text{and}\qquad |J_\tau^{\frac{1}{q}} u \circ \tau|_{q} = |u|_{q} \end{equation} for all $u,v \in H^{\frac{s}{2}}(\mathbb{S}^{N})$. From (\ref{eq:15}), we deduce the representation $$ \cM_*=\{c\,J_{\tau}^{\frac{1}{q}}\ |\ \tau\ \text{is an element of the conformal group of}\ \mathbb{S}^{N},\ c \in \mathbb{R} \setminus \{0\} \}. $$ Since the Jacobian determinant $J_\tau$ of a conformal transformation $\tau$ on ${\mathbb{S}}^N$ has, up to a positive constant factor, the form $J_{\tau}(\xi)=(1-\xi\cdot\theta)^{-N}$ for some $\theta\in B^{N+1}:=\{x \in \mathbb{R}^{N+1}\::\: |x|<1\}$, $\cM_*$ can be viewed as an $(N+2)$-dimensional smooth manifold embedded in $H^{\frac{s}{2}}(\mathbb{S}^{N})$ via the mapping \begin{eqnarray} \mathbb{R} \setminus \{0\} \times B^{N+1}\, \to\, H^{\frac{s}{2}}(\mathbb{S}^{N}),\qquad (c, \theta)\,\mapsto\, u_{c,\theta}, \end{eqnarray} where $u_{c,\theta}(\xi)=c(1-\xi\cdot\theta)^{-\frac{N-s}{2}}$ for $\xi \in {\mathbb{S}}^N$. 
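As an illustration (ours), the spectral data of Proposition~\ref{lemma1} are straightforward to tabulate numerically; the last line below checks the elementary identity $1-\lambda_1(s)/\lambda_2(s)=\frac{2s}{N+s+2}$, which is precisely the optimal local constant appearing in Proposition~\ref{sec:proof-remainder-term-1} below.
\begin{verbatim}
from math import comb
from scipy.special import gamma

def lam(k, N, s):
    # Eigenvalues of A_s: lambda_k = Gamma((N+s)/2 + k) / Gamma((N-s)/2 + k)
    return gamma((N + s) / 2 + k) / gamma((N - s) / 2 + k)

def mult(k, N):
    # Multiplicity: dimension of the space of degree-k spherical harmonics
    # on S^N; math.comb(n, k) returns 0 when k > n, covering k = 0, 1.
    return comb(k + N, N) - comb(k + N - 2, N)

N, s = 4, 1.5
print([(k, lam(k, N, s), mult(k, N)) for k in range(4)])
print(1 - lam(1, N, s) / lam(2, N, s), 2 * s / (N + s + 2))  # agree
\end{verbatim}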
The parametrization $(c,\theta) \mapsto u_{c,\theta}$ immediately implies that the tangent space $T_{1}\cM_*$ at the function $1=u_{1,0}$ is generated by the spherical harmonics $Y_0^0=1$ and $Y_1^j$, $j=1,\dots,N+1$, given by $$ Y_1^j(\xi)= \xi_j \qquad \text{for $\xi=(\xi_1,\dots,\xi_{N+1}) \in \mathbb{S}^N \subset \mathbb{R}^{N+1}$.} $$ Hence $T_{1}\cM_*$ coincides precisely with the generalized eigenspace of the operator $A_s$ corresponding to the eigenvalues $\lambda_0(s)$ and $\lambda_1(s)$. Combining this fact with the minimax characterization of the eigenvalue $\lambda_2(s)$, we readily deduce that \begin{equation} \label{eq:30} \lambda_2(s)=\mathop{\text{inf}}_{v\in T_{1}\cM_*^\perp}\frac{\|v\|_*^2}{|v|_2^2} \end{equation} with \begin{equation} \label{eq:31} T_{1}\cM_*^\perp:= \{v \in H^{\frac{s}{2}}(\mathbb{S}^{N})\::\: \langle v, w \rangle_* =0 \; \text{for all $w \in T_{1}\cM_*$}\}. \end{equation} The identity (\ref{eq:30}) will be of crucial importance for the local verification of (\ref{eq:17}) close to the manifold $\cM_*$. \section{Proof of the remainder term inequality on the sphere} \label{sec:proof-remainder-term} We first prove a local variant of Theorem~\ref{maintheorem-reform}. \begin{proposition} \label{sec:proof-remainder-term-1} For all $u\in H^{\frac{s}{2}}(\mathbb{S}^{N})$ with $d(u,\cM_*)<\|u\|_*$, we have \begin{equation} \label{eq:29} d^2(u,\cM_*) \ge \|u\|_*^2 -\cS |u|_q^2 \geq \frac{2s}{N+s+2} d^{2}(u,\cM_*)+o(d^{2}(u,\cM_*)). \end{equation} \end{proposition} \begin{proof} We consider the functional \begin{equation} \label{eq:18} \Psi: H^{\frac{s}{2}}(\mathbb{S}^{N}) \to \mathbb{R},\qquad \Psi(u)= \|u\|_*^2- \cS |u|_q^2. \end{equation} It is easy to see that $\Psi$ is of class $\cC^2$ on $H^{\frac{s}{2}}(\mathbb{S}^{N}) \setminus \{0\}$. Moreover, \begin{equation} \label{eq:19} \Psi'(u)v = 2 \langle u,v \rangle_*- 2 \cS |u|_q^{2-q} \int_{{\mathbb{S}}^N} |u|^{q-2}u v\,d\xi \end{equation} and \begin{align} \nonumber \frac{1}{2}\Psi''(u)(v,w) = \langle v,w \rangle_*-&\cS (2-q) |u|_q^{2-2q} \int_{{\mathbb{S}}^N} |u|^{q-2}u v\,d\xi\, \int_{{\mathbb{S}}^N} |u|^{q-2}u w\,d\xi\\ -&\cS (q-1) |u|_q^{2-q} \int_{{\mathbb{S}}^N} |u|^{q-2}v w\,d\xi \label{eq:20} \end{align} for $u \in H^{\frac{s}{2}}(\mathbb{S}^{N}) \setminus \{0\}$, $v,w \in H^{\frac{s}{2}}(\mathbb{S}^{N})$.\\ Next, let $u\in H^{\frac{s}{2}}(\mathbb{S}^{N})$ with $d(u,\cM_*)<\|u\|_*$. It is easy to see that $d(u, \cM_*)$ is achieved by some function $c J_{\tau}^{\frac{1}{q}}$ in $\cM_*$ with $c \in \mathbb{R} \setminus \{0\}$ and a conformal transformation $\tau$ on ${\mathbb{S}}^N$. Replacing $u$ with $\frac{1}{c} J_{\tau^{-1}}^{\frac{1}{q}} u \circ \tau^{-1}$ and using (\ref{eq:3_*}), we may assume that $c=1$ and $\tau= \id$, hence we may write $u=1+v$ with $v \in T_1 \cM_*^\perp$, the normal space of $\cM_*$ at $1$ defined in (\ref{eq:31}), and $d(u,\cM_*)= \|v\|_*.$ We note that $\Psi(1)=0$ and $\Psi'(1)=0$ (since the function $1$ is a global minimizer of $\Psi$). Moreover, the condition $v \in T_1 \cM_*^\perp$ in particular implies -- since $1 \in T_1\cM_*$ -- that \begin{equation} \label{eq:orthogonal} \langle 1,v \rangle_*=0 \qquad \text{and}\qquad \int_{{\mathbb{S}}^N} v\,d \xi = 0. 
\end{equation} In particular, we find that \begin{align*} \Psi(u)&=\Psi(1+v)= \|1\|_*^2+ \|v\|_*^2-\cS |1+v|_q^2 \le \|1\|_*^2+ \|v\|_*^2-\cS |\mathbb{S}^N|^{\frac{2-q}{q}} |1+v|_2^2\\ &= \|1\|_*^2+ \|v\|_*^2 -\cS |\mathbb{S}^N|^{\frac{2-q}{q}} (|\mathbb{S}^N|+|v|_2^2)= \Psi(1)+ \|v\|_*^2 - \cS |\mathbb{S}^N|^{\frac{2-q}{q}}|v|_2^2\\ &\le \|v\|_*^2= d^2(u,\cM_*), \end{align*} and this yields the first inequality in (\ref{eq:29}). Moreover, from (\ref{eq:20}) and (\ref{eq:orthogonal}) we infer that \begin{equation*} \frac{1}{2}\Psi''(1)(v,v) = \|v\|_*^2- (q-1) \cS |{\mathbb{S}}^N|^{\frac{2-q}{q}} \int_{{\mathbb{S}}^N}v^2\,d\xi. \end{equation*} A second order Taylor expansion of $\Psi$ at $1$ thus yields \begin{align*} \Psi(u)= \Psi(1+v)&= \frac{1}{2}\Psi''(1)(v,v) +o(\|v\|_*^2)\\ & =\|v\|_*^2 - (q-1) \cS |{\mathbb{S}}^N|^{\frac{2-q}{q}}|v|_2^2 +o(\|v\|_*^2). \end{align*} Using (\ref{eq:21}) and the identity $|{\mathbb{S}}^N|= 2\pi^{\frac{N+1}{2}} \Gamma(\frac{N+1}{2})^{-1}$, we find by a short computation (using the duplication formula for the Gamma function) that $$ (q-1) \cS |{\mathbb{S}}^N|^{\frac{2-q}{q}}= \frac{N+s}{N-s}\: \cS |{\mathbb{S}}^N|^{-\frac{s}{N}}= \frac{\Gamma(\frac{N+s}{2}+1)}{\Gamma(\frac{N-s}{2}+1)} = \lambda_1(s). $$ Noting moreover that $|v|^2_2 \le \frac{\|v\|_*^2}{\lambda_2(s)} $ as a consequence of (\ref{eq:30}), we conclude that $$ \Psi(u) \ge \|v\|_*^2 \Bigl( 1 - \frac{\lambda_1(s)}{\lambda_2(s)}+o(1) \Bigr)= d(u,\cM_*)^2 \Bigl( \frac{2s}{N+s+2} +o(1) \Bigr), $$ where we used that $\frac{\lambda_1(s)}{\lambda_2(s)}= \frac{\frac{N-s}{2}+1}{\frac{N+s}{2}+1}= \frac{N-s+2}{N+s+2}$. This shows the second inequality in (\ref{eq:29}). \end{proof} The next tool we need is the following property of optimizing sequences for (\ref{eq:1}). \begin{lemma} \label{sec:proof-remainder-term-2} Let $(u_m)_m \subset H^{\frac{s}{2}}(\mathbb{S}^{N}) \setminus \{0\}$ be a sequence with $\lim \limits_{m \to \infty} \frac{\|u_m\|_{*}^2}{|u_m|_q^2}= \cS$. Then $\frac{d(u_m,\cM_*)}{\|u_m\|_*} \to 0$ as $m \to \infty$. \end{lemma} \begin{proof} By homogeneity, we may assume that $\|u_m\|_*=1$ for all $m \in \mathbb{N}$, and we need to show that $d(u_m,\cM_*) \to 0$ as $m \to \infty$. We let $v_m= \cP u_m \in \H$ for $m \in \mathbb{N}$; then $\|v_m\|_{s/2}=1$ for all $m$, and \begin{equation} \label{eq:23} \frac{1}{|v_m|_q^2} \to \cS \qquad \text{as $m \rightarrow \infty$}. \end{equation} By the profile decomposition theorem of G\'erard (see \cite[Th\'eor\`eme 1.1 and Remarque 1.2]{gerard:98}), there exists a subsequence -- still denoted by $(v_m)_m$ -- and\\ $\bullet$ a sequence $(\psi_j)_j$ of functions $\psi_j \in \H$,\\ $\bullet$ an increasing sequence of numbers $l_m \in \mathbb{N}$, $m \in \mathbb{N}$,\\ $\bullet$ a double sequence of values $h_m^j \in (0,\infty)$, $m,j \in \mathbb{N}$,\\ $\bullet$ a double sequence of points $x_m^j \in \mathbb{R}^N$, $m,j \in \mathbb{N}$\\ such that \begin{align} &\Bigl|v_m- \sum_{j=1}^{l_m} \bigl(h_m^j\bigr)^{-\frac{N}{q}}\, \psi_j \bigl(\frac{\cdot - x_m^j}{h_m^j}\bigr)\Bigr|_q \to 0 \qquad \text{as $m \to \infty$}, \label{eq:25}\\ & |v_m|_{q}^q \to \sum_{j=1}^\infty |\psi_j|_{q}^q \quad \text{as $m \to \infty$} \qquad \text{and} \qquad \sum_{j=1}^\infty \|\psi_j\|_{s/2}^2 \:\le\: 1. \label{eq:26} \end{align} Combining the Sobolev inequality~(\ref{eq:1}) with (\ref{eq:26}) and using the concavity of the function $t \mapsto t^{2/q}$, we find that \begin{equation} \label{eq:28} 1 \ge \cS \sum_{j=1}^\infty |\psi_j|_{q}^2 \ge \cS \Bigl(\sum_{j=1}^\infty |\psi_j|_{q}^q\Bigr)^{2/q}= \cS \lim_{m \to \infty} |v_m|_{q}^2. \end{equation} By (\ref{eq:23}), equality holds in all steps in (\ref{eq:28}). 
The strict concavity of the function $t \mapsto t^{2/q}$ then shows that $\psi_j \equiv 0$ for all but one $j \in \mathbb{N}$, say, $j=1$, where $\cS |\psi_1|_{q}^2 = 1$ and $\|\psi_1\|_{s/2}=1$ as a consequence of (\ref{eq:26}),~(\ref{eq:28}) and the Sobolev inequality (\ref{eq:1}). Hence $\psi_1 \in \cM$, and from (\ref{eq:25}) it now follows that $$ \Bigl|v_m- \bigl(h_m^1\bigr)^{-\frac{N}{q}}\, \psi_1 \bigl(\frac{\cdot - x_m^1}{h_m^1}\bigr)\Bigr|_q \to 0 \qquad \text{as $m \to \infty$.} $$ Therefore, defining $$ \tilde v_m \in \H,\quad \tilde v_m(x)= \bigl(h_m^1\bigr)^{\frac{N}{q}} v_m(h_m^1 x + x_m^1) \qquad \text{for $m \in \mathbb{N}$,} $$ we have $\tilde v_m \to \psi_1$ in $L^q(\mathbb{R}^N)$ for $m \to \infty$, but then also $\tilde v_m \to \psi_1$ in $\H$ strongly since $\|\tilde v_m\|_{s/2}=\|v_m\|_{s/2} =1=\|\psi_1\|_{s/2}$ for all $m \in \mathbb{N}$. Consequently, $d(\tilde v_m,\cM) \to 0$. By the invariance property (\ref{eq:9}), we then have $d(v_m,\cM) \to 0$ and therefore also $d(u_m,\cM_*) \to 0$ as $m \to \infty$, since $\cP$ is an isometry. \end{proof} \begin{remark}{\rm (i) We note that we do not need the full strength of G\'erard's profile decomposition theorem. Inductively, G\'erard writes $v_m$ as an infinite sum of bubbles, see (\ref{eq:25}) and \cite{gerard:98}. For our proof it is enough to stop this procedure after the very first step. As soon as one bubble is extracted, the strict concavity of the function $t\mapsto t^{2/q}$ implies the convergence.\\ (ii) In the case where $s \in (0,N)$ is an even integer, one could also use a classical concentration compactness result of Lions instead of G\'erard's result, see \cite[Corollary 1]{Lions1}.\\ (iii) For arbitrary $s\in (0,N)$, one could also use the duality between (\ref{eq:1}) and (\ref{eq:4}) explained in the introduction and another concentration compactness result of Lions about optimizing sequences for (\ref{eq:4}), see \cite[Theorem 2.1]{Lions2}. To us it seemed more natural to use a technique directly applicable to optimizing sequences for (\ref{eq:1}).} \end{remark} With the help of Proposition~\ref{sec:proof-remainder-term-1} and Lemma~\ref{sec:proof-remainder-term-2}, we may now complete the \begin{proof}[Proof of Theorem~\ref{maintheorem-reform}] Let $u \in H^{\frac{s}{2}}(\mathbb{S}^{N})$. Since $0 \in \overline {\cM_*}$, we have $d(u,\cM_*) \le \|u\|_*$. If $d(u,\cM_*) < \|u\|_*$, then the first inequality in (\ref{eq:17}) follows from Proposition~\ref{sec:proof-remainder-term-1}, and it is trivially satisfied if $d(u,\cM_*)= \|u\|_*$. To prove the second inequality in (\ref{eq:17}) for some $\alpha>0$, we argue by contradiction. For this we assume that there exists a sequence $(u_{m})_m$ in $H^{\frac{s}{2}}(\mathbb{S}^{N})\setminus \overline{\cM_*} $ with \begin{equation} \label{eq:24} \frac{\|u_m\|_*^2-\cS |u_m|_q^2}{d^{2}(u_{m}, \cM_*)}\rightarrow 0 \quad \text{as $m\rightarrow \infty$.} \end{equation} By homogeneity we can assume that $\|u_m\|_*=1$ for all $m \in \mathbb{N}$; then $d(u_m,\cM_*) \le 1$ for all $m \in \mathbb{N}$, and therefore (\ref{eq:24}) implies that $\lim \limits_{m \to \infty}|u_m|_q^2 =\frac{1}{\cS}$. Hence Lemma~\ref{sec:proof-remainder-term-2} gives $d(u_m,\cM_*) \to 0$ as $m \to \infty$. But then Proposition~\ref{sec:proof-remainder-term-1} shows that (\ref{eq:24}) must be false. We conclude that there exists $\alpha>0$ such that $$ \|u\|_*^2-\cS |u|_q^2 \ge \alpha\, d^{2}(u,\cM_*) \qquad \text{for all $u \in H^{\frac{s}{2}}(\mathbb{S}^{N})$,} $$ as claimed. 
\end{proof} \section{The weak $L^{q/2}$ remainder term inequality for domains of finite measure} \label{sec:weak-lq2-remainder} In this section we give the proof of Theorem~\ref{maincorollary}. For this we define $$ U_{\lambda,y} \in \H,\qquad U_{\lambda,y}(x):=\lambda U(\lambda^{\frac{2}{N-s}}(x-y)) $$ for $\lambda>0$ and $y \in \mathbb{R}^N$, so that $$ \cM= \{c U_{\lambda,y}\::\: c \in \mathbb{R} \setminus \{0\}, \lambda>0, y \in \mathbb{R}^N \}. $$ It will be convenient to adjust the notation for the weak $L^{q/2}$-norm. We fix $q=\frac{2N}{N-s}$ from now on, and we write $$ |u|_{w,\Omega}= \sup_{\stackrel{A \subset \Omega}{|A|>0}}|A|^{-\frac{s}{N}} \int_A |u|\,dx $$ for the weak $L^{q/2}$-norm of a measurable function $u$ defined on a measurable set $\Omega \subset \mathbb{R}^N$. We note the following scaling property, which follows by direct computation: \begin{equation} \label{eq:33} |U_{\lambda,y}|_{w,\mathbb{R}^N}= |U_{\lambda,0}|_{w,\mathbb{R}^N} = \frac{|U|_{w,\mathbb{R}^N}}{\lambda} \qquad \text{for $\lambda>0$, $y \in \mathbb{R}^N$.} \end{equation} Similarly, for a fixed domain $\Omega \subset \mathbb{R}^N$, $u \in \mathring{H}^{\frac{s}{2}}(\Omega)$ and $\lambda>0$, define $$ \Omega_\lambda:= \lambda^{-2/(N-s)}\Omega \,\subset\, \mathbb{R}^N \qquad \text{and}\qquad u_\lambda \in \mathring{H}^{\frac{s}{2}}(\Omega_\lambda),\quad u_\lambda(x)= \lambda u(\lambda^{\frac{2}{N-s}}x). $$ Then a direct computation shows \begin{equation} \label{eq:37} |\Omega_\lambda|= \lambda^{-q}|\Omega|, \quad |u_\lambda|_{w,\Omega_\lambda}= \frac{|u|_{w,\Omega}}{\lambda} \quad \text{and}\quad d(u_\lambda,\cM)=d(u,\cM). \end{equation} Theorem~\ref{maincorollary} will follow immediately from the following proposition. \begin{proposition} \label{sec:weak-lq2-remainder-1} There exists a constant $C_0$ depending only on $N$ and $s \in (0,N)$ such that \begin{equation} \label{eq:36} |u|_{w,\Omega} \le C_0 |\Omega|^{\frac{1}{q}}\, d(u,\cM) \end{equation} for all subdomains $\Omega \subset \mathbb{R}^N$ with $|\Omega|< \infty$ and all $u \in \mathring{H}^{\frac{s}{2}}(\Omega)$. \end{proposition} \begin{proof} By the scaling properties noted in (\ref{eq:37}), it suffices to consider a subdomain $\Omega \subset \mathbb{R}^N$ with $|\Omega|=1$ in the sequel. In this case we have, by H\"older's inequality and (\ref{eq:1}), \begin{equation} |u|_{w,\Omega} \le \|u\|_{L^q(\Omega)} \le \|u\|_{L^q(\mathbb{R}^N)} \le \frac{1}{\sqrt{\cS}} \|u\|_{s/2}\quad \text{for every $u \in \H$.} \label{eq:34} \end{equation} In the following, let $\rho \in (0,1)$ be given by \begin{equation} \label{eq:39} \frac{\rho}{\sqrt{\cS} (1-\rho)}= \Bigl(|\mathbb{S}^{N-1}| \int_{1}^\infty \frac{r^{N-1}}{(1+r^2)^{N}}\,dr\Bigr)^{\frac{1}{q}}. \end{equation} Let $u \in \mathring{H}^{\frac{s}{2}}(\Omega)$. If $\rho \|u\|_{s/2} \leq d(u,\cM)$, then \begin{equation} \label{eq:38} |u|_{w,\Omega} \le \frac{1}{\rho \sqrt{\cS}} d(u,\cM) \end{equation} as a consequence of (\ref{eq:34}). So in the remainder of this proof we assume that \begin{equation} \label{eq:case} \rho \|u\|_{s/2} > d(u,\cM) \,. \end{equation} By homogeneity we may assume that $\|u\|_{s/2}=1$. Since $\rho<1$, the infimum in the definition of $d(u,\cM)$ is attained as a consequence of (\ref{eq:case}), and we have $d(u,\cM)=\| u - c U_{\lambda,y}\|_{s/2}$ for some $c\in\mathbb{R}$, $\lambda>0$ and $y\in\mathbb{R}^N$. 
Moreover, \eqref{eq:case} implies that $$ | 1 - c | = \left| \|u\|_{s/2} - \| c U_{\lambda,y} \|_{s/2} \right| \leq d(u,\cM) \leq \rho \,, $$ that is, $1-\rho\leq c\leq 1+\rho$. We note that \begin{align*} d(u,\cM)^2 & =\|u-c U_{\lambda,y}\|_{s/2}^2 \ge \cS \|u-c U_{\lambda,y}\|_{L^q(\mathbb{R}^N)}^2\\ &\ge \cS |c|^2 \|U_{\lambda,y}\|_{L^q(\mathbb{R}^N\setminus \Omega)}^2 \ge \cS (1-\rho)^2 \|U_{\lambda,y}\|_{L^q(\mathbb{R}^N\setminus \Omega)}^2 \,. \end{align*} Now let $B\subset\mathbb{R}^N$ denote the open ball centered at zero with $|B|=1$, and let $r_0>0$ denote the radius of $B$. Since the function $U$ in (\ref{eq:6}) is radial and strictly decreasing in the radial variable, the bathtub principle \cite[Theorem 1.14]{LL} implies that $$ \|U_{\lambda,y}\|_{L^q(\mathbb{R}^N\setminus \Omega)}^2 \ge \|U_{\lambda,y}\|_{L^q(\mathbb{R}^N \setminus (B+y))}^2= \|U_{\lambda,0}\|_{L^q(\mathbb{R}^N \setminus B)}^2 \,, $$ and hence \begin{equation} \|U_{\lambda,0}\|_{L^q(\mathbb{R}^N \setminus B)}^q \le \Bigl(\frac{d(u,\cM)}{\sqrt{\cS} (1-\rho)}\Bigr)^q \le \Bigl(\frac{\rho}{\sqrt{\cS} (1-\rho)}\Bigr)^q= |\mathbb{S}^{N-1}| \int_{1}^\infty \frac{r^{N-1}}{(1+r^2)^{N}}\,dr \label{eq:22} \end{equation} by our choice of $\rho$ in (\ref{eq:39}). On the other hand, we compute $$ \|U_{\lambda,0}\|_{L^q(\mathbb{R}^N \setminus B)}^q = |\mathbb{S}^{N-1}| \int_{r_0}^\infty \frac{r^{N-1} \lambda^{q}}{\big[1+(\lambda^{\frac{2}{N-s}}r)^2\big]^N}\,dr = |\mathbb{S}^{N-1}| \int_{\lambda^{\frac{2}{N-s}}r_0}^\infty \frac{r^{N-1}}{(1+r^2)^{N}}\,dr. $$ This implies that $\lambda^{\frac{2}{N-s}}r_0\geq 1$ and therefore \begin{align} \|U_{\lambda,0}\|_{L^q(\mathbb{R}^N \setminus B)}^q & = |\mathbb{S}^{N-1}| \int_{\lambda^{\frac{2}{N-s}}r_0}^\infty \frac{r^{N-1}}{(1+r^2)^{N}}\,dr \label{eq:27} \\ & \ge 2^{-N} |\mathbb{S}^{N-1}| \int_{\lambda^{\frac{2}{N-s}}r_0}^\infty\, \frac{dr}{r^{N+1}}=\frac{|\mathbb{S}^{N-1}|}{N\, (2r_0)^N}\,\lambda^{-q}. \nonumber \end{align} Combining (\ref{eq:22}) and (\ref{eq:27}), we conclude that \begin{equation} \label{eq:35} d(u,\cM) \ge \frac{C_1}{\lambda} \qquad \text{with $C_1:= \sqrt{\cS}(1-\rho)\Bigl(\frac{|\mathbb{S}^{N-1}|}{N\, (2r_0)^N}\Bigr)^{\frac{1}{q}}$.} \end{equation} Using (\ref{eq:33}), (\ref{eq:34}) and (\ref{eq:35}), we find that \begin{align*} |u|_{w,\Omega}& \le |c U_{\lambda,y}|_{w,\Omega} + |u- c U_{\lambda,y}|_{w,\Omega}\le (1+\rho) |U_{\lambda,y}|_{w,\mathbb{R}^N} + \frac{1}{\sqrt{\cS}} \|u- c U_{\lambda,y}\|_{s/2}\\ &= \frac{1+\rho}{\lambda} |U|_{w,\mathbb{R}^N} + \frac{1}{\sqrt{\cS}} d(u,\cM) \le C_2 d(u,\cM) \end{align*} with $C_2:= \frac{(1+\rho)}{C_1}|U|_{w,\mathbb{R}^N} + \frac{1}{\sqrt{\cS}}$. Combining this with (\ref{eq:38}), we thus obtain the claim with $C_0:= \max \{C_2,\frac{1}{\rho \sqrt{\cS}}\}$. \end{proof} Finally, Theorem~\ref{maincorollary} now simply follows by combining Theorem~\ref{maintheorem} and Proposition~\ref{sec:weak-lq2-remainder-1} and setting $C:=\alpha\, C_0^{-2}$, with $\alpha$ as in Theorem~\ref{maintheorem}. {\em \bf Acknowledgement.} Support through U.S. National Science Foundation grant PHY-1068285 (R.F.) and German Science Foundation (DFG) grant WE 2821/4-1 (T.W.) is acknowledged. Shibing Chen wants to thank Robert McCann for helpful discussions.
\section{Introduction} Introduced by Heisenberg in 1928, the Heisenberg statistical model of spin systems has been widely used to study phase transitions and critical phenomena in magnetic systems and strongly correlated electron systems\textsuperscript{\cite{1-Roger G.Bowers1969,2-J.F.Cooke1970,3-Freeman J.Dyson1976,4-Guang-Shan Tian1997,5-Henk W.J2002,6-R.G.Brown2006}}. Powerful tools recently developed to unravel the physics of strongly correlated many-body quantum systems provide new platforms for understanding quantum magnetism\textsuperscript{\cite{7-Fernanda2013}}. It has also been proposed to implement the Heisenberg model in other systems. For example, Pinheiro {\it et al.} demonstrated that in the Mott region, bosonic atoms in the first excited band of a two-dimensional optical lattice can realize the spin-1/2 quantum Heisenberg model\textsuperscript{\cite{7-Fernanda2013}}. Bermudez {\it et al.} introduced a theoretical scheme to simulate the XYZ model using trapped ions\textsuperscript{\cite{8-A. Bermudez2017}}. The molecular axis of a polar molecule that is subject to an external electric field oscillates within a certain angular range about the field direction, forming pendular states\textsuperscript{\cite{9-B. Friedrich1991}}. These pendular states have specific orientations that give rise to constant projections of the dipole moment along the external field, resulting in long-range anisotropic interactions via the electric dipole-dipole coupling. In a field gradient, pendular molecules can be individually addressed due to their field-dependent eigenenergies (and orientations). Moreover, the internal structure of polar molecules is much richer than that of atoms or spins, allowing much richer physics. Given these unique properties, arrays of polar molecules, in many respects resembling arrays of spins, are considered to be promising platforms for quantum computing and quantum information processing\textsuperscript{\cite{10-Book2009,11-D. DeMille2002,12-Philippe Pellegrini2011,13-Jing Zhu2013,14-Zuo-Yuan Zhang2017,15-Zuo-Yuan Zhang2020,16-Wei12011,17-Wei22011,32-Micheli,33-Charron,34-kuz,35-ni,36-lics,37-YelinDeMille,38-Wei2010,39-Wei2016,40-kang-Kuen Ni2018}}. Inspired by the similarity between spins and polar molecules, the simulation of spin models with polar molecules has attracted broad interest over the past decade\textsuperscript{\cite{18-Muller2010,19-Alexey2011,20-Bo Yan2013,21-N.Y.Yao2018,22-Haiyuan Zou2017,22+-Kaden2013,M. L. Wall,A. V. Gorshkov}}. M{\"u}ller described in his thesis the details of how to realize the spin-1/2 XXZ model as well as the t-J model with ultra-cold polar molecules trapped in an optical lattice\textsuperscript{\cite{18-Muller2010}}. Gorshkov {\it et al.} demonstrated that the dipole interactions of ultra-cold alkali metal dimers in optical lattices can be used to implement the t-J model, providing insights into strong correlation phenomena in condensed systems\textsuperscript{\cite{19-Alexey2011}}. Yan {\it et al.} experimentally observed dipolar spin-exchange interactions with lattice-confined polar molecules, which laid a foundation for further study of many-body dynamics in spin lattices\textsuperscript{\cite{20-Bo Yan2013}}. Yao {\it et al.} obtained the dipolar Heisenberg model by using polar molecules and found the existence of quantum spin liquids on the triangular and kagome lattices\textsuperscript{\cite{21-N.Y.Yao2018}}. 
Zou {\it et al.} implemented a quantum spin model based on the polar molecule KRb in an optical lattice and discovered a quantum spin liquid on the square lattice\textsuperscript{\cite{22-Haiyuan Zou2017}}. However, in almost all previous works on implementing the spin-1/2 Heisenberg model with polar molecules, the ground and first excited pendular states with $M=0$ were chosen as pseudo-spin states, representing spin up and spin down, respectively\textsuperscript{\cite{18-Muller2010,19-Alexey2011,20-Bo Yan2013,21-N.Y.Yao2018,22-Haiyuan Zou2017,22+-Kaden2013}}. In that case the Hamiltonian is not in the form of the Heisenberg model; only after applying the rotating wave approximation can the Heisenberg model be recovered. Furthermore, the result is not the general Heisenberg XYZ model, but its special case, the XXZ model. Herein, by choosing the two lowest excited pendular states of a polar molecule to represent the pseudo-spin states, we show how to realize the spin-1/2 Heisenberg XYZ model, as well as the XXZ and XY models, directly and without any approximation. We work out the properties of the models by evaluating all their constants as functions of three dimensionless variables. The first one is $\mu\varepsilon/B$, the ratio of the Stark energy (the magnitude of the permanent dipole moment times the electric field strength) to the rotational constant (proportional to the inverse of the molecular moment of inertia); this variable governs the energy and intrinsic angular shape of the pendular states. The second one is $\Omega/B$, with $\Omega = \mu^2/r^3$, the square of the permanent dipole moment divided by the cube of the separation distance; this variable governs the magnitude of the dipole-dipole coupling. The third variable is $\alpha$, the angle between the axis of the molecular array and the electric field. As a sample application of the Heisenberg model based on polar molecules, we construct the ground state phase diagram for a linear array of polar molecules. We also discuss advantages and drawbacks, as well as potential applications, of our model. \section{Pendular states of polar molecules as pseudo-spins} \label{2} \subsection{Pendular and pseudo-spin states} \begin{figure}[htp] \vspace{0.5cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{0cm} \centering \includegraphics[width=0.6\columnwidth]{fig1.eps} \caption{Eigenenergies of a polar molecule in an external electric field, as functions of $\mu \varepsilon /B$, with $\mu$ the permanent dipole moment, $\varepsilon$ the field strength, and $B$ the rotational constant. $\left| \downarrow \right\rangle$ correlates with the $J = 1$, $M = 1$ state and $\left| \uparrow \right\rangle$ with the $J = 1$, $M = 0$ state. The states used as pseudo-spin states (red curves) are labeled $\left| \downarrow \right\rangle$ and $\left| \uparrow \right\rangle$ according to their character in the absence of an external field. 
} \label{fig1} \end{figure} In an electrostatic field, the Hamiltonian of a trapped linear polar molecule is\textsuperscript{\cite{16-Wei12011}} \begin{equation} H = \frac{{{p^2}}}{{2m}} + {V_{trap}}({\bf r}) + B{{\bf J}^2} - \boldsymbol{\mu} \cdot \boldsymbol{\varepsilon}, \end{equation} where the molecule, with mass $m$, rotational constant $B$, and body-fixed dipole moment $\mu$, has translational kinetic energy $p^2/2m$, potential energy $V_{trap}$ within the trapping field and rotational energy $B{{\bf J}^2}$, as well as interaction energy $-\boldsymbol{\mu} \cdot \boldsymbol{\varepsilon}$ with the external field $\boldsymbol{\varepsilon}$. In the trapping well, at ultra-cold temperatures, the translational motion of the molecule is quite modest and very nearly harmonic; $p^2/2m + V_{trap}(r)$ is thus nearly constant and can be omitted from the Hamiltonian. There remain the rotational kinetic energy and the Stark interaction, \begin{equation} H_s=B{\bf J}^2-\mu\varepsilon\cos\theta, \end{equation} where $\theta$ is the polar angle between the molecular axis (the molecule-fixed permanent electric dipole moment $\mu$) and the field direction. Under the action of a strong electrostatic field, the polar molecule is compelled to undergo pendular oscillations, resulting in the formation of pendular states $\vert\tilde{J}M\rangle$. Here, $\tilde{J}$ wears a tilde to indicate that it is no longer a good quantum number, since the Stark interaction mixes the rotational states, whereas $M$ is still a good quantum number as long as azimuthal symmetry about $\varepsilon$ is maintained. Figure \ref{fig1} shows the eigenenergies of a few of the lowest-lying pendular states for a $^1\Sigma$ diatomic (or linear) molecule, as functions of $\mu \varepsilon /B$. We choose the two lowest excited pendular states, $\vert 11\rangle$ and $\vert 10\rangle$, as the pseudo-spin states $\left| \downarrow \right\rangle$ and $\left| \uparrow \right\rangle$, respectively (see Figure \ref{fig1}). We then use an external circularly polarized microwave or radio-frequency field to couple the two states, forming a two-level system of $\left| \downarrow \right\rangle$ and $\left| \uparrow \right\rangle$. The two pseudo-spin states are linear superpositions of the spherical harmonics $Y_j^1$ and $Y_j^0$: \begin{equation} \vert\downarrow\rangle=\sum_{\substack{j}}a_jY_j^1(\theta,\phi)\qquad\vert\uparrow\rangle=\sum_{\substack{j}}b_jY_j^0(\theta,\phi). \end{equation} \begin{figure}[htp] \vspace{0.5cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{0cm} \centering \includegraphics[width=0.7\columnwidth]{fig2.eps} \caption{Coefficients of the spherical harmonics for the pendular states $\left|\downarrow\right\rangle$ (left panel) and $\left|\uparrow\right\rangle$ (right panel), see Eq.~(3). The dashed curve for $\left|\uparrow\right\rangle$ indicates that the coefficient of $Y_0^0$ is negative. } \label{fig2} \end{figure} \begin{figure}[htp] \vspace{0.5cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{0cm} \centering \includegraphics[width=1\columnwidth]{fig3.eps} \caption{Wave functions of the $\left|\downarrow\right\rangle$ and $\left|\uparrow\right\rangle$ pendular states for values of $\mu \varepsilon/B = 0,4,8,12$ (from left to right), respectively. Panels (a) and (b) represent the real and imaginary parts of state $\left|\downarrow\right\rangle$, respectively. 
Panel (c) represents state $\left|\uparrow\right\rangle$ (which has no imaginary part).} \label{fig3} \end{figure} Figure \ref{fig2} plots the coefficients as functions of $\mu\varepsilon /B$. For $\mu\varepsilon /B = 0$, both $\left| \downarrow \right\rangle$ and $\left| \uparrow \right\rangle$ are purely rotational states with a single spherical-harmonic component, $Y_1^1$ and $Y_1^0$, respectively. As $\mu\varepsilon /B$ increases, more and more components of spherical harmonics with the same $M$ but different $J$ get involved, and the initially dominant components decrease accordingly. For $\left| \uparrow \right\rangle$, the dominant component $Y_1^0$ (shown in brown) decreases so quickly that it is replaced by $Y_0^0$ as the leading term for $\mu\varepsilon /B >4.5$. For $\left| \downarrow \right\rangle$, the initially dominant component $Y_1^1$ (shown in brown) decreases a little more slowly but is eventually replaced by $Y_2^1$ when $\mu\varepsilon /B$ becomes large enough. Figure \ref{fig3} displays the wave functions of $\left| \downarrow \right\rangle$ and $\left| \uparrow \right\rangle$ for different magnitudes of the electric field. For $\left| \uparrow \right\rangle$, since $M=0$, the dipole is rotating with its ${\mathbf J}$-vector perpendicular to the field direction. Without the external field, the dipole orientation is symmetric in the hemispheres toward ($\theta = 0$) or opposite ($\theta =\pi$) to the field direction. With increasing external field, the pinwheeling dipole favors the opposite hemisphere because its motion is slowed down there. However, when the external field becomes large enough, pinwheeling is inhibited and converted into pendular libration about the field direction, and the dipole orientation favors the toward hemisphere. For $\left| \downarrow \right\rangle$, since $M=1$, without the external field the angular momentum is along the field direction, and thus the dipole orientation is localized at about $\theta = \pi/2$ and is also symmetric in the hemispheres toward and opposite to the field direction. As the external field increases, the dipole rotates like a conical pendulum and its orientation favors more and more the toward hemisphere. \subsection{Hamiltonian of pseudo-spins with electric dipole-dipole interaction}\label{3} Adding a second trapped polar molecule, identical to the first one but a distance $r_{12}$ apart, introduces the dipole-dipole interaction term\textsuperscript{\cite{16-Wei12011}} \begin{equation} {V_{d - d}} = \frac{{{\bm{\mu _1}} \cdot {\bm{\mu _2}} - 3\left( {{\bm{\mu _1}} \cdot \bm{n}} \right)\left( {{\bm{\mu _2}} \cdot \bm{n}} \right)}}{{{{\left| {{\bm{r_1}} - {\bm{r_2}}} \right|}^3}}}. \end{equation} Here ${\bf n}$ is a unit vector along ${\bf r}_{12}$. In the presence of an external field, $V_{d-d}$ can be expressed in terms of the polar and azimuthal angles: \begin{equation} \begin{aligned} V_{d-d}=&\Omega\left[\cos\theta_1\cos\theta_2+\sin\theta_1\cos\varphi_1\sin\theta_2\cos\varphi_2+\sin\theta_1\sin\varphi_1\sin\theta_2\sin\varphi_2\right.\\&\left.-3\left(\sin\theta_1\cos\varphi_1\sin\alpha+\cos\theta_1\cos\alpha\right)\left(\cos\theta_2\cos\alpha+\sin\theta_2\cos\varphi_2\sin\alpha\right)\right], \end{aligned} \end{equation} where $\Omega = {\mu ^2}/r_{12}^3$, $\alpha$ is the angle between the ${\bf r}_{12}$ vector and the field direction, $\theta_1$ and $\theta_2$ are the polar angles between the dipoles (${\bm \mu}_1$ and ${\bm\mu}_2$) and the field direction, and $\varphi_1$ and $\varphi_2$ are the corresponding azimuths. 
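For later reference, the pendular eigenenergies (Figure \ref{fig1}) and the orientation cosines introduced below can be generated by diagonalizing $H_s/B$ in a truncated rotational basis; since $M$ is conserved, each $M$ block is treated separately. The following minimal Python sketch is ours, for illustration only: it uses the textbook matrix elements $\langle J+1,M|\cos\theta|J,M\rangle = [((J+1)^2-M^2)/((2J+1)(2J+3))]^{1/2}$, the basis cutoff \texttt{jmax} is a convergence parameter, and the cross element $C_X$ would additionally require the analogous $\Delta M=\pm1$ matrix elements, omitted here.
\begin{verbatim}
import numpy as np

def pendular_block(x, M, jmax=40):
    # Eigen-decomposition of H_s/B = J(J+1) - x*cos(theta) in the
    # |J, M> basis with J = |M|, ..., jmax and x = mu*eps/B.
    Js = np.arange(abs(M), jmax + 1)
    cos_mat = np.zeros((len(Js), len(Js)))
    for i, J in enumerate(Js[:-1]):
        cos_mat[i, i + 1] = cos_mat[i + 1, i] = np.sqrt(
            ((J + 1.0)**2 - M**2) / ((2 * J + 1.0) * (2 * J + 3.0)))
    E, V = np.linalg.eigh(np.diag(Js * (Js + 1.0)) - x * cos_mat)
    return E, V, cos_mat

x = 6.0                                       # reduced field mu*eps/B
E_dn, V_dn, cos_dn = pendular_block(x, M=1)   # |down>: lowest M = 1 state
E_up, V_up, cos_up = pendular_block(x, M=0)   # |up>: second M = 0 state
C0 = V_dn[:, 0] @ cos_dn @ V_dn[:, 0]         # <down|cos(theta)|down>
C1 = V_up[:, 1] @ cos_up @ V_up[:, 1]         # <up|cos(theta)|up>
print(E_dn[0], E_up[1], C0, C1)
\end{verbatim}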
Now the total Hamiltonian is ${H_{total}} = {H_{s1}} + {H_{s2}} + {V_{d - d}}$. When set up in the basis of direct products of the pseudo-spin states $\{|\downarrow\downarrow\rangle,|\downarrow\uparrow\rangle,|\uparrow\downarrow\rangle,|\uparrow\uparrow\rangle\}$, it takes the form \begin{equation} {H_{s1}} + {H_{s2}} = \left( {\begin{array}{*{20}{c}}{2{E_0}}&0&0&0\\0&{{E_0} + {E_1}}&0&0\\0&0&{{E_1} + {E_0}}&0\\0&0&0&{2{E_1}}\end{array}} \right), \end{equation} \begin{equation} {V_{d - d}} = \Omega \left( {\begin{array}{*{20}{c}}{{P_\alpha }C_0^2}&0&0&{{Q_\alpha }C_X^2}\\0&{{P_\alpha }{C_0}{C_1}}&{ - {P_\alpha }C_X^2}&0\\0&{ - {P_\alpha }C_X^2}&{{P_\alpha }{C_1}{C_0}}&0\\ {{Q_\alpha }C_X^2}&0&0&{{P_\alpha }C_1^2}\end{array}} \right), \end{equation} where $E_0$ and $E_1$ are the eigenenergies of the pendular pseudo-spin states $\vert\downarrow\rangle$ and $\vert\uparrow\rangle$, respectively (see Figure \ref{fig1}). $P_\alpha$ and $Q_\alpha$ are simple functions of $\alpha$: ${P_\alpha } = 1 - 3{\cos ^2}\alpha$, ${Q_\alpha } = - 3{\sin ^2}\alpha $. In $V_{d-d}$, the basis states are linked by matrix elements containing $C_0$ and $C_1$, the orientation cosines of the field-induced dipole moments, and $C_X$, the transition dipole moment between the pseudo-spin states $\vert\downarrow\rangle$ and $\vert\uparrow\rangle$. These are given by \begin{equation} C_0=\langle\downarrow\vert\cos\theta\vert\downarrow\rangle\qquad C_1=\langle\uparrow\vert\cos\theta\vert\uparrow\rangle\qquad C_X=\langle\downarrow\vert\sin\theta\cos\varphi\vert\uparrow\rangle. \end{equation} In contrast to a real spin state, which has a constant dipole moment, here the values of $C_0$, $C_1$ and $C_X$ are functions of the external electric field; they are displayed in Figure \ref{fig4}. When $\mu \varepsilon /B$ increases, $C_0$ becomes increasingly positive, whereas $C_1$ is increasingly negative up to about $\mu\varepsilon/B=2$, then climbs to zero at about $\mu\varepsilon/B=4.9$ and thereafter is increasingly positive. The fact that $C_X = 0$ at $\mu\varepsilon/B = 0$ means that without the external electric field, the transition between $\left|\downarrow\right\rangle$ and $\left|\uparrow\right\rangle$ is not allowed as a one-photon electric dipole transition. Fortunately, increasing the external field introduces sufficient mixing of other spherical harmonics, particularly admixing of $Y_0^0$ and $Y_2^0$ into $\left|\uparrow\right\rangle$ and admixing of $Y_2^1$ into $\left|\downarrow\right\rangle$ (see Figure \ref{fig2}), such that $C_X$ increases sharply from zero to a considerable value, enabling the $\left|\downarrow \right\rangle\leftrightarrow\left|\uparrow\right\rangle$ transition to occur as a one-photon transition. \begin{figure}[htp] \vspace{0.5cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{0cm} \centering \includegraphics[width=0.6\columnwidth]{fig4.eps} \caption{Matrix elements $C_0$, $C_1$ and $C_X$ as functions of $\mu\varepsilon/B$. 
The dotted green line is ${C_0} - {C_1}$.} \label{fig4} \end{figure} \section{Realization of the Heisenberg model of spin systems with polar molecules} \subsection{General Heisenberg XYZ model based on pseudo-spins} The total Hamiltonian of the two-dipole molecular system can be mapped onto a two-qubit spin-1/2 general Heisenberg XYZ model: \begin{equation} {H_{XYZ}} = {J_x}\sigma _1^x\sigma _2^x + {J_y}\sigma _1^y\sigma _2^y + {J_z}\sigma _1^z\sigma _2^z - \gamma\left( {\sigma _1^z + \sigma _2^z} \right), \label{eq9} \end{equation} where ${\sigma _x}$, ${\sigma _y}$ and ${\sigma _z}$ are Pauli operators and $J_x, J_y, J_z$ and $\gamma$ are coupling constants given by \begin{equation} \begin{split} {J_x}&=\Omega \left( {3{{\cos }^2}\alpha - 2} \right)C_X^2, \\ {J_y}&=\Omega C_X^2, \\ {J_z} &= \frac{{\Omega \left( {1 - 3{{\cos }^2}\alpha } \right){{\left( {{C_0} - {C_1}} \right)}^2}}}{4},\\ \gamma & = \frac{{2\left( {{E_1} - {E_0}} \right) + \Omega \left( {3{{\cos }^2}\alpha - 1} \right)\left( {C_0^2 - C_1^2} \right)}}{4}. \end{split} \label{eq10} \end{equation} \begin{figure}[htbp] \vspace{0.5cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{0cm} \centering \includegraphics[width=0.6\columnwidth]{fig5.eps} \caption{Contour plots of $J_x/\Omega$ (panel a), $J_z/\Omega$ (panel b) and the second part of $\gamma/\Omega$ (panel c) as functions of the reduced variable $\mu \varepsilon /B$ and the angle $\alpha$. $J_y$ is $\alpha$-independent and the same as $J_x$ when $\alpha=0$.} \label{fig5} \end{figure} Equations (\ref{eq9}-\ref{eq10}) demonstrate how to realize the spin-1/2 anisotropic Heisenberg model with polar molecules in pendular states. The model constants ($J_x, J_y, J_z$ and $\gamma$) are functions of $\mu \varepsilon /B$, $\Omega$ and $\alpha$, which means the model can be adjusted by modifying these parameters. For all the constants, the dependence on $\Omega$ is simply linear, whereas the dependence on the $C$'s is quadratic. $J_y$ is $\alpha$-independent and equal to $J_x$ at $\alpha=0$. The term $\gamma$ consists of two parts. One part is related to the energy gap $\Delta E=E_1-E_0$, which is shown in Figure \ref{fig1}. The other part is proportional to $\Omega$, similar to the $J$'s. The contour plots in Figure \ref{fig5} illustrate how $J_x/\Omega$, $J_z/\Omega$ and the second part of $\gamma/\Omega$ change with $\mu\varepsilon/B$ and $\alpha$. When $\mu\varepsilon/B$ increases from 0 to 12, the magnitudes of the coupling coefficients ($J_x/\Omega$, $J_y/\Omega$ and $J_z/\Omega$) grow from 0 to the order of $10^{-1}$. Similar results are obtained for the second part of $\gamma/\Omega$. Maximum or minimum values of $J_x$ and $J_y$ appear at large $\mu\varepsilon/B$, whereas for $J_z$ they appear around $\mu\varepsilon/B=3$. For given $\Omega $ and $\alpha $, the coupling constants ${J_x}$, ${J_y}$, ${J_z}$ and $\gamma$ depend only on $x=\mu \varepsilon /B$, which enters those constants through ${C_0}$, ${C_1}$, ${C_X}$ and $\Delta E$. To provide a convenient means to evaluate Equation (\ref{eq10}), we fitted our numerical results to obtain accurate approximation formulas, \begin{align} \label{eq11} & \left( {{E_1} - {E_0}} \right)/B = {A_1}x + {A_2}{x^2} + {A_3}{x^3} + {A_4}{x^4} + {A_5}{x^5},\\ \label{eq12} &C(x) = {A_0} + \frac{{{A_1}}}{{1 + \exp [(x - {x_1})/{k_1}]}} + \frac{{{A_2}}}{{1 + \exp [ - (x - {x_2})/{k_2}]}}. 
\end{align} These functions are plotted in Figure {\ref{fig6}}. The fitted parameters are given in Tables (\ref{Table1}-\ref{Table2}). \begin{figure}[htbp] \vspace{0.5cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{0cm} \centering \includegraphics[width=0.6\columnwidth]{fig6.eps} \caption{Comparison of exact results (blue curves) with fitted approximation functions (dashed red curves), cf. Eqs. (\ref{eq11}) and (\ref{eq12}): the energy difference ${{\left( {{E_1} - {E_0}} \right)} \mathord{\left/{\vphantom {{\left( {{E_1} - {E_0}} \right)} B}} \right. \kern-\nulldelimiterspace} B}$, the field-induced dipole moments ${C_0}$ and ${C_1}$, and the transition dipole moment ${C_X}$.} \label{fig6} \end{figure} \begin{table}[!ht] \renewcommand\tabcolsep{61pt} \caption{Values of the parameters for Eq.\,(\ref{eq11}).} \label{Table1} \centering \begin{threeparttable} \begin{tabular}{*4{c}} \hline \hline Parameters & Values \\ \hline ${A_1}$ & 0.00794 \\ ${A_2}$ & 0.16531 \\ ${A_3}$ & -0.02838 \\ ${A_4}$ & 0.00206 \\ ${A_5}$ & $ - 5.55762 \times {10^{ - 5}}$ \\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item ${R^2} = 0.9999$. \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[!ht] \renewcommand\tabcolsep{18pt} \caption{Values of the parameters for Eq.\,(\ref{eq12}).} \label{Table2} \centering \begin{threeparttable} \begin{tabular}{*4{c}} \hline \hline Parameters & Values for ${C_0}$ & Values for ${C_X}$ &Values for ${C_1}$\\ \hline ${A_0}$& -0.24612 & 0.21844 & -0.91801 \\ ${A_1}$& -0.56893 & -0.53637 & 0.9 \\ ${A_2}$& 0.95967 & 0.02855 & 1.36773 \\ ${x_1}$& -0.09066 & -0.4403 & 0.09317 \\ ${x_2}$& -1.25815 & 4.28747 & 2.52364 \\ ${k_1}$& 2.17868 & 1.18595 & 0.80729 \\ ${k_2}$& 6.7313 & 0.94214 & 3.38213 \\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item ${R^2} = 1$ for ${C_0}$, ${R^2} = 0.9999$ for ${C_X}$ and ${C_1}$. \end{tablenotes} \end{threeparttable} \end{table} The Heisenberg model given by Equations (\ref{eq9}-\ref{eq10}) is a general XYZ model, but two special cases can be obtained by changing the direction of the external field. One is the XXZ model, obtained by taking $\alpha = {0^ \circ }$; in that case ${J_x} = {J_y} \ne {J_z}$, with ${J_x} = {J_y} \ne 0$ and ${J_z} \ne 0$. The other is the XY model, obtained for $\alpha = {54.7^ \circ }$, known as the magic angle; in that case ${J_x} \ne 0$, ${J_y} \ne 0$ and ${J_z} = 0$. \subsection{The Heisenberg XXZ model and quantum phase diagram of polar molecules.} \begin{figure}[htbp] \vspace{-0.5cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{0cm} \centering \includegraphics[width=0.6\columnwidth]{fig7.eps} \caption{(a) Quantum phase diagram of the XXZ model associated with $J_z/J$ and $\gamma/J$ for a linear spin chain. (b) The ratio of the coupling constants of the XXZ model, $J_z/J$, as a function of $\mu\varepsilon/B$ in the dipole system of polar molecules. (c) The ratio of the coupling constants of the XXZ model, $\gamma/J$, as a function of the reduced variables $\mu\varepsilon/B$ and $\Omega/B$, in the dipole system of polar molecules.} \label{fig7} \end{figure} In order to demonstrate the application of the Heisenberg model based on pendular polar molecules, we take the XXZ model ($\alpha = {0^ \circ }$) as an example.
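As groundwork for this example, the fitted formulas are easily put to work numerically. The following minimal Python sketch is purely illustrative (it is not part of the original numerical work): it transcribes Eqs. (\ref{eq11})-(\ref{eq12}) with the parameters of Tables \ref{Table1}-\ref{Table2} and assembles the XYZ couplings of Eq. (\ref{eq10}); the $x_1$ entry for $C_X$ is read as $-0.4403$, and the evaluation point is arbitrary.

```python
import numpy as np

# Polynomial fit of (E1 - E0)/B, Eq. (11), with the Table 1 coefficients
A_GAP = [0.00794, 0.16531, -0.02838, 0.00206, -5.55762e-5]

def gap(x):
    """(E1 - E0)/B as a function of x = mu*eps/B."""
    return sum(a * x ** (n + 1) for n, a in enumerate(A_GAP))

# Double-sigmoid fit of Eq. (12); columns of Table 2 as
# (A0, A1, A2, x1, x2, k1, k2). The x1 entry for C_X is read as -0.4403.
PARAMS = {
    "C0": (-0.24612, -0.56893, 0.95967, -0.09066, -1.25815, 2.17868, 6.7313),
    "CX": (0.21844, -0.53637, 0.02855, -0.4403, 4.28747, 1.18595, 0.94214),
    "C1": (-0.91801, 0.9, 1.36773, 0.09317, 2.52364, 0.80729, 3.38213),
}

def C(name, x):
    A0, A1, A2, x1, x2, k1, k2 = PARAMS[name]
    return (A0 + A1 / (1 + np.exp((x - x1) / k1))
            + A2 / (1 + np.exp(-(x - x2) / k2)))

def xyz_couplings(x, Omega, alpha):
    """Heisenberg XYZ constants of Eq. (10), all in units of B."""
    c0, c1, cx = C("C0", x), C("C1", x), C("CX", x)
    ca2 = np.cos(alpha) ** 2
    Jx = Omega * (3 * ca2 - 2) * cx ** 2
    Jy = Omega * cx ** 2
    Jz = Omega * (1 - 3 * ca2) * (c0 - c1) ** 2 / 4
    gamma = (2 * gap(x) + Omega * (3 * ca2 - 1) * (c0 ** 2 - c1 ** 2)) / 4
    return Jx, Jy, Jz, gamma

# Example: XXZ geometry (alpha = 0) at x = 6 with Omega/B = 1e-4
print(xyz_couplings(x=6.0, Omega=1e-4, alpha=0.0))
```

Because the couplings depend on the field only through $x=\mu\varepsilon/B$, the same routine serves any $^1\Sigma$ molecule once $\mu$, $B$ and the dipole-dipole coupling $\Omega$ are specified.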
If only pairwise interaction is considered, for a system with $N$ polar molecules trapped in a linear array with external electric field along the array, the Hamiltonian has the form of the XXZ model, \begin{equation} {H_{XXZ}} = \sum\limits_{i = 1}^{N-1} {\left[ {J\left( {\sigma _i^x\sigma _{i+1}^x + \sigma _i^y\sigma _{i+1}^y} \right)} + {J_z}\sigma _i^z\sigma _{i+1}^z\right]} - \gamma\sum\limits_{i = 1}^{N}{\sigma _i^z} \end{equation} with couplings given by \begin{equation} \begin{split} J&=\Omega C_X^2, \\ {J_z} &=- \frac{{\Omega {{\left( {{C_0} - {C_1}} \right)}^2}}}{2},\\ \gamma & = \frac{{\left( {{E_1} - {E_0}} \right) + \Omega \left( {C_0^2 - C_1^2} \right)}}{2}. \end{split} \label{eq16} \end{equation} Figure \ref{fig7}(a) displays the ground state phase diagram of a spin-1/2 XXZ chain with nearest-neighbor interaction\textsuperscript{\cite{23-Christian2010,24-Mykhailo2019,25-D. C. Cabra1998}}. The abscissa is the scaled anisotropy parameter $J_z/J$, and the ordinate is the scaled magnetic field $\gamma/J$. There are two gapped phases: one is the ferromagnetic phase for $J_z/J < -1$; the other is the antiferromagnetic phase for $J_z/J > 1$. In between is the Luttinger liquid phase\textsuperscript{\cite{26-F.D.M. Haldane1980}}. According to Equation \ref{eq16}, for polar molecules, $J_z/J$ only depends on $\mu\varepsilon/B$, so in Figure \ref{fig7}(b) we show how $J_z/J$ changes when $\mu\varepsilon/B$ increases from 0 to 12. The critical value of $J_z/J = -1$ appears at $\mu\varepsilon/B = 6.1 $ ($\varepsilon$ =13.5 kV/cm for the SrO molecule). In order to obtain phase information about the polar molecule system, we still need to know the range of values of $\gamma/J$. According to Equation \ref{eq16}, $\gamma/J$ depends on both $\mu\varepsilon/B$ and $\Omega$. So in Figure \ref{fig7}(c) we plot of $\gamma/J$ as a function of $\mu\varepsilon/B$ for different $\Omega/B$. Finally we obtain a ground state phase diagram associated with $\mu\varepsilon/B$ and $\Omega/B$ for a linear array of polar molecules, which is shown in Figure \ref{fig8}. \begin{figure}[htbp] \vspace{0.5cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{0cm} \centering \includegraphics[width=0.6\columnwidth]{fig8.eps} \caption{Ground state phase diagram of the XXZ model associated with $\Omega /B$ and $\mu \varepsilon /B$ for polar molecules in a linear array.} \label{fig8} \end{figure} \section{Discussion and Prospects}\label{4} In this paper, our chief aim was to demonstrate that the Heisenberg model of spin systems can be realized with ultra-cold diatomic or linear $^1\sum$ molecules, oriented in an external electrostatic field and coupled by the electric dipole-dipole interaction. This requires use of pendular states comprised of superpositions of spherical harmonics. Here the two lowest lying excited states coupled by microwave or radio-frequency fields are used to mimic the two-level spin system. This provides a new physical platform for the study of the Heisenberg model. Since the dipole is encoded in the rotational states of the molecules, the field-induced electric dipole-dipole interactions between the molecules reproduce magnetic dipole-dipole interactions between spins. In order to map out the general features of the model, we have considered a wide range of parameters defined by sets of unitless reduced variables, involving the dipole moments, rotational constant, dipole-dipole coupling, electric field strength and direction. The external field plays an essential role. 
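Before turning to the role of the external field in detail, we note that the chain Hamiltonian above is easy to explore numerically for small $N$. The sketch below is a minimal illustration of ours: it builds $H_{XXZ}$ by Kronecker products and diagonalizes it for placeholder coupling values (not values computed for any specific molecule); with $\gamma \gg |J|, |J_z|$ the ground state comes out spin-polarized, consistent with the ferromagnetic weak-coupling regime of Figure \ref{fig8}.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def op_at(op, site, N):
    """Embed a single-site Pauli operator at position `site` of an N-spin chain."""
    mats = [id2] * N
    mats[site] = op
    return reduce(np.kron, mats)

def h_xxz(N, J, Jz, gamma):
    """XXZ chain with open boundaries and a uniform field term -gamma*sum(sz)."""
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(N - 1):
        H += J * (op_at(sx, i, N) @ op_at(sx, i + 1, N)
                  + op_at(sy, i, N) @ op_at(sy, i + 1, N))
        H += Jz * op_at(sz, i, N) @ op_at(sz, i + 1, N)
    for i in range(N):
        H -= gamma * op_at(sz, i, N)
    return H

# Placeholder couplings in units of B, purely for illustration
N = 6
H = h_xxz(N, J=1e-4, Jz=-2e-4, gamma=5e-3)
E, V = np.linalg.eigh(H)
gs = V[:, 0]
mag = np.mean([(gs.conj() @ op_at(sz, i, N) @ gs).real for i in range(N)])
print(f"E_gs = {E[0]:.6f},  <sigma^z> per site = {mag:+.3f}")  # ~ +1: polarized
```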
In order to induce extensive hybridization of rotational states, the field strength needs to be sufficiently high. This serves a dual purpose. Firstly, it makes the molecules undergo pendular oscillations about the field direction; otherwise rotational tumbling would average out the molecule's dipole moment in the laboratory frame. Secondly, it makes the transition dipole moment $C_X$ deviate from zero, such that a one-photon transition $\left|\downarrow\right\rangle\leftrightarrow\left|\uparrow\right\rangle$ is fully allowed. Using optical lattices to trap the molecules limits the distance between adjacent molecules to a few hundred nanometers, so that the dipole-dipole coupling is weak ($\Omega/B$ typically of order $10^{-6}$ to $10^{-4}$) compared with the energy gap $\Delta E$, and thus the model parameter $\gamma/J$ becomes very large ($> 10^4$). In that weak-coupling realm, the ground state of the Heisenberg model obtained with polar molecules is always in the ferromagnetic phase (see Figure \ref{fig8}). In order to enhance the dipole-dipole coupling, the molecular distance $r$ has to be shortened. Taking the SrO molecule as an example, a molecular distance of less than 10 nm is required for $\gamma/J \sim 1$. Such a distance is much shorter than what can be achieved in typical optical lattices, but might be obtained with arrays of nanoscale plasmon-enhanced electro-optical traps\textsuperscript{\cite{27-Brian Murphy2009,28-D.E.Chang2009}} or molecular Wigner crystals\textsuperscript{\cite{29-P. Rabl2007,30-H. P. Buchler2007}}. Either would be a promising approach for extending the experimental scope of the model. Polar molecules also offer significant advantages for achieving the Heisenberg spin model, due to their high controllability and the presence of strong and long-range interactions. The Stark energy is quite large, so for instance the energy gaps between the pseudo-spin states $\left| \downarrow \right\rangle$ and $\left| \uparrow \right\rangle$ are typically in the range of microwave frequencies, as opposed to the radio frequencies separating real spin states. This enables a faster optically controlled transition between the two energy levels of polar molecules. The electric dipole-dipole interaction is also much stronger than its magnetic counterpart for spins, resulting in a larger frequency shift, which is essential for building quantum logic gates. The influence of the $\left| {1-1} \right\rangle $ pendular state, which is degenerate with $\left| {1 1} \right\rangle $, was also numerically analyzed. If we use $\left| {1-1} \right\rangle $ instead of $\left| {1 1} \right\rangle $ as the pseudo-spin state $\left| \downarrow \right\rangle$, we obtain the same field-induced dipole moments ${C_0}$, ${C_1}$ and the same transition dipole moment ${C_X}$. This indicates that the pseudo-spin state $\left| \downarrow \right\rangle$ can equally well be $\left| {1 -1} \right\rangle$, since the Hamiltonian matrix is the same as that for $\left| {1 1} \right\rangle$. But if the pseudo-spin state $\left| \downarrow \right\rangle$ is a superposition of $\left| {1 - 1} \right\rangle $ and $\left| {1 1} \right\rangle $, ${C_0}$ and ${C_1}$ remain unchanged, whereas ${C_X}$ is different. In this case, the Hamiltonian still has the form of the Heisenberg model, but the Hamiltonian matrix elements related to ${C_X}$ take on different values. Including the $\left| {1-1} \right\rangle $ state would increase the flexibility and complexity of the model, and will not be elaborated upon here.
However, this problem can be avoided by introducing a tilt angle $\beta$ between the polarization vector of the optical trapping field that confines the molecules and the electrostatic field such that $\beta \ne 0, \pi$. In that case, the degeneracy of the $\pm M$ states is lifted\textsuperscript{\cite{Bretislav1,Bretislav2}}. Alternatively, for molecules with a nuclear electric quadrupole moment, a superimposed magnetic field would lift the $\pm M$ degeneracy via the interaction between this moment and the magnetic moment generated by molecular rotation\textsuperscript{\cite{20-Bo Yan2013,31-S.Ospelkaus2010}}. One potential application of polar molecules is in quantum computing, as originally proposed by DeMille two decades ago\textsuperscript{\cite{11-D. DeMille2002}}. Since then, many aspects and variants have been extensively studied, for both diatomic linear molecules and symmetric top molecules\textsuperscript{\cite{12-Philippe Pellegrini2011,13-Jing Zhu2013,14-Zuo-Yuan Zhang2017,15-Zuo-Yuan Zhang2020,16-Wei12011,17-Wei22011,32-Micheli,33-Charron,34-kuz,35-ni,36-lics,37-YelinDeMille,38-Wei2010,39-Wei2016,40-kang-Kuen Ni2018}}. For linear molecules, the $\left| {0 0} \right\rangle$ and $\left| {1 0} \right\rangle$ pendular states are the most commonly used qubit states\textsuperscript{\cite{12-Philippe Pellegrini2011,13-Jing Zhu2013,15-Zuo-Yuan Zhang2020,16-Wei12011,32-Micheli,33-Charron,34-kuz,35-ni,36-lics,37-YelinDeMille,38-Wei2010,39-Wei2016,40-kang-Kuen Ni2018}}. For symmetric top molecules, different choices of qubit states have been explored\textsuperscript{\cite{17-Wei22011,14-Zuo-Yuan Zhang2017}}. In most cases, the Hamiltonian matrices are complex and no existing model can be used directly. In the meantime, spin systems are also considered to be a promising platform on which to implement a quantum computer. In fact, most of the work on quantum computers is based on spin systems (note that superconducting loops are effectively artificial spins)\textsuperscript{\cite{41-L.M.K.2000,42-J.Zhang2005,43-L.M.K2004,44-Kavita2000,45-Chiu2004,46-V.W.Scarola2005,47-F.B.M2009,48-M.Asoudeh2004,49-Meng2015,50-V.V.Aristov2004}}, and the Heisenberg model is the most popular model used in treating such systems\textsuperscript{\cite{45-Chiu2004,46-V.W.Scarola2005,47-F.B.M2009,48-M.Asoudeh2004,49-Meng2015}}. If we take our pseudo-spin states $\left| \downarrow \right\rangle$ and $\left|\uparrow\right\rangle$ as qubit states, then the two approaches coincide. This opens up the prospect of directly transplanting methods and techniques developed for spins to polar molecules. So far, most proposals for implementing quantum computing with polar molecules have been based on the gate model. Our new choice of qubit states also invites the possibility of adiabatic quantum computing\textsuperscript{\cite{51-BRIW+14,52-SQVL14,53-VMBR+14,54-VMKO15,55-HJAR+15,56-KHZO+15,57-Ta,58-Gre,59-Ari,60-Sha,61-Ke}}. This follows primarily from the fact that the energy gap between the two qubit states, $\Delta E = E_1-E_0$, can be tuned arbitrarily from 0 to $3.7B$ (see Figure \ref{fig1}) by changing the electric field. For adiabatic quantum computing, the tunable energy gap between $|0\rangle$ and $|1\rangle$ needs to be large compared with the interaction energy $V_{d-d}$. In this case, it is around $10^4$ times larger than the coupling energy $V_{d-d}$, which is far beyond the current limit that spin systems can achieve\textsuperscript{\cite{57-KYNH+15}}.
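The tunability of $\Delta E$ invoked above is straightforward to verify by direct diagonalization of a rigid rotor in a static field. The sketch below is our own minimal check, using the standard $\langle J+1,M|\cos\theta|J,M\rangle$ matrix elements and an assumed basis cutoff $J_{\max}$; the computation of $C_X$, which requires the $M$-mixing elements of $\sin\theta\cos\varphi$, is omitted for brevity.

```python
import numpy as np

def cos_matrix(Js, M):
    """<J', M|cos(theta)|J, M> in the truncated |J, M> basis (fixed M)."""
    C = np.zeros((len(Js), len(Js)))
    for n, J in enumerate(Js[:-1]):
        c = np.sqrt(((J + 1.0) ** 2 - M ** 2) / ((2 * J + 1.0) * (2 * J + 3.0)))
        C[n, n + 1] = C[n + 1, n] = c
    return C

def pendular_block(x, M, Jmax=20):
    """H/B = J(J+1) - x*cos(theta) at fixed M, with x = mu*eps/B."""
    Js = np.arange(abs(M), Jmax + 1)
    return Js, np.diag(Js * (Js + 1.0)) - x * cos_matrix(Js, M)

def pendular_quantities(x, Jmax=20):
    """Energies (units of B) and orientation cosines of |down> and |up>."""
    Js1, H1 = pendular_block(x, M=1, Jmax=Jmax)
    Js0, H0 = pendular_block(x, M=0, Jmax=Jmax)
    e1, v1 = np.linalg.eigh(H1)
    e0, v0 = np.linalg.eigh(H0)
    down = v1[:, 0]   # lowest M = 1 state, adiabatically connected to |1,1>
    up = v0[:, 1]     # second M = 0 state (the lowest stems from |0,0>)
    C0 = down @ cos_matrix(Js1, 1) @ down
    C1 = up @ cos_matrix(Js0, 0) @ up
    return e1[0], e0[1], C0, C1

for x in (0.0, 2.0, 4.9, 12.0):
    E0, E1, C0, C1 = pendular_quantities(x)
    print(f"x = {x:4.1f}:  dE/B = {E1 - E0:6.3f}   C0 = {C0:+.3f}   C1 = {C1:+.3f}")
```

At $x=0$ the two levels are degenerate ($\Delta E = 0$), and the gap opens continuously with the field, in line with the behavior described above.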
Moreover, one requirement for adiabatic quantum computing is that the energy gap between the ground and the first excited state be maintained during the adiabatic evolution, so that no phase transition occurs. This requirement is also satisfied: for a practical coupling constant ($\Omega/B<10^{-2}$), the entire polar molecular system remains in the ferromagnetic phase during the adiabatic process of reducing the electric field, without undergoing any phase transition (see Figure \ref{fig8}). \section*{ACKNOWLEDGEMENTS} We are grateful for support from the National Natural Science Foundation of China (Grant Nos. 11974113 and 11674098). SK would like to acknowledge the support of the National Science Foundation under award number 1955907. BF gratefully acknowledges the hospitality of John Doyle and Hossein Sadeghpour during his stay at Harvard Physics and at the Harvard-Smithsonian Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP).
\section{Introduction} Since their theoretical prediction\,\cite{kane-05prl146802,kane-05prl226801,bernevig-06prl106802,bernevig-06s1757} and experimental discovery\,\cite{koenig-07s766}, topological insulators\,\cite{hasan-10rmp3045,qi-11rmp1057,Bernevig2013} have become one of the most vibrant fields in contemporary condensed matter physics. In two spatial dimensions, the topological insulating state can be interpreted as the spin-type companion of the charge-type integer quantum Hall effect on a lattice. For the quantum spin Hall (QSH) effect, the characteristic feature that drives a given electronic band model into this topologically non-trivial phase is the band inversion due to spin-orbit (SO) coupling. As the kinetic and spin degrees of freedom are coupled by the SO interaction, the electronic band structure loses its SU(2) spin symmetry. Two different types of SO coupling can be distinguished: (i) the intrinsic spin orbit coupling $V_{\text{ISO}}\sim Z^4 L^z S^z$, where the SU(2) spin group is only broken down to U(1) (\emph{i.e.},\ retaining a conserved $S^z$ quantum number), and (ii) the Rashba SO coupling $V_{\text{RSO}}\sim \bs{E} \cdot (\bs{S} \times \bs{p})$, which does not retain a conserved continuous subgroup of SU(2). While the intrinsic SO coupling gives rise to the topological insulator phase, the Rashba SO coupling itself is unable to induce the non-trivial topology. In any experimental situation, due to the presence of \emph{e.g.}\ a substrate or external electric fields, Rashba SO coupling needs to be taken into account. As the first microscopic model for topological insulators, the Kane-Mele model was originally proposed to describe the quantum spin Hall effect in graphene\,\cite{kane-05prl146802,kane-05prl226801}. Subsequent band structure calculations showed, however, that the spin orbit gap in graphene is so small\,\cite{min-06prb165310,yao-07prb041401} that the QSH effect in graphene is beyond any experimental relevance. Still, Kane and Mele's pioneering proposal for a prototypical topological insulator has triggered an intensive search for possible realizations. In principle, the spin-orbit coupling $\lambda$ can be increased using heavier elements, since $V_{\rm ISO}\propto Z^4$ as a function of the atomic number $Z$. Hence, promising proposals include graphene endowed with heavy adatoms like indium and thallium\,\cite{weeks-11prx021001}, synthesized silicene\,\cite{liu-11prl076802,ezawa12prl055502} (monolayers of silicon), molecular graphene\,\cite{ghaemi-12prb201406}, honeycomb films of tin~\cite{xu-13prl136804}, monolayers or thin films of the Iridium-based honeycomb compounds X$_2$IrO$_3$ (X=Na or Li)\,\cite{shitade-09prl256403,jenderka-13prb045111}, and ``digital'' transition metal oxide heterostructures\,\cite{xiao-11nc596}. Alternatively, the Kane-Mele model might be realized using ultra-cold atoms in tunable optical lattices\,\cite{bloch-08rmp885}. Very recent progress has been made in realizing honeycomb optical lattices\,\cite{Soltan-Panahi-11np434} as well as non-Abelian gauge fields acting as a synthetic spin orbit coupling\,\cite{lin-09n628,goldman-10prl255302,dalibard-11rmp1523,lin-11n83}. Furthermore, a completely different route to realize the quantum spin Hall effect on the honeycomb lattice is to induce it by virtue of interactions\,\cite{raghu-08prl156401,lei-12prb235135,wang-12epl57001,budich-12prb201407,garcia-matrinez-13arXiv:1308.6094,daghofer-13arXiv:1308.6211,roy-13prb045425,araujo-13prb085109}.
At the non-interacting level, a Rashba SO term was already considered in the original work by Kane and Mele, where it was shown that the QSH phase of non-interacting fermions is stable with respect to a breaking of $S_z$ symmetry. It was also argued that the otherwise quantized spin Hall conductance will deviate from its quantized value in the presence of a Rashba term\,\cite{kane-05prl146802,kane-05prl226801}. Later it was explicitly shown that the QSH phase survives the combination of disorder and Rashba spin orbit coupling, but the value of the spin Hall conductance deviates significantly from the quantized value\,\cite{sheng-06prl036808}. For the purpose of including interactions in the Kane-Mele model, theoretical approaches have preferably constrained themselves to the exclusive consideration of intrinsic spin orbit coupling. There are two main reasons for this development. First, some theoretical approaches such as quantum Monte Carlo (QMC) necessitate the U(1) symmetry kept by the intrinsic SO coupling in order to be applicable, \emph{i.e.},\ in the case of QMC, to avoid the sign problem. Second, calculating the topological invariant in terms of single particle Green's functions in the absence of inversion symmetry, as implied by Rashba SO coupling, is significantly more complicated, and often yields an integral form of the Volovik invariant\,\cite{volovik03} which is not amenable to efficient numerical evaluation. The Kane-Mele model with an onsite Hubbard interaction term and only intrinsic spin-orbit coupling has usually been referred to as the {\it Kane-Mele-Hubbard} (KMH) model and has attracted much attention recently; it was investigated from many different perspectives\,\cite{rachel-10prb075106,hohenadler-11prl100403,soriano-10prb161302,wu-12prb205102,dong-11prb205121,yamaji-11prb205122,yu-11prl010401,lee11prl166806,wen-11prb235149,mardani-11arXiv:1111.5980,hohenadler-12prb115132,griset-12prb045123,vaezi-12prb195126,assaad-13prx011015,araki-13prb205440,hung-13prb121113,ueda-13prb161108,zare-13prb224416,araki-13arXiv:1311.3973,meng-13arXiv:1310.6064,hung-13arXiv:1307.2659} providing us with a fairly good understanding of its phase diagram: For weak interactions, the topological insulator remains stable and the metallic edge states persist. For intermediate interactions, a phase transition into a magnetically ordered phase occurs. The latter has been shown to exhibit easy-plane antiferromagnetic order\,\cite{rachel-10prb075106} and the transition to be of 3D $XY$ type\,\cite{hohenadler-12prb115132,wu-12prb205102}. In the isotropic limit of vanishing spin orbit coupling, one finds the semi-metallic phase (weak interactions) of graphene as well as the N\'eel antiferromagnet (strong interactions), with the phase transition of regular 3D Heisenberg type~\cite{assaad-13prx031010}. Related correlated TI models have also been studied\,\cite{yoshida-12prb125113,yoshida-13prb085134}. (For a review about correlation effects in topological insulators see Ref.\,\onlinecite{hohenadler-13jpcm143201}.) \begin{figure} \includegraphics[width=.37\textwidth]{schematic_phasedia2.pdf} \caption{(Color online). Schematic $U$--$(\lambda_R/\lambda)$ phase diagram of the full Kane-Mele-Hubbard model for $\lambda = 0.2$ ($t=1$). There are five different phases: topological insulator (TI), weak topological semiconductor (TS), metal (M), easy-plane antiferromagnet (XY-AFM), and possibly a phase with incommensurate spiral order.
For larger $\lambda$ the TS phase becomes broader, while for smaller $\lambda$ the TS phase shrinks until it vanishes for $\lambda<0.1$.} \label{fig:schematic-phasedia} \end{figure} Bridging the gap between possible experimental realizations and theoretical modeling, taking into account Rashba SO coupling and interactions in the Kane-Mele model is indispensable. We emphasize that the effect of Rashba SO coupling has so far not been investigated in any two-dimensional correlated topological insulator model (with the exception of the one-dimensional edge theory of topological insulators dubbed the {\it helical Luttinger liquid}\,\cite{wu-06prl106401,strom-10prl256804,budich-12prl086602,schmidt-12prl156402}). In this article, we employ the variational cluster approach (VCA)\,\cite{potthoff-03prl206402,potthoff03epjb429} to investigate the generalized Kane-Mele-Hubbard model in the presence of Rashba spin orbit coupling. The VCA is an efficient method to investigate interaction effects in correlated electron systems and to obtain effective electronic band structures. Our main results are summarized in Fig.\,\ref{fig:schematic-phasedia}. For small Rashba coupling, we find the TI (at small onsite interaction $U$) and XY-AFM phases (at large interactions $U$), which are also present in the Kane-Mele-Hubbard model without the Rashba coupling. Larger Rashba coupling induces a topologically non-trivial, direct-gap-only semiconductor before the system eventually becomes metallic. The XY-AFM phase is found to break down at large Rashba couplings, beyond which the emerging magnetic phase can no longer be analyzed via VCA due to the limited cluster size. Drawing on insights from alternative approaches such as the pseudofermion functional renormalization group~\cite{reuther-12prb155127,reuther-14prb100405}, this parameter regime is conjectured to be dominated by incommensurate spiral order. The paper is organized as follows. In Sec.\ II, we introduce the Kane-Mele-Hubbard model and briefly describe the variational cluster approach (VCA). In Sec.\,III, we establish a first VCA benchmark by showing results for the KMH model in the absence of Rashba spin orbit coupling. This scenario serves as a prototypical framework to illustrate various subtle issues in the VCA approach, such as the cluster dependence; details are delegated to Appendix\,A. Subsequently, the results for the KMH model in the presence of finite Rashba SO coupling are presented in Sec.\,IV. In Sec.\,V, we conclude that the non-trivial phases of the Kane-Mele model emerging due to Rashba SO coupling persist in the presence of interactions, and that the interplay of interactions and Rashba SO coupling establishes a promising direction of study in theory and experiment. \begin{figure}[t] \includegraphics[scale=0.4]{iso+rso.pdf} \caption{(Color online). (a) Illustration of the hopping term $\propto t$ and the intrinsic SO term $\propto i\lambda\sigma^z$.
(b) Illustration of the nearest-neighbor vectors $\bs{\delta}_i$ ($i=1,2,3$) and of the Rashba SO term $\propto i\lambda_R$ with different spin dependences in different hopping directions $\bs{\delta}_i$.} \label{fig:soc} \end{figure} \section{Model and Methodology} \subsection{Kane-Mele-Hubbard model with Rashba spin-orbit coupling} The Kane-Mele-Hubbard model is governed by the Hamiltonian \begin{equation}\label{ham} \begin{split} \mathcal{H} =& -t \sum_{\langle ij \rangle\,\sigma} c_{i\sigma}^\dag c_{j\sigma}^{\phantom{\dag}} +i\lambda \sum_{\langle\!\langle ij \rangle\!\rangle\,\alpha\beta} c_{i\alpha}^\dag \nu_{ij} \sigma^z_{\alpha\beta} c_{j\beta}^{\phantom{\dag}} \\[5pt] & + i \lambda_R \sum_{\langle ij \rangle\,\alpha\beta} c_{i\alpha}^\dag (\bs{\sigma}_{\alpha\beta} \times \bs{d})_z \,c_{j\beta}^{\phantom{\dag}} +U\sum_i n_{i\uparrow}n_{i\downarrow}\ . \end{split} \end{equation} The operator $c_{i\alpha}$ annihilates a particle with spin $\alpha$ on site $i$, $t$ is the hopping amplitude (which we set to unity, $t\equiv 1$, throughout the paper), $\lambda$ the intrinsic spin orbit coupling, $\lambda_R$ the amplitude of the Rashba SO coupling, $U$ parametrizes the local Coulomb (Hubbard) interactions, and $\nu_{ij}=\pm 1$ depending on whether the electron traversing from $i$ to $j$ makes a right ($+1$) or a left ($-1$) turn (Fig.~\ref{fig:soc}\,(a)). As usual, $\langle ij \rangle$ indicates that $i$ and $j$ are nearest-neighbor sites while $\langle\!\langle ij \rangle\!\rangle$ refers to second-nearest neighbors. The vector $\bs{d}$ points from site $i$ to site $j$ and corresponds to the nearest-neighbor vectors $\bs{\delta}_i$ ($i=1,2,3$) (Fig.\,\ref{fig:soc}\,(b)); $\sigma^\mu$ ($\mu=x,y,z$) denotes the three Pauli matrices corresponding to the spin degree of freedom. The explicit spin dependence of the Rashba SO term, $(\bs{\sigma}\times\bs{d})_z$, is visualized in Fig.\,\ref{fig:soc}\,(b). The spin orbit term $\propto\lambda$ breaks the SU(2) symmetry down to U(1), while the Rashba term $\propto\lambda_R$ breaks the remaining U(1) spin symmetry down to $\mathbb{Z}_2$. It also breaks the spatial inversion symmetry explicitly. The Rashba spin-orbit term, as a part of the original Kane-Mele model, has so far generally been neglected in studies of the interacting scenario. Note that in the original work by Kane and Mele, a staggered sublattice potential (Semenoff mass) was also discussed, which we will not elaborate on further in the following. This term is particularly useful to probe the transition from a topological band insulator phase into a trivial band insulator phase~\cite{haldane88prl2015,kane-05prl146802,kane-05prl226801,cocks-12prl205303,orth-13jpb134004,rachel13arXiv:1310.3159}, but does not yield distinctly new phases, which are the focus of our investigations in the following. \subsection{Variational Cluster Approach} \subsubsection{Method} The zero-temperature variational cluster approach (VCA)\,\cite{potthoff03epjb335} is based on the self-energy functional theory\,\cite{potthoff03epjb429,potthoff05assp135}, which provides an efficient numerical technique for studying strongly correlated systems, especially in the presence of different competing, potentially long-ranged, orders. The VCA simplifies the lattice problem, as defined in Eq.~\eqref{ham}, to an exactly solvable problem defined in a reference system consisting of decoupled finite-size clusters.
The thermodynamic limit is recovered by reintroducing the inter-cluster hopping to the decoupled clusters via a non-perturbative variational scheme based on self-energy functional theory. The VCA has been successfully applied to many interesting problems, including the high-T$_{c}$ cuprates~\cite{senechal-05prl156404,balzer-10prb144516} and correlated topological insulators~\cite{yu-11prl010401}. In particular, this method is suitable for our current study since the topologically non-trivial properties of the $\mathbb{Z}_2$ topological insulators are appropriately accounted for. By construction, the VCA becomes exact in the limit $U\to 0$. Hubbard onsite interactions might give rise to competing phases (such as magnetic order) which can be accurately described by the VCA grand potential. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{clusters_new.pdf} \end{center} \caption{(Color online). Honeycomb lattice covered with single clusters in VCA: (a) six-site clusters (PBC). (b) ten-site clusters (PBC). (c) eight-site clusters (PBC). (d) Honeycomb ribbon (cylinder) covered with eight-site clusters.} \label{fig:clusters} \end{figure} In the self-energy functional theory, the grand potential of a system defined by a Hamiltonian $H=H_0(\mathbf{t})+H_1(\mathbf{U})$ is written as a functional of the self-energy $\Sigma$: \begin{align} \Omega[\Sigma]&= F\left[ \Sigma \right]+\text{Tr}\ln\left( G^{-1}_0-\Sigma \right)^{-1} \, , \label{eq:grand-potential-functional} \end{align} where $F\left[ \Sigma \right]$ is the Legendre transform of the Luttinger-Ward functional and $G_0=(\omega+\mu-\mathbf{t})^{-1}$ is the non-interacting Green's function. It can be shown that the functional $\Omega[\Sigma]$ becomes stationary at the physical self-energy, \emph{i.e.},\ $\delta\Omega\left[ \Sigma_{\rm phys} \right]=0$.\cite{potthoff03epjb335} As the Luttinger-Ward functional is universal, it has the same interaction dependence for systems with any set of single-particle parameters $\mathbf{t'}$ as long as the interaction $\mathbf{U}$ remains unchanged. Note that the functional $\Omega\left[ \Sigma \right]$ itself is not approximated by any means; we restrict, however, the ``parameter'' space of possible self-energies to the self-energies of the reference system. Thus, the stationary points are obtained from the self-energy $\Sigma'=\Sigma\left[ \mathbf{t'} \right]$ of a system defined by the Hamiltonian $H^{\prime}=H_0(\mathbf{t'})+H_1(\mathbf{U})$, which we call the reference system. Let us define $V=\mathbf{t}-\mathbf{t}'$. Now we are able to conveniently define the VCA Green's function, \begin{equation} G_{\rm VCA}^{-1} = G'^{-1} - V\ . \end{equation} In terms of the reference system, the VCA grand potential is calculated more conveniently as \begin{align} \Omega[\Sigma']&= \Omega'+\text{Tr}\ln\left( G^{-1}_0-\Sigma' \right)^{-1} - \text{Tr}\ln(G') \, , \label{eq:grand-potential} \end{align} with $\Omega'$, $\Sigma'$, and $G'$ denoting the grand potential, the self-energy and the Green's function of the reference system, respectively. The reference system is chosen such that it can be treated exactly. Here, we choose an array of decoupled clusters with open boundary conditions and calculate $\Omega'$, $\Sigma'$, and $G'$ via exact diagonalization. While correlations beyond the reference system size are included on a mean-field level, the short-range correlations within the reference system are fully taken into account in the VCA, resembling related (cluster) DMFT approaches.
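To make the construction above concrete, consider its simplest limit: for $U=0$ the cluster self-energy vanishes and the relation $G_{\rm VCA}^{-1} = G'^{-1} - V$ reduces to cluster perturbation theory. The following minimal sketch (our own toy example, a noninteracting one-dimensional chain split into two-site clusters) shows that the poles of $G_{\rm VCA}(k,\omega)$ then recover the exact tight-binding band folded into the reduced zone.

```python
import numpy as np

t, eta = 1.0, 0.05            # hopping and Lorentzian broadening
T_cluster = np.array([[0, -t], [-t, 0]], dtype=complex)  # open 2-site cluster

def V(k):
    """Inter-cluster hopping in superlattice momentum space: site 1 of
    cluster R couples to site 0 of cluster R+1."""
    Vk = np.zeros((2, 2), dtype=complex)
    Vk[1, 0] = -t * np.exp(1j * k)
    Vk[0, 1] = -t * np.exp(-1j * k)
    return Vk

def G_vca(k, w):
    """G_VCA^{-1} = G'^{-1} - V with Sigma' = 0, i.e.
    G'^{-1} = (w + i*eta) - t_cluster."""
    Ginv = (w + 1j * eta) * np.eye(2) - T_cluster - V(k)
    return np.linalg.inv(Ginv)

# Peaks of A(k, w) = -Im Tr G / pi versus the exact band +/- 2t cos(k/2)
ws = np.linspace(-3, 3, 1201)
for k in np.linspace(-np.pi, np.pi, 5):
    A = np.array([-np.trace(G_vca(k, w)).imag / np.pi for w in ws])
    print(f"k = {k:+.2f}:  peak at w = {ws[A.argmax()]:+.2f}"
          f"  (exact band: +/-{2 * t * np.cos(k / 2):.2f})")
```

For finite $U$, the only change is that $G'$ and $\Sigma'$ are obtained from the exact diagonalization of the interacting cluster.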
\subsubsection{Cluster size and shape} Since a spinful Hubbard model involves four basis states for each lattice site, we are generally restricted to rather small clusters with a maximum of ten sites (Fig.\,\ref{fig:clusters}\,(b)). Furthermore, the choice of the reference system, \emph{i.e.},\ the cluster shape and size, is constrained by the requirement that the honeycomb lattice needs to be fully covered, either using periodic boundary conditions (PBCs)--as realized on a torus--or cylindrical boundary conditions. We consider six-, eight-, and ten-site clusters in the case of PBCs and eight-site clusters for cylindrical boundary conditions with zig-zag edges (Fig.\,\ref{fig:clusters}). (Note that the six- and ten-site clusters could also be used for ribbons (cylinders) with armchair edges, which are not further considered here; see also Ref.\,\onlinecite{wu-12prb205102}.) While one generally expects to obtain more accurate results with a larger cluster, the effect of the lattice partitioning, \emph{i.e.},\ the cluster dependence, is rather strong. {\it We therefore extract our physical results from the joint consideration of all cluster sizes reachable by VCA, which is indispensable to obtain physically meaningful results from finite cluster approaches in general.} In the topological insulator phase we explore the edge states connecting the valence and conduction bands of the system. These edge states typically penetrate a few unit cells into the bulk. If the ribbon height (\emph{i.e.},\ the distance between the upper and lower edge) does not exceed a few unit cells, it might happen that the penetrating edge states from the upper and lower edge couple to each other and gap out. To avoid this, we have to make sure that the ribbon height is sufficiently large; we build a supercluster which consists of $n$ normal clusters (as described above) stacked on top of each other, as illustrated in Fig.\,\ref{fig:clusters}\,(d). The supercluster corresponds to the unit cell of the effectively one-dimensional superlattice and is defined by the block-tridiagonal matrix \begin{equation} G'^{-1}=\begin{pmatrix} G'^{-1}_1& t_{1,2} \\[5pt] t_{2,1} &~G'^{-1}_2~& t_{2,3} \\[5pt] & t_{3,2}& ~G'^{-1}_3~& t_{3,4} \\[5pt] &&\ddots&\ddots&\ddots \\[5pt] &&&t_{n-1,n-2}&~G'^{-1}_{n-1}~ &t_{n-1,n} \\[5pt] &&&&t_{n,n-1}&G'^{-1}_{n} \end{pmatrix} \label{eq:supercluster} \end{equation} where $G'$ is the Green's function of the supercluster, a matrix of linear dimension $2L_c \cdot n$, the $G'_{i}$ are the cluster Green's functions, and $t_{i,i+1}$ is the hopping matrix connecting the two cluster Green's functions $G'_i$ and $G'_{i+1}$; $L_c$ is the number of cluster sites. To separate edge states from the upper and lower edge, we stack at least eight clusters to form a supercluster from which we compute the single-particle spectral function (displaying the edge states). The single-particle spectral function $A(k,\omega)$ is defined as in the standard case of PBCs via \begin{equation}\label{def-Akw} A(k,\omega) = -\frac{1}{\pi} {\rm Im}\Big\{ G_{\rm VCA}(k,\omega) \Big\}\ , \end{equation} where the VCA Green's function depends on the momentum $k$ along the circumferential direction of the cylinder.
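The assembly of Eq.\,(\ref{eq:supercluster}) and the evaluation of Eq.\,(\ref{def-Akw}) are mechanical enough to sketch in code. The snippet below is an illustration of ours with mock, noninteracting $2\times 2$ blocks (signs are absorbed into the hopping blocks, and the circumferential momentum is modeled by Bloch phases on the diagonal blocks); in the actual calculation the $G'_i$ come from exact diagonalization, and only the outermost blocks would be kept to display the edge weight.

```python
import numpy as np

eta = 0.05  # Lorentzian broadening

def supercluster_Ginv(w, k, H_blocks, t_intra, t_bloch):
    """Block-tridiagonal inverse Green's function of the supercluster.
    H_blocks : n mock cluster Hamiltonians (U = 0, so G'^{-1}_i = w+i*eta - H_i)
    t_intra  : n-1 hopping blocks stacking clusters into the supercluster
    t_bloch  : hopping to the neighboring supercluster around the cylinder,
               entering each diagonal block with Bloch phases e^{+/- ik}"""
    n, m = len(H_blocks), H_blocks[0].shape[0]
    Ginv = np.zeros((n * m, n * m), dtype=complex)
    for i, Hi in enumerate(H_blocks):
        s = slice(i * m, (i + 1) * m)
        Ginv[s, s] = ((w + 1j * eta) * np.eye(m) - Hi
                      - t_bloch * np.exp(1j * k)
                      - t_bloch.conj().T * np.exp(-1j * k))
    for i, tb in enumerate(t_intra):
        r, c = slice(i * m, (i + 1) * m), slice((i + 1) * m, (i + 2) * m)
        Ginv[r, c] = tb
        Ginv[c, r] = tb.conj().T
    return Ginv

def A(k, w, H_blocks, t_intra, t_bloch):
    """Spectral function, traced over the supercluster."""
    G = np.linalg.inv(supercluster_Ginv(w, k, H_blocks, t_intra, t_bloch))
    return -np.trace(G).imag / np.pi

# Mock example: 8 stacked two-site clusters (a 16-site supercluster)
Hc = np.array([[0, -1], [-1, 0]], dtype=complex)
tb = np.zeros((2, 2), dtype=complex); tb[1, 0] = -1.0
tp = -0.5 * np.eye(2, dtype=complex)
print(A(k=0.3, w=0.0, H_blocks=[Hc] * 8, t_intra=[tb] * 7, t_bloch=tp))
```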
\subsubsection{Symmetry breaking Weiss fields} In quantum cluster approaches (and dynamical mean-field theory), spontaneous symmetry breaking on finite-size clusters is resolved by introducing artificial mean-field-like Weiss fields of the form \begin{equation}\label{xaf-weiss} H_{X-{\rm AF}} = h^x \sum_{i\,\alpha\beta} \left( a_{i\alpha}^\dag \sigma^x_{\alpha\beta} a_{i\beta}^{\phantom{\dag}} - b_{i\alpha}^\dag \sigma^x_{\alpha\beta} b_{i\beta}^{\phantom{\dag}} \right)\ , \end{equation} where the operator $a_i$ ($b_i$) acts on sublattice $A$ ($B$). Eq.\,\eqref{xaf-weiss} is the simplest example of an antiferromagnetic Weiss field with N\'eel order in the $x$-direction (in-plane). Given an external Weiss field for a certain order parameter, a stable magnetic solution is characterized by a stationary point of the grand potential at a finite field strength. Furthermore, in order to represent the physical ground state, such a stationary point needs to have a lower energy than the zero-field solution. In principle, similar to a mean-field treatment, this procedure needs to be repeated for all possible configurations of Weiss fields. The order parameter can then be determined from the magnetic solution with the lowest energy. The cluster decomposition of the lattice, however, restricts the possible choices of Weiss fields to those which are compatible with the cluster size and shape, \emph{i.e.},\ a Weiss field needs to have the same periodicity as the array of clusters. Typically, for a given cluster only a few types of magnetic order may be investigated. For example, a N\'eel pattern cannot be implemented on a three-site cluster. Likewise, incommensurate spiral order is incompatible with any finite cluster. \subsubsection{Variation of single-particle parameters} In the variational procedure of the VCA, the amplitudes of every single-particle term as well as the chemical potential $\delta\mu$ need, in principle, to be varied. It is well established, however, that for practical purposes the variation of $\delta\mu$ is often sufficient and the additional variation of, say, the hopping $\delta t$ does not lead to a new stationary point. For the KMH model, in principle we have to vary not only the chemical potential, but also the hopping, spin orbit coupling, and Rashba term independently. In Appendices A and B, we illustrate the difference between (i) variation of $\delta\mu$, (ii) variation of $\delta\mu$ and $\delta t$, (iii) variation of $\delta\mu$, $\delta t$, and $\delta \lambda$, as well as (iv) variation of additional antiferromagnetic Weiss fields. Essentially, we find that variation of $\delta t$ has a significant effect on the phase diagrams, including the magnetic phase transitions. Additional variation of $\delta\lambda$ or $\delta\lambda_R$, respectively, does not seem to influence the variational procedure. Still, performing VCA on the honeycomb lattice with variation of $\delta\mu$ only might lead to numerical artifacts and should be avoided. Further details are given in Appendices A and B. \section{Kane-Mele-Hubbard model without Rashba SO Coupling $\bs{(\lambda_R=0)}$} \subsection{Topological insulator} \subsubsection{$\mathbb{Z}_2$ invariant} In the presence of inversion symmetry the topological invariant can be conveniently calculated by probing bulk properties only, which is applicable even in the interacting case. In particular, within VCA this can be achieved for any cluster size.
Expressing topological invariants in terms of single-particle Green's functions was pioneered by Volovik~\cite{volovik03}; more recently, Gurarie\,\cite{gurarie-11prb085426} conveniently reformulated Volovik's invariant for the field of topological insulators. Subsequently, Wang \emph{et al.\ }\,\cite{wang-10prl256803,wang-12prb165126} derived simplified expressions for inversion-symmetric Hamiltonians. The $\mathbb{Z}_2$ topological invariant relevant for topological insulators is computed from the full interacting Green's function through a Wess-Zumino-Witten term\,\cite{wang-10prl256803}, motivated by the concept of dimensional reduction in topological field theory~\cite{qi-11rmp1057,qi-08prb195424}. \begin{figure}[t] \begin{center} \includegraphics[width=.35\textwidth]{kmh_para.pdf} \end{center} \caption{(Color online). (a) Phase boundary in the $U$--$\lambda$ plane between the topological insulator and the trivial band insulator (``non-magnetic'' solution), obtained by a periodic eight-site cluster computation of the $\mathbb{Z}_2$ invariant. (b) Edge spectrum in the TI phase obtained for cylindrical geometry; the parameters ($\lambda=0.2$, $U=3$, $\lambda_R=0$) correspond to the light-blue star in the phase diagram (a). (a) and (b) are complementary approaches to detect the topological insulating phase.} \label{fig:kmh-z2invariant} \end{figure} In the presence of inversion symmetry (\emph{i.e.},\ when $\lambda_R\equiv 0$ and antiferromagnetic order is absent), we follow Wang \emph{et al.\ } and compute the topological invariant\,\cite{wang-12prb165126} via the parity eigenvalues of the Green's function obtained within VCA at the time-reversal invariant momenta (TRIM) $\bs{\Gamma}_{i}$ and zero energy. The Green's function is an $N\times N$ matrix with $N=2L_c$, where $L_c$ is the number of sites per cluster. Both $G$ and $G^{-1}$ can be diagonalized, yielding \begin{equation} G(i\omega,\bs{k})^{-1} \ket{\alpha(i\omega,\bs{k})} = \mu_\alpha(i\omega,\bs{k}) \ket{\alpha (i\omega,\bs{k})}\ , \label{eq:eigen-green} \end{equation} with $\mu_\alpha \in \mathbb{C}$. The Green's function matrix $G(i\omega,\bs{k})$ has the same eigenvectors $\ket{\alpha(i\omega,\bs{k})}$ but the inverse eigenvalues $\mu^{-1}_\alpha(i\omega,\bs{k})$. The states at the TRIMs, $\ket{\alpha (i\omega,\bs{\Gamma}_i)}$, are simultaneous eigenstates of $G$ and the parity operator $P$ and satisfy\,\cite{wang-12prb165126} \begin{equation} P \ket{\alpha (i\omega,\bs{\Gamma}_i)} = \eta_\alpha \ket{\alpha (i\omega,\bs{\Gamma}_i)}\ . \label{eq:eigen-parity} \end{equation} Since $\mu_\alpha(0,\bs{\Gamma}_i)$ is real, one can distinguish between positive ($\mu_\alpha(0,\bs{\Gamma}_i)>0$) and negative ($\mu_\alpha(0,\bs{\Gamma}_i)<0$) eigenvalues, denoted as R-zeros and L-zeros, respectively. This allows one to define the topological invariant $\Delta$ via \begin{equation} (-1)^{\Delta}=\prod_{\rm R-zero} \eta_{\alpha}^{1/2}=\pm1\ . \label{eq:ti} \end{equation} In Fig.\,\ref{fig:kmh-z2invariant}\,(a) we show the $U$--$\lambda$ plot of this invariant. Note again that $\Delta$ cannot be calculated when an antiferromagnetic Weiss field is present, due to the breaking of inversion symmetry. As a consequence, in VCA we independently investigate the magnetically ordered regime. The onset of a finite magnetization likewise sets the boundary beyond which the topological character of the insulating state vanishes.
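The bookkeeping of Eqs.\,(\ref{eq:eigen-green})--(\ref{eq:ti}) is compact enough to sketch in code. The toy example below is ours: for a noninteracting, inversion-symmetric two-band chain one has $G(0,k)^{-1}=-H(k)$ at half filling, and the product over R-zeros reduces to the familiar product of occupied-band parities at the TRIM; since the toy model is spinless, the parities are multiplied once per band rather than once per Kramers pair (hence no square root).

```python
import numpy as np

def delta_invariant(Ginv_at_TRIMs, P):
    """Diagonalize G^{-1}(0, Gamma_i), keep the R-zeros (positive
    eigenvalues) and multiply up their parity eigenvalues."""
    prod = 1.0
    for Ginv in Ginv_at_TRIMs:
        mu, vecs = np.linalg.eigh(Ginv)
        for a in np.where(mu > 0)[0]:            # R-zeros
            prod *= np.sign((vecs[:, a].conj() @ P @ vecs[:, a]).real)
    return 0 if prod > 0 else 1                  # Delta mod 2

# Toy model: H(k) = (m + cos k) s_z + sin k s_x, parity P = s_z,
# inversion-symmetric with TRIMs at k = 0 and k = pi
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def H(k, m):
    return (m + np.cos(k)) * sz + np.sin(k) * sx

for m in (0.5, 2.0):
    Ginvs = [-H(k, m) for k in (0.0, np.pi)]     # G(0,k)^{-1} = -H(k)
    print(f"m = {m}:  Delta = {delta_invariant(Ginvs, sz)}")
# Nontrivial (Delta = 1) for |m| < 1, trivial (Delta = 0) otherwise.
```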
\subsubsection{Edge states} As an alternative to a bulk measurement of the topological invariant, the topological insulator phase can also be identified by detecting the helical edge states which are a hallmark of the $\mathbb{Z}_2$ topological insulators considered here. This is accomplished by solving the Hamiltonian \eqref{ham} on a cylindrical geometry, as explained in the previous section. This method is reliable and is also applicable when the computation of the topological bulk invariant is too complicated, such as for the finite Rashba SO coupling addressed later. In Fig.\,\ref{fig:kmh-z2invariant}\,(b) the single-particle spectral function $A(k,\omega)$ defined for a ribbon geometry is shown ($\lambda=0.2$, $\lambda_R=0$, $U=4$). In the effectively one-dimensional Brillouin zone, one clearly sees a band gap between the upper and lower bands, which are connected by helical edge states crossing at the TRIM $k=\pi$. \subsection{XY Antiferromagnet} For $\lambda\to 0$ the Hamiltonian \eqref{ham} becomes invariant under SU(2) spin rotations and the antiferromagnetic N\'eel order is isotropic. Finite SO coupling $\lambda\not= 0$ drives the system into an easy-plane antiferromagnet with an ordering vector in the $x$-$y$ lattice plane\,\cite{rachel-10prb075106}, which has been confirmed by QMC\,\cite{hohenadler-11prl100403,dong-11prb205121}, VCA\,\cite{yu-11prl010401}, and pseudofermion functional RG\,\cite{reuther-12prb155127}. In order to compute the magnetic phase diagram within VCA, we apply antiferromagnetic Weiss fields in the $x$- and $z$-directions for various values of $\lambda$. \begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{af-xz-so-cb.pdf} \end{center} \caption{(Color online). Heat map of the grand potential $\Omega(h^{x},h^{z})$ as a function of the antiferromagnetic Weiss fields $h^{x}$ and $h^{z}$ for various values of $\lambda$. All plots have been obtained for the six-site cluster and $U=6$. Global minima of $\Omega$ are indicated by green points (lines). For $\lambda=0.1$ we find a second stationary point (blue point), which is a saddle point at finite $h^z\ne 0$ with higher energy.} \label{fig:af-xz-so} \end{figure} For $\lambda=0$ we find a circle of degenerate minima in the $h^x$-$h^z$ plane, indicating isotropic magnetic order. For finite $\lambda>0$, this degeneracy is lifted and magnetic order in the $x$-direction is energetically preferred. For small $\lambda=0.1$ there is an additional metastable solution (a saddle point in $\Omega$ indicated by the blue point in the top right panel of Fig.\,\ref{fig:af-xz-so}) corresponding to a magnetization in the $z$-direction. This solution, however, is not a global minimum of $\Omega$ and the system is still an easy-plane antiferromagnet. For larger $\lambda$, this metastable solution disappears. In total, the VCA confirms the established results on magnetic order in the KMH model. \subsection{Phase diagram} \begin{figure}[t] \begin{center} \includegraphics[width=.4\textwidth]{schem_kmh_phasedia.pdf} \end{center} \caption{(Color online). Schematic phase diagram of the Kane-Mele-Hubbard model ($\lambda_R=0$) as obtained from VCA.} \label{fig:kmh-schematic} \end{figure} As the final result, the interacting $U$--$\lambda$ phase diagram exhibits a semi-metal for $\lambda=0$, which is detected via a linear density of states near the Fermi level. This gives way to a topological insulator phase for finite $\lambda$ up to moderate interaction strengths. For stronger interactions, the system acquires XY antiferromagnetic order.
Obtaining a phase diagram such as Fig.\,\ref{fig:kmh-schematic} via a quantum cluster approach is challenging: (i) stabilizing semi-metals within real-space quantum cluster methods is rather involved; in particular, the six-site cluster may suffer from artifacts of the lattice partitioning. (ii) Clusters which do not have the shape of closed honeycomb rings underestimate the critical interaction strength $U_c$ associated with the onset of magnetization. (iii) Exclusive variation of the chemical potential might lead to an erroneous non-magnetic insulator phase up to small intrinsic spin orbit coupling\,\cite{yu-11prl010401}. In our analysis, where we also varied the hopping in order to minimize the grand potential, we could not find this non-magnetic insulator phase. Note that this erroneous non-magnetic insulator phase was linked to a proposed quantum spin liquid phase. Recently, it was shown using large-scale QMC calculations that there is no such spin liquid on the honeycomb lattice\,\cite{sorella-12sr992,assaad-13prx031010}, in perfect agreement with our analysis. (For an extensive discussion and details about (i)--(iii) we refer the interested reader to Appendix A.) The analysis done so far shows that a careful multi-size cluster analysis has to be employed in order to determine an artifact-free physical phase diagram. This equips us for our subsequent investigations of the KMH model in the presence of Rashba SO coupling studied in the next section. \section{Kane-Mele-Hubbard model including Rashba SO coupling $\bs{(\lambda_R>0)}$} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\textwidth]{km-edge_lso02_varlr.pdf} \caption{(Color online). Single-particle spectra on a cylinder geometry for $U=0$, $\lambda=0.2$, and different values of $\lambda_R$. From left to right: $\lambda_R=0$, $0.2$, $0.4$, $0.6$, and $0.8$. The spectra interpolate from a topological insulating phase ($\lambda_R=0$, $0.2$, and $0.4$) to a metallic phase ($\lambda_R=0.8$). In between, for $\lambda_R=0.6$, we find an additional weak topological semiconductor phase (see also Fig.\,\ref{fig:ts-phase}).} \label{fig:akw-U=0} \end{center} \end{figure*} In their seminal papers, Kane and Mele showed that the topological insulator phase persists until $\lambda_R = 2\sqrt{3}\lambda$, where the gap closes and the system enters a metallic phase\,\cite{kane-05prl146802,kane-05prl226801}. They computed the $\mathbb{Z}_2$ invariant to explore the corresponding phase diagram. In their work, they considered rather small values of the SO coupling, such as $\lambda = 0.03$ or $0.06$, and in general $\lambda\ll t$. For a description of graphene, which was the original intention of their work, such small SO coupling seemed realistic. However, with regard to the many different candidate systems potentially realizing the quantum spin Hall effect in a honeycomb lattice compound which have been proposed in the meantime, it is justified to consider larger spin orbit coupling such as $\lambda=0.2$. It turns out that for sufficiently large $\lambda\geq 0.1$ and $\lambda_R$ close to the predicted phase transition at $\lambda_R=2\sqrt{3}\lambda$, the system is not gapped anymore. The Rashba SO coupling bends the bands such that there is no full gap. On the other hand, there is always a direct gap for each wave vector $k$, \emph{i.e.},\ the conduction and valence bands neither touch nor cross each other -- this is the reason why the topological invariant (computed for $U=0$) labels this region as a topological insulator.
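This band bending is easy to verify at $U=0$ by scanning the Bloch bands of Eq.\,(\ref{ham}). The sketch below is our own construction; note that it normalizes the nearest-neighbor vectors in the Rashba term and fixes one convention for $\nu_{ij}$, so the precise $\lambda_R$ values of the boundaries may be rescaled relative to other conventions. It compares the minimal direct gap with the global (indirect) gap at half filling: a positive direct gap together with a negative indirect gap signals the direct-gap-only regime discussed next.

```python
import numpy as np

# Honeycomb geometry (lattice constant 1): NN vectors A -> B and NNN vectors
NN = [np.array([0.5, 0.5 / np.sqrt(3)]), np.array([-0.5, 0.5 / np.sqrt(3)]),
      np.array([0.0, -1 / np.sqrt(3)])]
NNN = [np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3) / 2]),
       np.array([-0.5, -np.sqrt(3) / 2])]
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
s0 = np.eye(2, dtype=complex)

def H_km(k, t=1.0, lam=0.2, lam_R=0.0):
    """4x4 Bloch Hamiltonian (sublattice x spin) of Eq. (1) at U = 0."""
    f = -t * sum(np.exp(1j * k @ d) for d in NN)     # NN hopping
    g = 2 * lam * sum(np.sin(k @ b) for b in NNN)    # intrinsic SO (nu = +/-1)
    r = sum(1j * lam_R * np.exp(1j * k @ d)          # Rashba, unit d-hat
            * (sx * d[1] - sy * d[0]) / np.linalg.norm(d) for d in NN)
    H = np.zeros((4, 4), complex)
    H[:2, 2:] = f * s0 + r
    H[2:, :2] = H[:2, 2:].conj().T
    H[:2, :2] = g * np.diag([1.0, -1.0])             # sigma_z on sublattice A
    H[2:, 2:] = -g * np.diag([1.0, -1.0])            # opposite sign on B
    return H

def gaps(lam, lam_R, n=90):
    """Minimal direct gap and global (indirect) gap at half filling."""
    G1 = 2 * np.pi * np.array([1.0, -1 / np.sqrt(3)])
    G2 = 2 * np.pi * np.array([0.0, 2 / np.sqrt(3)])
    ev, ec = [], []
    for u in np.linspace(0, 1, n, endpoint=False):
        for v in np.linspace(0, 1, n, endpoint=False):
            E = np.linalg.eigvalsh(H_km(u * G1 + v * G2, lam=lam, lam_R=lam_R))
            ev.append(E[1]); ec.append(E[2])
    ev, ec = np.array(ev), np.array(ec)
    return (ec - ev).min(), ec.min() - ev.max()

for lr in (0.0, 0.4, 0.6, 0.8):
    d, i = gaps(0.2, lr)
    print(f"lambda_R = {lr}:  direct gap = {d:.3f},  indirect gap = {i:.3f}")
```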
In fact, in this ``metallic'' region the edge states are well-defined and clearly visible (see the second-right panel in Fig.\,\ref{fig:akw-U=0} and Fig.\,\ref{fig:ts-phase}\,(b)). At each momentum $k$ the system has a gap, but globally the system is gapless. Therefore we call this region a weak topological semiconductor phase, where ``semiconductor'' refers to a {\it direct-gap-only insulating phase}. In the presence of disorder, individual $k$ values cannot be distinguished anymore, leading to the attribute {\it weak}. Still, this phase is stable in the presence of interactions, as we will explicate below. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{ts-phase.pdf} \caption{(Color online). (a) $\lambda_R$--$\lambda$ phase diagram for the non-interacting Kane-Mele model displaying the TI, metal (M), and topological semiconductor (TS) phases. (b) Zoom into the edge spectrum for $\lambda=0.2$, $\lambda_R=0.6$, $U=0$ shown in Fig.\,\ref{fig:akw-U=0}. (c) $U$--$\lambda_R$ phase diagram for $\lambda=0.2$ in the non-magnetic regime: the weak TS phase persists in the presence of interactions.} \label{fig:ts-phase} \end{center} \end{figure} \subsection{Weak to intermediate interactions} \begin{figure}[t] \centering \includegraphics[width=.37\textwidth]{spec-multi-km02-ra06-u-pm-0246.pdf} \caption{(Color online). Spectral function $A(k,\omega)$ on cylindrical geometry (as defined in Eq.\,\eqref{def-Akw}) for $\lambda=0.2$, $\lambda_{R}=0.6$, and various values of $U$. For better illustration, only the weights of the outermost sites on the cylinder are taken into account. From top to bottom: $U=0$, $2$, $4$, and $6$. For $U=0$ and $U=2$ we find the weak TS phase, for $U=4$ and $U=6$ a magnetically ordered insulating phase.} \label{fig:spec-multi-km01-ra06-u} \end{figure} For $\lambda<0.1$, we only find TI and metallic phases at $U=0$, which persist for moderate interaction strength. Fixing $\lambda=0.2$, we find three different phases at $U=0$: TI, weak topological semiconductor (TS), and metal (see Fig.\,\ref{fig:ts-phase}\,(a,b)). The TS phase is stable with respect to interactions, see Fig.\,\ref{fig:ts-phase}\,(c). To gain further insight, we compute single-particle spectral functions on cylindrical geometry (using the eight-site cluster) to determine the edge state spectrum (see Fig.\,\ref{fig:spec-multi-km01-ra06-u}). For $\lambda=0.2$ and $\lambda_R=0.6$, the TS phase is stable up to moderate values of $U$. At around $U=4$ the system enters a magnetically ordered phase. Upon further increasing $U$ the bulk gap increases rapidly; however, no edge states connect the valence and conduction bands anymore, indicating the trivial topology of the magnetic phase. We perform an additional test to verify that the two modes crossing at $k=\pi$ in Fig.\,\ref{fig:spec-multi-km01-ra06-u} ($U=0$ and $U=2$) are indeed edge states: we repeat the computation of the single-particle spectral function $A(k,\omega)$ on a cylindrical geometry but with additional links connecting the two edges of the cylinder. These additional links are chosen such that they are compatible with the band structure of the KMH model. As such, moving from a cylindrical to a toroidal geometry, the bulk spectra should be unchanged, with the only difference that the edges have disappeared -- which is exactly what we find. \subsection{Strong interactions and magnetic order} For finite $\lambda>0$ and $\lambda_R=0$, the magnetic region of the phase diagram is an XY antiferromagnet, as discussed above.
Treating the Rashba term as a small perturbation leaves the magnetic phase unchanged. Thus we expect an XY-AFM in the weak-$\lambda_R$ region. \begin{figure}[t!] \begin{center} \includegraphics[width=.35\textwidth]{af-xy-km01-ra00-03-ring2.pdf} \end{center} \caption{(Color online). Heat map of the grand potential $\Omega(h^x,h^y)$ as a function of the antiferromagnetic Weiss fields. On the six-site ring-shaped cluster we find easy-plane AFM order for $\lambda_{R}<0.3$ (at $\lambda=0.1$ and $U=6$). For larger Rashba coupling we do not find any saddle points at finite Weiss fields.} \label{fig:af-xy-so-km01-ra00-03} \end{figure} First, we use the six-site cluster and compute the grand potential $\Omega$ as a function of $h^x$ and $h^y$. As expected, we find the XY-AFM: $\Omega$ as a function of $h^x$ and $h^y$ shows a perfect circle of stationary points at finite Weiss fields $h^{x/y}$ (Fig.\,\ref{fig:af-xy-so-km01-ra00-03}). For the six-site cluster, the saddle point associated with the XY-AFM phase is found at decreasing Weiss fields $h^{x/y}$ as we increase the Rashba coupling. For $\lambda_R=0.3$ (at fixed $\lambda=0.1$), we do not find any magnetic solution anymore (see the lower panels in Fig.\,\ref{fig:af-xy-so-km01-ra00-03}). This implies that there is either a true non-magnetic insulator phase or a magnetically ordered phase which cannot be detected within VCA. For instance, this is the case for incommensurate spiral order, where the Weiss field is incompatible with the cluster partitioning. A spiral phase is likely to occur since the spin Hamiltonian (\emph{i.e.},\ the Hamiltonian obtained in the strong coupling limit $U\to\infty$ of Eq.\,\eqref{ham}) contains terms of Dzyaloshinskii-Moriya type\,\cite{reuther-12prb155127}. Recently, spiral order was also found in a Kane-Mele type model\,\cite{shitade-09prl256403} with multi-directional SO coupling in the presence of strong interactions\,\cite{reuther-12prb155127,kargarian-12prb205124,liu-13arXiv:1307.4597}. In principle, we cannot rule out the existence of the non-magnetic insulator phase for large $U$ and large Rashba spin orbit coupling. The existence of such a phase would be exciting, in particular since it could be related to a recently proposed fractionalized quantum spin Hall phase (dubbed QSH$^\star$)\,\cite{ruegg-12prl046401}. \subsection{Phase diagram} As the final result of this section and this paper, the $U$--$\lambda_R$ phase diagram contains, for moderate Rashba SO coupling $\lambda_R$, a TI phase (weak interactions) and an XY-AFM phase (strong interactions). Stronger Rashba SO coupling drives the TI into a metallic phase. If the intrinsic SO coupling $\lambda$ is sufficiently large ($\lambda \geq 0.1$), an additional weak topological semiconductor phase emerges between the TI and the metallic phase. In the strong-interaction regime, we do not find a magnetic solution whose unit cell would be consistent with the available cluster sizes in VCA; this regime is hence likely to host incommensurate spiral magnetic order. All these findings culminate in the schematic phase diagram of Fig.\,\ref{fig:schematic-phasedia}. \section{Conclusions} We have investigated the effect of Rashba spin orbit coupling in the Kane-Mele-Hubbard model as a prototypical correlated topological insulator. We have applied the variational cluster approach and determined the phase diagram via the computation of the local density of states, magnetization, single-particle spectral function, and edge states to detect the topological character.
The topological insulating phase persists in the presence of Rashba spin-orbit coupling and interactions. Furthermore, in the strong coupling regime, the Rashba term induces magnetic frustration, which leads to incommensurability effects in the magnetic fluctuation profile and is conjectured to predominantly give rise to spiral magnetic phases. Rashba spin orbit coupling also gives rise to peculiar metallic phases. We find a weak topological semiconductor phase for a wide range of Hubbard interaction strengths as well as intrinsic and Rashba spin orbit couplings. It will be exciting to investigate some of these effects in future experiments, where the Rashba term arises due to external fields or intrinsic environmental effects. \begin{acknowledgements} The authors acknowledge discussions with Karyn Le Hur, Martin Hohenadler, Fakher F.\ Assaad, Andreas R\"uegg, Motohiko Ezawa, Tobias Meng, Michael Sing, J\"org Sch\"afer, and Matthias Vojta. We thank the LRZ Munich and ZIH Dresden for generous allocation of CPU time. ML is supported by the DFG through FOR 1162. JR acknowledges support by the Deutsche Akademie der Naturforscher Leopoldina through grant LPDS 2011-14. RT is supported by the ERC starting grant TOPOLECTRICS of the European Research Council (ERC-StG-2013-336012). SR is supported by the DFG through FOR 960, the DFG priority program SPP 1666 ``Topological Insulators'', and by the Helmholtz association through VI-521. We thank the Center for Information Services and High Performance Computing (ZIH) at TU Dresden for generous allocations of computer time. \end{acknowledgements}
\section{Introduction} The idea that supermassive black holes are generic components of galactic nuclei has come to be widely accepted, due largely to the kinematical detection of dark objects with masses $10^{6-9.5}$M$_{\odot}$ at the centers of about a dozen galaxies (Kormendy \& Richstone 1995; Jaffe, this volume). The mean mass of these objects -- of order $10^{-2.5}$ times the mass of their host spheroids -- is consistent with the mass in black holes needed to produce the observed energy density in quasar light given reasonable assumptions about the efficiency of quasar energy production (Chokshi \& Turner 1992). The black hole paradigm also explains in a natural way many of the observed properties of energy generation in active galactic nuclei and quasars (Blandford 1990). However it has long been clear that supermassive black holes might be important from a purely stellar-dynamical point of view: both within the nucleus, where the gravitational force is dominated by the black hole, and at much larger radii, if a substantial number of stars are on orbits that carry them into the center (Gerhard \& Binney 1985). Recent work, reviewed here, has developed these ideas and given support to the view that supermassive black holes may be important for understanding many of the systematic, large-scale properties of elliptical galaxies and bulges. \section{Nuclear Dynamics} \subsection{Cusp formation} The luminosity densities of early-type galaxies and bulges are well approximated as power-laws, $\nu\propto r^{-\gamma}$, inside of a ``break radius'' $r_b$ (Crane et al. 1993; Gebhardt et al. 1996). The break radius is difficult to measure in fainter galaxies whose steep cusps have roughly the same power-law index ($\gamma\approx 2$) as the larger-radius profile. In bright galaxies, $M_v\lesssim -20$, the central cusps are shallower ($\gamma\lesssim 1$) and the luminosity profiles show a definite change in slope at $r\approx r_b$ (Kormendy, this volume). In these galaxies, $r_b$ scales roughly with total luminosity (Faber et al. 1997); spheroids with $M_v\approx -21$ have $r_b\approx 50$ pc, easily resolvable from the ground for nearby galaxies. To what extent are the stellar cusps attributable to the presence of a black hole? The gravitational force from a black hole of mass $M_h$ would dominate the force from the stars within a radius $r_g$ such that $M_*(<r_g) = M_h$. This radius is roughly comparable to $r_b$ in the handful of galaxies for which both $r_b$ and $M_h$ can be accurately measured; for instance, in M87, where $M_h$ is well constrained by the kinematics of a gas disk (Macchetto et al. 1997), $r_g\approx r_b\approx 300$ pc. However the black hole can strongly influence the motions of stars only inside the (typically smaller) radius $r_h=GM_h/\sigma_*^2$, the ``radius of influence,'' where orbital velocities around the black hole are comparable to stellar velocity dispersions. M87 has $r_h\approx 60$ pc, much smaller than $r_g$ or $r_b$ and barely resolvable from the ground. One expects to observe photometric and kinematic features in the stellar distribution near $r_h$; indeed, the upturn in stellar velocities that occurs near this radius is one signature of a black hole. However few if any galaxies exhibit a clear feature in the stellar luminosity profile inside of $r_b$ (Gebhardt et al. 1996), a fact that must be explained by any model of cusp formation.
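For orientation, the scaling $r_h = GM_h/\sigma_*^2$ is straightforward to evaluate; the short Python sketch below (our own illustration, using assumed round-number values for M87 rather than the measurements cited above) reproduces the quoted order of magnitude.
\begin{verbatim}
# Radius of influence r_h = G * M_h / sigma^2 (illustrative values only)
G = 4.301e-3       # gravitational constant [pc (km/s)^2 / M_sun]
M_h = 2.4e9        # assumed black hole mass for M87 [M_sun]
sigma = 400.0      # assumed stellar velocity dispersion [km/s]

r_h = G * M_h / sigma**2
print("r_h = %.0f pc" % r_h)   # ~65 pc, comparable to the ~60 pc quoted above
\end{verbatim}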
\begin{figure}[t] \plotfiddle{merritt_1.ps}{7.75cm}{0}{75}{75}{-245}{-225} \caption{\footnotesize Cusp formation by adiabatic growth of a black hole (Merritt \& Quinlan 1998). The initial model (thin curves) is a triaxial ellipsoid, shown here spherically symmetrized; heavy curves are the final models after growth of a central point containing $0.3\%$, $1\%$ and $3\%$ of the stellar mass. {\bf (a)} Stellar density profiles $\rho(r)$; the thin line has a logarithmic slope of $-2$. These cusps are steeper than the $\rho\propto r^{-1.5}$ cusps that form in initially isothermal cores. {\bf (b)} Velocity dispersion profiles $\sigma(r)$. The small-radius dependence is $\sigma\sim r^{-1/2}$ (thin line). } \label{fig1} \end{figure} One widely-discussed model for the formation of stellar cusps is the slow growth of a black hole in a pre-existing, constant-density core. ``Slow'' here means on a time scale long compared to stellar orbital periods, $\sim 10^{6-7}$ yr, thus guaranteeing conservation of orbital actions; the assumption is a reasonable one if black holes grew on the Salpeter time scale, $t_s\approx 10^{7-8}$ yr (Krolik 1999). If the core was initially isothermal, and if the final mass of the black hole is less than the initial core mass, the final density satisfies $\rho\propto r^{-\gamma},\ \gamma=1.5$, for $r\lesssim r_h$ (Peebles 1972; Young 1980). An interesting feature of the Peebles-Young model is the smooth, nearly inflectionless form of the final density profile between $r_h$ and $r_c$, the initial core radius. Van der Marel (1999; this volume) took advantage of this fact, associating $r_c$ with the observed break radius $r_b$. He was able to fit the weak power-law cusps of bright ellipticals to the region $r_h\lesssim r\lesssim r_c$ where the Peebles-Young profile is locally well approximated by a shallow power law. The steep cusps of faint ellipticals were less accurately reproduced. The initial conditions adopted by Peebles, Young and van der Marel -- isothermal spheres with large core radii -- are computationally convenient but not very compelling from a physical point of view. Formation of galaxies through hierarchical clustering (e.g.\ Primack et al., this volume) or collapse (e.g.\ May \& van Albada 1984) tends to make systems with small or nonexistent cores and with phase space densities that rise more rapidly than $\sim e^{-aE}$ near the center. Growing a black hole in such a galaxy produces a steeper cusp than in an initially isothermal core, typically with $\gamma\gtrsim 2$ (Quinlan et al. 1995; Merritt \& Quinlan 1998; Fig. 1). This is just the slope characteristic of the central regions of faint ellipticals and it would seem reasonable to attribute the cusps in these galaxies to black holes. The absence of a prominent break radius in faint ellipticals would imply that the initial core mass (if there was a core) did not greatly exceed the final mass of the black hole. Dissipation is often invoked as a possible mechanism for making steep cusps (e.g.\ Faber et al. 1997) though simulations of dissipative core formation (e.g.\ Mihos \& Hernquist 1994) have so far failed to produce power-law profiles. The weak cusps in brighter ellipticals are not so naturally explained via the adiabatic growth model: not only are they much shallower than $\rho\sim r^{-2}$ but -- as Fig.
1 shows -- the transition from $\rho\sim r^{-2}$ at $r\lesssim r_h$ to $\rho\sim $ constant at $r\sim r_c$ leaves an inflection in the density profile if $r_h < r_c$, and such inflections are rarely if ever seen. A more natural formation model for weak cusps would predict only one characteristic radius, $r_b$. Such a model was proposed by Ebisuzaki, Makino and Okumura (1991). These authors noted that the coalescence of two black holes following a galaxy merger would transfer energy from the binary to the stars in the nucleus, creating a low-density core with mass comparable to the combined mass of the two black holes. In their model (further developed in Makino \& Ebisuzaki 1996 and Makino 1997), the break radius $r_b$ measures the size of the region ``scoured out'' by the binary black hole -- consistent with the observed, rough equality of $r_b$ and $r_g$. The predicted density profile within $r_b$ is tolerably close to a power law with index $\gamma\lesssim 1$; in this model, the observed trend of decreasing $\gamma$ with increasing luminosity would simply reflect a greater role for mergers in the formation of brighter galaxies. One way to discriminate between the adiabatic growth and binary black hole models for cusp formation is via their very different predictions about the stellar kinematics near the black hole. Slow growth of a black hole leaves the shape of the stellar velocity ellipsoid nearly unchanged (Young 1980; Goodman \& Binney 1984), even at radii where $\sigma_*$ increases substantially; the reason is that orbital eccentricities are almost unaffected by adiabatic changes in the potential (Lynden-Bell 1963). By contrast, ejection of stars by a coalescing black hole binary produces a core with strongly circularly-biased motions, since stars on radial trajectories are more likely to interact with the binary (Quinlan 1996). Quinlan \& Hernquist (1997) and Nakano \& Makino (1999) found that the velocity anisotropy $\beta \equiv 1-\sigma_t^2/\sigma_r^2$ in an initially isotropic core can drop as low as $\sim -1$ after the ejection of stars is complete. \begin{figure} \plotfiddle{merritt_2.ps}{16.5cm}{0}{75}{75}{-215}{-80} \caption{\footnotesize Properties of box-like orbits in a triaxial potential containing a central point mass. Each panel shows one octant of an equipotential surface; the $z$ (short) axis is vertical and the $x$ (long) axis is to the left. Orbits were started on this surface with zero velocity. The degree of stochasticity is indicated by the grey scale; white regions correspond to regular orbits. Resonance zones are labelled by their defining integers. The ratio $M_h/M_*$ of black hole mass to enclosed stellar mass is {\bf (a)} 0.33, {\bf (b)} 0.22, {\bf (c)} 0.13, {\bf (d)} 0.088, {\bf (e)} 0.032 and {\bf (f)} 0.0079. The half-mass radius of the model is approximately one. Near the black hole (a), orbits are mostly regular; in the ``zone of chaos'' (c, d), almost all box-like orbits are chaotic; and at large radii (f), regular and stochastic orbits co-exist. } \label{fig2} \end{figure} Stellar velocity anisotropies are difficult to extract from integrated spectra; the task becomes much easier if the form of the gravitational potential is known a priori, since the anisotropy then follows directly from the line-of-sight velocity dispersion profile (Binney \& Mamon 1982).
The velocity polarization predicted by the binary black hole model is therefore best looked for in a galaxy where $M_h$ has been determined independently from the stellar data. M87 is such a galaxy, and in fact the ground-based stellar data, combined with the Macchetto et al. (1997) estimate of $M_h$, imply a substantial anisotropy, $\beta\approx -1$, at $r\lesssim r_h$ (Merritt \& Oh 1997). However the statistical significance of the result is small due to the low central surface brightness of this galaxy. Planned observations of M87 and other galaxies with STIS on HST should soon resolve this issue. \subsection{Nonspherical nuclei} The gravitational potential in the vicinity of the black hole, at $r\lesssim r_g$, is nearly Keplerian; forces from the stars in the cusp constitute a small perturbation, causing orbits to precess. In an axisymmetric galaxy, this precession converts an otherwise closed, elliptical orbit into a tube orbit which fills a doughnut-shaped region around the symmetry axis. Tube orbits avoid the center due to conservation of angular momentum; hence a star on a tube orbit can come only so close to the black hole. The situation is very different, and much more interesting, in non-axisymmetric or triaxial nuclei. While tube orbits still exist in such potentials, orbits that pass arbitrarily close to the center are possible as well. Figure 2 illustrates how the character of ``box-like'' orbits -- orbits with stationary points and (in an integrable potential) filled centers -- varies with distance from the central black hole in a triaxial potential. Near the black hole, $r\lesssim r_g$, almost all box-like orbits are regular, i.e. non-chaotic, respecting three isolating integrals of the motion. Such orbits in the planar geometry have been dubbed ``lenses'' by Sridhar \& Touma (1999); in three dimensions, the orbits are shaped like pyramids with rectangular bases. The black hole lies just inside the vertex of the pyramid, at the focus of the precessing ellipse, and the pyramid's central axis is coincident with the short ($z$) axis of the triaxial figure. A superposition of two such orbits, oriented above and below the $x-y$ plane, is symmetric and looks very much like the regular box orbits in integrable triaxial potentials (de Zeeuw 1985). However the latter are aligned with the {\it long} axis of the triaxial figure, making them much more useful for self-consistently reconstructing the stellar density. At larger radii, $r\gtrsim r_g$, the potential is no longer approximately Keplerian and the integrability is lost -- box-like orbits are generically stochastic (Fig. 2c, d). This ``zone of chaos'' in triaxial potentials extends from a few times $r_g$ outward to much larger radii, as discussed below. Could the predominantly regular orbits -- pyramids and tubes -- within $\sim r_g$ be used to self-consistently reconstruct a triaxial nucleus containing a central black hole? The question is in principle straightforward to answer though as yet no attempts have been made. Two studies (Kuijken 1993; Syer \& Zhao 1998) addressed the self-consistency problem for scale-free, non-axisymmetric disks with divergent central densities.
The major families of box-like orbits in both of these planar potentials are $2:1$ ``bananas''; in spite of having a favorable orientation parallel to the long axis, the banana orbits were found to be too limited in their range of shapes to reproduce the assumed figure. Pyramid orbits have even less favorable shapes and it is reasonable to expect the triaxial self-consistency problem for black hole nuclei to be at least as narrowly constrained as that for scale-free disks. It therefore seems likely that significant triaxiality would be difficult to maintain near the center of a galaxy containing a supermassive black hole: both at radii $r\lesssim r_g$, because of the unfavorable orientation of the (predominantly regular) orbits; and at radii $r\gtrsim r_g$, because of chaos. In advance of more definite theoretical predictions, a number of workers have recently investigated the isophotal shapes of early-type galaxies at radii near $r_b$. Quillen (this volume) finds a change in ellipticity and boxiness between $\sim r_b$ and a few times $r_b$ in the isophotes of two galaxies; the change is in the direction of more elongated and boxier isophotes at large radii. Tymann (cited in Bender, this volume) also finds less boxy isophotes inward of $r_b$ in a sample of early-type galaxies. Ryden (1998; this volume) shows that bright ellipticals as a class exhibit rounder isophotes at radii of a few times $r_b$ than at larger radii; the effect is consistent with more nearly axisymmetric shapes at smaller radii, since oblate spheroids are more likely than triaxial ellipsoids to appear round under random projection. These results suggest a possible change in the shapes or orbital compositions of galaxies near $r_b$, perhaps in the direction of more axisymmetric configurations at smaller radii. As Quillen and Ryden both note, such changes could reflect the constraints that a black hole imposes on the shapes of orbits, or they could be relics of the core formation process, or both. Distinguishing between these possibilities will be easier once the self-consistency problem for black-hole nuclei is better understood. Orbits like the pyramids are not centered on the black hole, a consequence of the near-Keplerian nature of the potential. Sridhar \& Touma (1999) noted that off-center orbits can persist even in nuclei where the black hole itself is offset from the center of the stellar spheroid; furthermore they identified one orbit family for which the orbital offset was in the same direction as that of the spheroid. One could imagine using such orbits to construct self-consistent, lopsided nuclei along the lines of the conceptual model proposed by Tremaine (1995) for M31. One attempt, using an $N$-body code, is described by Jacobs \& Sellwood (this volume); these authors were unable to construct long-lived lopsided disks unless the disk mass was less than a few percent of the mass of the black hole, substantially smaller than the estimated mass of the stellar disk in M31 (Kormendy \& Bender 1999). \section{Large-Scale Dynamics} \subsection{Regular and stochastic orbits} The gravitational influence of a nuclear black hole can extend far beyond $r_g$ in a non-axisymmetric galaxy, since orbital angular momenta are not conserved and stars with arbitrarily large energies can pass close to the center (Gerhard \& Binney 1985).
In a triaxial potential containing a central point mass, the phase space divides naturally into three regions depending on distance from the center (Fig. 2). In the innermost region, $r\lesssim r_g$, the potential is dominated by the black hole and the motion is essentially regular, as discussed above. At intermediate radii (Fig. 2c, d), the black hole acts as a scattering center, rendering almost all of the center-filling orbits stochastic. This ``zone of chaos'' extends outward from a few times $r_g$ to a radius where the enclosed stellar mass is roughly $10^2$ times the mass of the black hole. If $M_h$ exceeds $\sim 10^{-2}M_{sph}$, as it appears to do in a few galaxies, the ``zone of chaos'' includes essentially the entire potential outside of $\sim r_g$. However if $M_h\lesssim 10^{-2}M_{sph}$, there is a third, outer region in which the phase space is a complex mixture of chaotic and regular trajectories (Fig. 2e, f). In the absence of a central point mass, the orbital structure of a triaxial potential resembles this complex outer region at all energies (Schwarzschild 1993; Merritt \& Fridman 1996; Carpintero \& Aguilar 1998; Papaphilippou \& Laskar 1998; Valluri \& Merritt 1998; Wachlin \& Ferraz-Mello 1998). \begin{figure}[t] \plotfiddle{merritt_3.ps}{11.cm}{0}{65}{65}{-215}{-170} \caption{\footnotesize Resonances in triaxial potentials (Merritt \& Valluri 1999). The mass model in {\bf (a)} has a weak ($\gamma=0.5$) cusp and no black hole; in {\bf (b)} the black hole contains $0.3\%$ of the total mass. Both equipotential surfaces lie close to the half-mass radius. The grey scale measures the degree of stochasticity of orbits started with zero velocity on the equipotential surface, as in Fig. 2. Stable resonance zones -- the white bands in (a) and (b) -- are labelled by the order $(m_1,m_2,m_3)$ of the resonance. Panels {\bf (c)} and {\bf (d)} show the pericenter distance $\Delta$ of a set of $10^3$ orbits with starting points along the heavy solid lines in (a) and (b). } \label{fig3} \end{figure} The complexity of the phase space far from the black hole is a consequence of resonances. A resonant orbit is one for which the fundamental frequencies $\omega_i, {i=1,2,3}$ on the invariant torus are ``commensurate,'' satisfying a relation of the form $m_1\omega_1 + m_2\omega_2 + m_3\omega_3 = 0$ with integer $m_i$. In the case of 2D motion, a resonant orbit is closed, returning to its starting point after a time $T=2\pi |m_2|/\omega_1=2\pi |m_1|/\omega_2$. In three dimensions, resonances do not imply closure; instead, a resonant trajectory is confined for all time to a two-dimensional sub-manifold of its 3-torus (Valluri \& Merritt, this volume). Such an orbit is thin, densely filling a membrane in configuration space. Resonant tori play roughly the same role, in three dimensions, that closed orbits play in two, generating families of regular orbits when stable and stochastic orbits when unstable (Merritt \& Valluri 1999). In triaxial potentials, the main source of instability is gravitational deflections from the central point mass; stable orbits are ones that avoid the center. Tube orbits achieve this via a $1:1$ resonance in one of the principal planes; box orbits are generically center-filling, but a box orbit associated with a sufficiently low-order resonance can also avoid the center by a wide enough margin to remain stable.
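As a numerical aside (our own illustration, not taken from the references above), the commensurability condition $m_1\omega_1 + m_2\omega_2 + m_3\omega_3 = 0$ can be searched for by brute force over small integer vectors; in practice the fundamental frequencies would first be extracted from integrated orbits.
\begin{verbatim}
# Brute-force search for low-order resonances m1*w1 + m2*w2 + m3*w3 = 0
# (hypothetical frequencies, purely illustrative)
from itertools import product

def find_resonances(freqs, max_order=8, tol=1e-9):
    """Return small integer vectors (m1, m2, m3) with m . freqs ~ 0."""
    hits = []
    rng = range(-max_order, max_order + 1)
    for m in product(rng, repeat=3):
        if m != (0, 0, 0) and abs(sum(mi * wi for mi, wi in zip(m, freqs))) < tol:
            hits.append(m)
    return hits

# A 2:1 commensurability between the first two frequencies:
print(find_resonances((1.0, 2.0, 1.0 + 3.0 ** 0.5)))  # (2,-1,0) and multiples
\end{verbatim}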
The degree to which this is possible depends on the steepness of the central force gradient (Valluri \& Merritt 1998). When the central singularity is weak -- for instance, a $\rho\propto r^{-\gamma}$ stellar cusp -- box-like orbits can remain stable even when their pericenter distances are small. Phase space then consists of a large number of intersecting and overlapping resonance zones, some of high order, corresponding to thin orbits with many sheets (Fig. 3a). When the central singularity is stronger -- e.g. a central point mass -- only a handful of low-order resonances can maintain sufficient pericenter distance to remain stable (Fig. 3b); high-order resonances typically generate stochastic zones. As the mass of the central point is increased, fewer and fewer of the resonant orbits are able to avoid the center by a wide enough margin and the phase space undergoes a transition to global stochasticity -- essentially all of the box-like trajectories are chaotic. While this transition has only been studied in a handful of model potentials, it seems to occur whenever the black hole mass exceeds $\sim 2-3\%$ of the enclosed mass in stars (Merritt \& Quinlan 1998; Valluri \& Merritt 1998; Merritt \& Valluri 1999). The influence of figure rotation on the orbital composition of triaxial potentials has not yet been systematically studied. Valluri (this volume) finds that figure rotation tends to increase the degree of orbital stochasticity, apparently because the Coriolis forces broaden orbits that would otherwise be thin, driving them into the destabilizing center. \subsection{Black-hole-induced evolution} Stochastic motion introduces a new time scale into galactic dynamics, the mixing time (Kandrup \& Mahon 1994; Kandrup, this volume). Mixing is the process by which a non-uniform distribution of particles in phase space relaxes to a uniform distribution, at least in a coarse-grained sense. A weak sort of mixing, called phase mixing, occurs even in integrable potentials, as particles on adjacent tori gradually move apart (Lynden-Bell 1967); phase mixing is responsible for the fact that the coarse-grained phase space density in relaxed systems is nearly constant around tori. Mixing in chaotic systems can be much more effective than phase mixing, since stochastic trajectories are exponentially unstable and not confined to tori. Chaotic mixing is also irreversible in the sense that an infinitely fine tuning of velocities would be required in order to undo its effects. Mixing driven by a central black hole converts all of the stochastic trajectories at a single energy into an invariant ensemble whose shape is similar to that of an equipotential surface, hence rounder than the figure. Two consequences are likely: the galaxy should become rounder, or at least more axisymmetric, due to the loss of the regular orbits needed to maintain triaxiality; and sharp features in the phase-space distribution should be smoothed out. Mixing induced by a central singularity ceases if the stellar distribution reaches an axisymmetric state since few stars are then able to approach the destabilizing center. \begin{figure}[t] \plotfiddle{merritt_4.ps}{7.5cm}{0}{60}{60}{-190}{-180} \caption{\footnotesize Response of an initially triaxial galaxy to growth of a central point mass (adapted from Merritt \& Quinlan 1998). 
The intermediate-to-long axis ratio $b/a$, defined by the most-bound 50\% of the stars, is plotted as a function of time; $T_{1/2}$ is the period of a circular orbit at the half-mass radius in a spherical model with the same radial distribution of mass. The growth time of the central mass was $\sim 5T_{1/2}$ for the two smaller values of $M_h$ and $\sim 2T_{1/2}$ for the larger value. } \label{fig4} \end{figure} The rate at which mixing would induce such changes can be estimated by integrating trajectories in fixed triaxial potentials. Such experiments (Kandrup \& Mahon 1994; Mahon et al. 1995; Merritt \& Valluri 1996; Valluri \& Merritt 1998) reveal a strong dependence of the mixing rate on the structure of phase space. In regions containing both regular and stochastic trajectories -- e.g. the two outermost shells of Figure 2 -- mixing is inefficient, presumably because the invariant tori of the regular orbits hinder the diffusion of the stochastic orbits. In regions where the motion is almost fully chaotic -- e.g. the ``zone of chaos'' in Figure 2 -- mixing occurs very rapidly, in a few crossing times. Orbits in such regions lose all memory of their initial conditions after just a few oscillations. Norman, May \& van Albada (1985) made one of the first attempts to simulate the large-scale response of a galaxy to the orbital evolution induced by a central black hole. These authors observed only a slight response at the centers of their $N$-body models; however their initial conditions were almost precisely axisymmetric, thus guaranteeing that the influence of the black hole would be limited to $r\lesssim r_g$. More dramatic evolution was seen in a number of subsequent studies of dissipative galaxy formation (e.g. Katz \& Gunn 1991; Udry 1993; Dubinski 1994). These authors used $N$-body codes to simulate the accumulation of mass at the centers of initially triaxial galaxies or halos; in each case, evolution toward more axisymmetric shapes was observed when the central mass exceeded a few percent of the mass in stars. Barnes (1996; this volume) observed a similar response in $N$-body simulations of mergers between disk galaxies: purely stellar-dynamical mergers produced strongly triaxial remnants, but adding as little as $1\%$ of the mass in the form of a dissipative component resulted in nearly axisymmetric final shapes. The evolution toward axisymmetry seen in these simulations is sometimes loosely attributed to ``dissipation,'' but in fact it is purely a stellar dynamical effect: the stars respond to the ``gas'' only insofar as the latter affects the gravitational potential. Merritt \& Quinlan (1998) repeated the Norman et al. (1985) experiments, using initial models that were significantly triaxial at all radii. They observed a global response toward axisymmetry as the mass of the central point was increased; the rate of evolution was found to depend strongly on the ratio of black hole mass to galaxy mass (Fig. 4). When $M_h/M_{sph}$ was $0.3\%$, the galaxy evolved in shape over $\sim 10^2$ orbital periods, whereas increasing $M_h/M_{sph}$ to $3\%$ caused the galaxy to become almost precisely axisymmetric in little more than a crossing time. Rapid evolution toward axisymmetry occurred at any radius whenever the ``black hole'' mass exceeded $\sim 0.025$ times the enclosed stellar mass -- roughly the same mass ratio at which the regular box-like orbits disappear (Fig. 2).
These experiments provide a natural explanation for the absence of significant triaxiality in most elliptical galaxies (Franx, de Zeeuw \& Illingworth 1991). Based on Fig. 4, a galaxy with a ``typical'' black hole, $M_h/M_{sph}\approx 0.003$, would evolve to axisymmetry in roughly 100 periods of the half-mass circular orbit; this time span is of order a galaxy lifetime for elliptical galaxies with $M_v\approx -19$ or $-20$. Fainter ellipticals have generally shorter crossing times and hence should be weakly triaxial at best; brighter ellipticals might still retain their (merger-induced?) triaxial shapes. These predictions are consistent with what little is known about the statistics of elliptical galaxy intrinsic shapes (Ryden 1996; Tremblay \& Merritt 1996; Bak \& Statler 1999). Orbital evolution induced by a black hole should smooth out the stellar phase-space distribution at the same time that it destroys triaxiality. In fact Merritt \& Quinlan (1998) noted a striking change in the isophotal shapes of their $N$-body models, from strongly peanut-shaped at the start to nearly elliptical after the black hole was in place. Boxy or peanut-shaped isophotes are a natural consequence of a non-smooth phase space density (Binney \& Petrou 1982). One might therefore predict a correlation between triaxiality and boxiness in real galaxies, since the orbital evolution induced by a nuclear black hole would tend to eliminate the two in tandem. Kormendy \& Bender (1996) noted just such a correlation; furthermore the majority of boxy ellipticals are bright (Bender, this volume), consistent with the expected, longer time scales for orbital evolution in brighter galaxies. It is remarkable that the minimum black hole mass required to induce rapid evolution in the orbital composition of a triaxial ellipsoid -- $M_h/M_{sph} \approx 2\%$ -- is essentially equal to the maximum value of $M_h/M_{sph}$ observed in real galaxies (Kormendy et al. 1996; Cretton \& van den Bosch 1999). This agreement could be fortuitous, or it could point to a connection between the fueling of black holes and the shapes of their host spheroids.
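As a rough consistency check on this time scale (with fiducial numbers of our own choosing, not values quoted in the references), $100$ periods of the half-mass circular orbit for an $M_v\approx -20$ elliptical is indeed comparable to a galaxy lifetime.
\begin{verbatim}
# 100 * T_1/2 for assumed fiducial values (illustrative only)
import math

R_half = 4.0e3 * 3.086e13   # assumed half-mass radius: 4 kpc, in km
v_circ = 250.0              # assumed circular velocity at R_half [km/s]
T_half = 2.0 * math.pi * R_half / v_circ   # orbital period [s]
yr = 3.156e7                # seconds per year
print("T_1/2 ~ %.1e yr" % (T_half / yr))          # ~1e8 yr
print("100 T ~ %.1e yr" % (100 * T_half / yr))    # ~1e10 yr, ~ a Hubble time
\end{verbatim}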
\section{Introduction} One of the most powerful techniques for calculating Feynman diagrams is based on their presentation in terms of hypergeometric functions. We will call this the hypergeometric function representation of Feynman diagrams. Such a representation can be used for numerical evaluation, construction of the asymptotic expansion, {\it etc}. One of the unsolved problems in this program is obtaining the proper representation for a diagram with an arbitrary number of legs and loops. Direct use of the $\alpha$- or Feynman-parameter representations \cite{Bogolyubov} is not very helpful in solving this problem. The Mellin-Barnes technique is restricted to several topologies \cite{BD,DK01,JKV02,JK04}. The negative dimension approach \cite{nda} has a similar restriction \cite{AGO:nda}. The most investigated diagrams are the master integrals (typically, integrals with the power of each propagator equal to unity). The differential \cite{DE} and/or difference equation \cite{Tarasov00} techniques are usually used to obtain such representations. The known cases include the one-loop diagrams \cite{one-loop,davydychev}, two-loop propagator-type diagrams with special mass and momentum values \cite{BFT}, several three-loop bubble-type diagrams \cite{DK01}, three-loop vertex-type diagrams \cite{3-vertex}, and four-loop bubble-type diagrams \cite{4-loop}. For practical applications, however, it is necessary to construct the $\varepsilon$-expansion (Laurent expansion) of hypergeometric functions. There is some evidence that the multiple polylogarithms \cite{Goncharov,Broadhurst:1998,Borwein:1999}, \begin{equation} \Li{k_1,k_2, \cdots, k_n}{z_1,z_2,\cdots, z_n} = \sum_{m_1 > m_2 > \cdots m_n > 0} \frac{z_1^{m_1} z_2^{m_2} \cdots z_n^{m_n} }{m_1^{k_1} m_2^{k_2} \cdots m_n^{k_n}} \;, \end{equation} are sufficient for parametrizing the coefficients of the $\varepsilon$-expansion of some, but not all\footnote{We are thankful to S. Weinzierl for this information.}, hypergeometric functions \cite{nested2}. In some particular cases, the result of the Laurent expansion can be written in terms of simpler functions. In particular, at the present moment, it is commonly accepted \cite{weinzierl:03} that the generalized hypergeometric functions with an arbitrary set of integer parameters can be presented in terms of harmonic polylogarithms \cite{RV00}. The idea of the proof is based on the properties of {\it nested sums} \cite{nested1}: the analytical coefficients of the $\varepsilon$-expansion of any generalized hypergeometric function with integer parameters can be reduced to a set of some basic {\it harmonic} series of the type \begin{eqnarray} \hspace{-5mm} \sum_{j=1}^\infty \frac{z^j}{j^c} S_{a_1}(j-1) \cdots S_{a_p}(j-1) \;, \label{harmonic} \end{eqnarray} where $z$ is an arbitrary argument and $S_a(j)$ is a harmonic sum defined as $S_a(j) = \sum_{k=1}^j \frac{1}{k^a}$. Series of this type are expressible in terms of the Remiddi-Vermaseren harmonic polylogarithms. However, for hypergeometric functions with half-integer values of parameters, a new type of sums, the {\it multiple} ({\it inverse}) {\it binomial sums} \cite{KV00,JKV02,JK04,DK04}, is generated: \begin{eqnarray} \hspace{-5mm} \Sigma^{(k)}_{a_1,\cdots,a_p; \; b_1,\cdots,b_q;c}(z) &\equiv& \sum_{j=1}^\infty \frac{1}{\left(2j \atop j\right)^k}\frac{z^j}{j^c} S_{a_1}(j-1) \cdots S_{a_p}(j-1) S_{b_1}(2j-1) \cdots S_{b_q}(2j-1) \; . 
\nonumber\\ & & \label{binsum} \end{eqnarray} For particular values of $k$, the sums (\ref{binsum}) are called \begin{eqnarray} k = \left\{ \begin{array}{rl} 0 & \mbox{ {\it generalized harmonic} } \\ 1 & \mbox{ {\it inverse binomial} } \\ -1 & \mbox{ {\it binomial} } \end{array} \right\} \mbox{ sums }. \nonumber \end{eqnarray} At the present moment, there is no proof that arbitrary {\it multiple} ({\it inverse}) {\it binomial sums} can be expressed in terms of harmonic polylogarithms only. This problem was investigated in Ref.\ \cite{DK04} for {\it multiple inverse binomial sums} up to {\it weight 4}. In Ref.\ \cite{MKL04}, it was shown that some of the {\it multiple inverse binomial sums} are not expressible in terms of harmonic polylogarithms of simple argument. In Ref.\ \cite{MKL06}, the results of Ref.\ \cite{DK04} were extended to the case of special combinations of {\it multiple binomial sums} and {\it multiple generalized harmonic sums}. However, the Laurent expansion of a hypergeometric function in general contains combinations of {\it multiple sums}, and such combinations may be expressible in terms of harmonic polylogarithms even when the individual sums are not. From this point of view, the construction of the analytical coefficients of the $\varepsilon$-expansion of hypergeometric functions can be done independently of existing results for each individual {\it multiple sum}.\footnote{We are indebted to A.~Davydychev for discussion on this subject.} The simplest hypergeometric function is the Gauss hypergeometric function $_{2}F_1(a,b;c;z)$ \cite{Gauss,bateman,Slater}. It satisfies the second-order differential equation \begin{eqnarray} \frac{d}{dz} \left( z \frac{d}{dz} + c - 1 \right)w(z) = \left( z \frac{d}{dz} + a \right) \left( z \frac{d}{dz} + b \right)w(z) \;, \end{eqnarray} and admits the series representation \begin{eqnarray} {}_{2}F_1(a,b;c;z) = \sum_{k=0}^\infty \frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!} \;, \end{eqnarray} where $(a)_k = \Gamma(a+k)/\Gamma(a)$ is the Pochhammer symbol. The primary aim of this paper is to prove the following: \noindent \begin{itemize} \item {\bf Theorem 1:} \\ \ {\it The all-order $\varepsilon$-expansions of the Gauss hypergeometric functions \begin{subequations} \label{2F1-Theorem1} \begin{eqnarray} &{}_2F_{1}&(I_1+a\varepsilon, I_2+b\varepsilon; I_3+c \varepsilon;z) \;, \\ &{}_2F_{1}&(I_1+a\varepsilon, I_2+b\varepsilon; I_3+\tfrac{1}{2}+c \varepsilon;z) \;, \\ &{}_2F_{1}&(I_1+\tfrac{1}{2}+a\varepsilon, I_2+b\varepsilon; I_3+c \varepsilon;z) \;, \\ &{}_2F_{1}&(I_1+\tfrac{1}{2}+a\varepsilon, I_2+b\varepsilon; I_3+\tfrac{1}{2} + c \varepsilon;z) \;, \\ &{}_2F_{1}&(I_1+\tfrac{1}{2}+a\varepsilon, I_2+\tfrac{1}{2}+b\varepsilon; I_3 + \tfrac{1}{2} + c \varepsilon;z) \;, \end{eqnarray} \end{subequations} where $\{ I_k \}$ are integers, $a,b,c$ are arbitrary numbers, and $\varepsilon$ is an arbitrarily small parameter, are expressible in terms of Remiddi-Vermaseren harmonic polylogarithms with rational coefficients. } \end{itemize} \section{All-order $\varepsilon$-expansion} \label{allorder} \subsection{Non-zero values of the $\varepsilon$-dependent part} It is well known that any Gauss hypergeometric function may be expressed as a linear combination of two other hypergeometric functions with parameters differing from the original ones by an integer \cite{Gauss,bateman,Slater,nikiforov1,MKL06}. Such a representation will be called a {\it reduction}, and the explicit algorithm will be called a {\it reduction algorithm}. 
Using the algorithm described in Ref.\ \cite{MKL06}, the result of the reduction can be written as \begin{eqnarray} && \hspace{-5mm} P(a,b,c,z) {}_{2}F_{1}(a+I_1,b+I_2;c+I_3;z) = \Biggl \{ Q_1(a,b,c,z) \frac{d}{dz} + Q_2(a,b,c,z) \Biggr\} {}_{2}F_{1}(a,b;c; z) \;, \nonumber \\ \label{decomposition} \end{eqnarray} where $a,b,c$ are any fixed numbers, $P,Q_1,Q_2$ are polynomials in the parameters $a,b,c$ and the argument $z$, and $I_1,I_2,I_3$ are any integers. All the hypergeometric functions (\ref{2F1-Theorem1}) listed in {\bf Theorem 1} can be reduced to functions with $I_1,I_2,I_3$ equal to zero for half-integer values of parameters, and to unity for integer ones. In this way, all the hypergeometric functions of {\bf Theorem 1} are expressible in terms of the following five basic functions and their first derivatives: \begin{subequations} \label{2F1} \begin{equation} \label{2F1-type1} {}_2F_{1}(a_1\varepsilon, a_2\varepsilon; 1+c \varepsilon;z), \qquad {}_2F_{1}(a_1\varepsilon, a_2\varepsilon; \tfrac{1}{2}+f \varepsilon;z), \end{equation} \begin{equation} \label{2F1-type2} {}_2F_{1}(\tfrac{1}{2}\!+\!b\varepsilon, a\varepsilon; 1\!+\!c \varepsilon;z), \quad {}_2F_{1}(\tfrac{1}{2}\!+\!b\varepsilon, a\varepsilon; \tfrac{1}{2} \!+\! f \varepsilon;z), \quad {}_2F_{1}(\tfrac{1}{2}\!+\!b_1\varepsilon, \tfrac{1}{2}\!+\!b_2\varepsilon; \tfrac{1}{2} \!+\! f \varepsilon;z). \end{equation} \end{subequations} It was shown in Ref.\ \cite{MKL06} that only the two hypergeometric functions (and their first derivatives) of type (\ref{2F1-type1}) are algebraically independent. The other three, (\ref{2F1-type2}), are algebraically expressible in terms of ${}_2F_{1}(a_1\varepsilon, a_2\varepsilon; \tfrac{1}{2}+f \varepsilon;z)$. Consequently, in order to prove {\bf Theorem 1}, it is sufficient to show that the analytical coefficients of the $\varepsilon$-expansion of the first two hypergeometric functions (\ref{2F1-type1}) are expressible in terms of Remiddi-Vermaseren polylogarithms. \subsubsection{Integer values of $\varepsilon$-independent parameters} Let us start from the expansion of Gauss hypergeometric functions with integer values of parameters, and consider the function ${}_2F_{1}(a_1\varepsilon, a_2\varepsilon; 1+c \varepsilon;z)$. In Ref.\ \cite{hyper:expansion}, the all-order $\varepsilon$-expansions for this function and its first derivative were constructed in terms of multiple polylogarithms of one variable \cite{Broadhurst:1998,Borwein:1999}. These multiple polylogarithms may be expressed as iterated integrals\footnote{ Recall that multiple polylogarithms can be expressed as iterated integrals of the form \begin{eqnarray} \Li{k_1, \cdots, k_n}{z} & = & \int_0^z \underbrace{\frac{dt}{t} \circ \frac{dt}{t} \circ \cdots \circ \frac{dt}{t}}_{k_1-1 \mbox{ times}} \circ \frac{dt}{1-t} \circ \cdots \circ \underbrace{\frac{dt}{t} \circ \frac{dt}{t} \circ \cdots \circ \frac{dt}{t}}_{k_n-1 \mbox{ times}} \circ \frac{dt}{1-t} \;, \label{iterated} \end{eqnarray} where, by definition \begin{eqnarray} \int_0^z \underbrace{\frac{dt}{t} \circ \frac{dt}{t} \circ \cdots \circ \frac{dt}{t}}_{k_1-1 \mbox{ times}} \circ \frac{dt}{1-t} = \int_0^z \frac{dt_1}{t_1} \int_0^{t_1} \frac{dt_2}{t_2} \cdots \int_0^{t_{k-2}} \frac{dt_{k_1-1}}{t_{k_1-1}} \int_0^{t_{k_1-1}} \frac{dt_{k_1}}{1-t_{k_1}} \;. 
\end{eqnarray} The integral (\ref{iterated}) is an iterated Chen integral \cite{Chen} (see also \cite{dirk}) w.r.t.\ the two differential forms $\omega_0 = dz/z$ and $\omega_1 = \frac{dz}{1-z}$, so that \begin{eqnarray} \Li{k_1, \cdots, k_n}{z} & = & \int_0^z \omega_0^{k_1-1} \omega_1 \cdots \omega_0^{k_n-1} \omega_1 \;. \label{chen} \end{eqnarray} } and have the expansion \begin{equation} \Li{k_1,k_2, \cdots, k_n}{z} = \sum_{m_1 > m_2 > \cdots m_n > 0} \frac{z^{m_1}}{m_1^{k_1} m_2^{k_2} \cdots m_n^{k_n}} \;. \label{mp} \end{equation} Similar results were also derived (without the explicit form of the coefficients) in Ref.\ \cite{weinzierl:03} via the nested sums approach. We will follow the idea of Ref.\ \cite{hyper:expansion}. The Gauss hypergeometric function ${}_2F_{1}(a_1\varepsilon, a_2\varepsilon; 1+c \varepsilon;z)$ is the solution of the differential equation \begin{eqnarray} \frac{d}{dz} \left( z \frac{d}{dz} + c \varepsilon \right) w(z) = \left( z \frac{d}{dz} + a_1 \varepsilon \right) \left( z \frac{d}{dz} + a_2 \varepsilon\right) w(z) \;, \label{gauss:diff} \end{eqnarray} with boundary conditions $w(0)=1$ and $\left. z \frac{d}{dz} w(z)\right|_{z=0} = 0$. Eq.\ (\ref{gauss:diff}) is valid at each order of $\varepsilon$, so that in terms of the coefficient functions $w_k(z)$ defined by \begin{equation} w(z) = \sum_{k=0}^\infty w_k(z) \varepsilon^k, \label{epsilon-expansion} \end{equation} it can be written \begin{eqnarray} (1-z) \frac{d}{dz} \left( z \frac{d}{dz} \right) w_k(z) = \left( a_1 + a_2 - \frac{c}{z} \right) \left( z \frac{d}{dz} \right) w_{k-1}(z) + a_1 a_2 w_{k-2}(z) \label{gauss:diff2} \end{eqnarray} for $k \geq 0$, with \begin{subequations} \label{wspecial} \begin{eqnarray} \label{w0} & w_0(z) &= 1\;, \\ & w_k(z) &= 0\;, \qquad k<0. \label{wneg} \end{eqnarray} \end{subequations} The boundary conditions for the coefficient functions are \begin{subequations} \label{boundary} \begin{eqnarray} \label{boundary1} & w_k(0) = 0\;,& \qquad k \geq 1 \;, \\ & \left. z \frac{d}{dz} w_k(z) \right|_{z=0} = 0\;,& \qquad k \geq 0 \; . \label{boundary2} \end{eqnarray} \end{subequations} Let us introduce a new function $\rho(z)$ defined by\footnote{ We may note that $$ {}_{2}F_1\left(\begin{array}{c|} 1+a_1\varepsilon, 1\!+\!a_2\varepsilon\\ 2 \!+\! c \varepsilon \end{array} ~z \right) = \frac{1+c\varepsilon}{z} \sum_{k=0}^\infty \left[ \frac{\rho_{k+2}(z)}{a_1 a_2} \right] \varepsilon^k \;. $$ } \begin{equation} \rho(z) = z \frac{d}{dz} w(z) = \sum_{k=0}^\infty \rho_k(z) \varepsilon^k \;, \end{equation} where the coefficient functions satisfy \begin{equation} \rho_k(z) = z \frac{d}{dz} w_k(z) \;. \end{equation} The boundary conditions for these new functions follow from Eq.\ (\ref{boundary}): \begin{equation} \rho_k(0) = 0 \;, \qquad k \geq 0 \;. \label{boundary:rho} \end{equation} Eq.\ (\ref{gauss:diff2}) can be rewritten as a system of two first-order differential equations: \begin{eqnarray} (1-z) \frac{d}{dz} \rho_i (z) & = & \left(a_1 \!+\! a_2 \!-\! \frac{c}{z} \right) \rho_{i-1}(z) \!+\! a_1 a_2 w_{i-2}(z) \;, \nonumber \\ z \frac{d}{dz} w_i(z) & = & \rho_i(z) \;. \label{gauss:diff3} \end{eqnarray} The solution of this system can be presented in an iterated form: \begin{eqnarray} \rho_i (z) & = & \left(a_1 \!+\! a_2 \!-\! c \right) \int_0^z \frac{dt}{1-t} \rho_{i-1}(t) \!+\! a_1 a_2 \int_0^z \frac{dt}{1-t} w_{i-2}(t) \!-\! c \left[ w_{i-1}(z) \!-\! w_{i-1}(0) \right] \;, \quad i \geq 1 \;, \nonumber \\ w_i (z) & = & \int_0^z \frac{dt}{t} \rho_i(t) \;, \quad i \geq 1 \;. 
\label{w} \end{eqnarray} Taking into account that $w_0(z)=1$ and $\rho_0(z)=0$ (the $\varepsilon$-expansion of $\rho(z)$ begins with the term linear in $\varepsilon$), we obtain the first few coefficients, \begin{subequations} \label{first} \begin{eqnarray} \rho_1(z) &=& w_1(z) = 0, \\ \frac{\rho_2(z)}{a_1 a_2} &=& - \ln(1-z) \equiv H(1;z) \;,\\ \frac{w_2(z)}{a_1 a_2} &=& \Li{2}{z} \equiv H(0,1;z) \;,\\ \frac{\rho_3(z)}{a_1 a_2} &=& \gamma_c \frac{1}{2} \ln^2(1-z) - c \Li{2}{z} \equiv \gamma_c H(1,1;z) \!-\! c H(0,1;z)\;,\\ \frac{w_3(z)}{a_1 a_2 } &=& \gamma_c \Snp{1,2}{z} \!-\! c \Li{3}{z} \equiv \gamma_c H(0,1,1;z) \!-\! c H(0,0,1;z)\;, \end{eqnarray} \end{subequations} where we have defined $\gamma_c = a_1 + a_2 - c$, and $\Li{n}{z}$ and $S_{a,b}(z)$ are the classical and Nielsen polylogarithms \cite{Lewin,Nielsen}, respectively: $$ S_{a,b}(z) = \frac{(-1)^{a+b-1}}{(a-1)! \; b!} \!\int\limits_0^1 \mbox{d} \xi\; \frac{\ln^{a-1}\!\xi \ln^b (1\!-\!z\xi)}{\xi} \; , \quad S_{a,1}(z) = \Li{a+1}{z}. $$ The functions $H(\vec{A};z)$ are the Remiddi-Vermaseren harmonic polylogarithms \cite{RV00}, and $\vec{A}$ is a multiple index including only entries $0$ and $1$. From the representation (\ref{w}) and the results for the first few coefficients (\ref{first}) we may derive the following observations: \begin{itemize} \item {\bf Corollary 1:} {\it The all-order $\varepsilon$-expansion of the function ${}_2F_{1}(a_1 \varepsilon, a_2 \varepsilon; 1+c \varepsilon;z)$ may be written in terms of harmonic polylogarithms $H_{\vec{A}}(z)$ only, where the multiple index $\vec{A}$ includes only the values $0$ and $1$. } \item {\bf Corollary 2:} {\it The analytical coefficient of $\varepsilon^k$ in the expansion of ${}_2F_{1}(a_1 \varepsilon, a_2 \varepsilon; 1+c \varepsilon;z)$ includes only functions of weight $k$ with numerical coefficients. } \item {\bf Corollary 3:} {\it The non-constant terms of the $\varepsilon$-expansion of ${}_2F_{1}(a_1 \varepsilon, a_2 \varepsilon; 1+c \varepsilon;z)$ are proportional to the product $a_1 a_2$ in any order of $\varepsilon$. } \end{itemize} The first and last statements follow from the representation\footnote{{\bf Corollary 3} also follows from general properties of hypergeometric functions.} (\ref{w}), the explicit values of the coefficient functions $w_k(z)$, $k=0,1,2$ (Eqs.\ (\ref{first})), and the definition of harmonic polylogarithms \cite{RV00}. The second statement follows from the form of the solution of Eq.~(\ref{w}). The relation between harmonic polylogarithms $H_{\vec{A}}(z)$, with multiple index $\vec{A}$ including only $0$ and $1$, and multiple polylogarithms of one variable (Eq.\ (\ref{mp})) is well known \cite{nested1} and follows from the proper definition (see Sec.\ 2 in Ref.\ \cite{RV00}): \begin{eqnarray} \Li{k_1, k_2, \cdots, k_n}{z} & = & H( \underbrace{0,0, \cdots, 0,}_{k_1-1 \mbox{ times}} 1, \underbrace{0,0, \cdots, 0,}_{k_2-1 \mbox{ times}} 1, \cdots \underbrace{0,0, \cdots, 0,}_{k_n-1 \mbox{ times}} 1;z) \;. \end{eqnarray} By continued iteration of Eq.\ (\ref{w}) and Eq.\ (\ref{first}), we have reproduced all coefficients of the $\varepsilon$-expansion of the Gauss hypergeometric function presented in Eq.\ (4.7) of \cite{MKL06}. For the coefficient functions $\rho_4(z)$, $w_4(z)$, and $\rho_5(z)$, we find a more compact form. We also obtain the higher-order terms $w_5(z)$ and $\rho_6(z)$ of the $\varepsilon$-expansion. 
The results are\footnote{The FORM\cite{FORM} representation of these expressions can be extracted from Ref.\ \cite{MKL}.} \begin{eqnarray} && \frac{\rho_4(z)}{a_1 a_2} = - \frac{1}{6} \gamma_c^2 \ln^3(1 \!-\! z) \!+\! \left( c \gamma_c \!-\! a_1 a_2 \right) \ln(1 \!-\! z) \Li{2}{z} \!+\! c^2 \Li{3}{z} \!+\! \left( c \gamma_c \!-\! 2 a_1 a_2 \right) \Snp{1,2}{z} \;, \nonumber \\ && \frac{w_4(z)}{a_1 a_2 } = c^2 \Li{4}{z} - \frac{1}{2} \left( c \gamma_c - a_1 a_2 \right)\left[ \Li{2}{z} \right]^2 + \gamma_c^2 \Snp{1,3}{z} + \left( c \gamma_c - 2 a_1 a_2 \right) \Snp{2,2}{z} \;, \label{first:new} \\ && \frac{\rho_5(z)}{a_1 a_2} = \frac{1}{24} \gamma_c^3 \ln^4(1-z) \!-\! c^3 \Li{4}{z} \!-\! c \gamma_c^2 \Snp{1,3}{z} \nonumber \\ && \hspace{15mm} - c \left( c \gamma_c \!-\! a_1 a_2 \right) \ln (1-z) \Li{3}{z} \!-\! c \left( c \gamma_c \!-\! 2 a_1 a_2 \right) \Snp{2,2}{z} \nonumber \\ && \hspace{15mm} - \gamma_c \left( c \gamma_c \!-\! a_1 a_2 \right) \ln(1-z) \Biggl[ \frac{1}{2} \ln(1-z) \Li{2}{z} + \Snp{1,2}{z} \Biggr] \;, \\ && \frac{w_5(z)}{a_1 a_2 } = \gamma_c^3 \Snp{1,4}{z} - c^3 \Li{5}{z} - c \gamma_c^2 \Snp{2,3}{z} - c \left( c \gamma_c - 2 a_1 a_2 \right) \Snp{3,2}{z} \nonumber \\ && \hspace{15mm} + \left( c \gamma_c - a_1 a_2 \right) \Biggl[ \gamma_c \Li{2}{z} \Snp{1,2}{z} - \gamma_c F_1(z) - c F_2(z) \Biggr] \;, \\ && \frac{\rho_6(z)}{a_1 a_2} = - \gamma_c^4 \frac{1}{120} \ln^5 (1-z) + c^4 \Li{5}{z} + c^2 \gamma_c^2 \Snp{2,3}{z} \nonumber \\ && \hspace{15mm} + \frac{1}{6} \gamma_c^2 \left( c \gamma_c - a_1 a_2 \right) \Biggl[ \ln^3(1-z) \Li{2}{z} + 3 \ln^2(1-z) \Snp{1,2}{z} + 6 \ln(1-z) \Snp{1,3}{z} \Biggr] \nonumber \\ && \hspace{15mm} + \frac{1}{2} c \left( c \gamma_c - a_1 a_2 \right) \ln(1-z) \Biggl[ \gamma_c \ln(1-z) \Li{3}{z} + 2 c \Li{4}{z} \Biggr] \nonumber \\ && \hspace{15mm} - (c-a_1) (c-a_2) \left( c \gamma_c - 2 a_1 a_2 \right) \ln(1-z) \Snp{2,2}{z} \nonumber \\ && \hspace{15mm} + a_1 a_2 \left( c \gamma_c - a_1 a_2 \right) \Biggl[ \frac{1}{2} \ln(1-z) \left[ \Li{2}{z} \right]^2 - 2 \Li{2}{z} \Snp{1,2}{z} + 2 F_1(z) \Biggr] \nonumber \\ && \hspace{15mm} + \left( c \gamma_c - 2 a_1 a_2 \right) \Biggl[ \gamma_c^2 \Snp{1,4}{z} + c^2 \Snp{3,2}{z} \Biggr] \;, \label{first:new:last} \end{eqnarray} where we have introduced two new functions: \begin{eqnarray} F_1(z) & = & \int_0^z \frac{dx}{x} \ln^2(1 - x) \Li{2}{x} \; , \\ F_2(z) & = & \int_0^z \frac{dx}{x} \ln(1 - x) \Li{3}{x} \; . \end{eqnarray} There is an algebraic relation\footnote{We are indebted to A.~Davydychev for this relation.} between these two functions: \begin{eqnarray} F_2(1-z) & = & F_1(z) - 2 \ln z \Snp{1,3}{z} + 2 \Snp{2,3}{z} - \Li{2}{z} \Snp{1,2}{z} - \ln z \ln(1-z) \Snp{1,2}{z} \nonumber \\ && - \frac{1}{6} \ln^3(1-z) \ln^2 z - \frac{1}{2} \ln z \ln^2 (1-z) \Li{2}{z} + \frac{1}{2} \zeta_2 \ln^2 (1-z) \ln z \nonumber \\ && - \zeta_2 \Snp{1,2}{z} - \zeta_3 \Li{2}{1-z} - \zeta_5 \;, \end{eqnarray} where $$ F_1(1) = 2 \zeta_3 \zeta_2 - \zeta_5 \sim 2.9176809 \cdots \;. $$ In this way, at the order of {\bf weight 5}, one new function\footnote{Compare with results of \cite{HypExp}.}, $F_1$, which is not expressible in terms of Nielsen polylogarithms, is generated by the Laurent expansion of a Gauss hypergeometric function with integer values of parameters. In general, the explicit form of this function is not uniquely determined, and the result may be presented in another form by using a different subset of harmonic polylogarithms. 
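The iteration (\ref{w}) is also easy to automate symbolically. The following minimal {\tt sympy} sketch (our own illustration, not the code used to obtain the results above) builds the coefficient functions as power series truncated at order $N$, seeding $w_0=1$ and $\rho_1=w_1=0$ as in Eq.~(\ref{first}), and checks $w_2(z)=a_1 a_2 \Li{2}{z}$ term by term; running the loop to higher $i$ produces the Taylor coefficients against which the closed forms (\ref{first:new})--(\ref{first:new:last}) can be checked.
\begin{verbatim}
# Iteration of Eq. (w) on truncated power series in z (sympy sketch)
import sympy as sp

z, t, a1, a2, c = sp.symbols('z t a1 a2 c')
N = 6  # truncation order of all series in z

def tr(expr):
    # Taylor polynomial of expr about z = 0, up to (but excluding) z**N
    return sp.series(expr, z, 0, N).removeO()

w   = {-1: sp.S(0), 0: sp.S(1), 1: sp.S(0)}   # w_0 = 1, w_1 = 0
rho = { 0: sp.S(0), 1: sp.S(0)}               # rho_0 = rho_1 = 0

for i in range(2, 5):
    src    = tr(((a1 + a2 - c)*rho[i-1] + a1*a2*w[i-2])/(1 - z))
    rho[i] = sp.integrate(src.subs(z, t), (t, 0, z)) - c*w[i-1]
    w[i]   = sp.integrate(tr(rho[i]/z).subs(z, t), (t, 0, z))

# w_2 should agree with a1*a2*Li_2(z) = a1*a2*sum_m z^m/m^2 order by order
li2 = a1*a2*sum(z**m/sp.S(m)**2 for m in range(1, N))
print(tr(sp.expand(w[2] - li2)))   # expect 0
\end{verbatim}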
\subsubsection{Half-integer values of $\varepsilon$-independent parameters} \label{half} Let us apply a similar analysis to the second basis hypergeometric function \begin{eqnarray} && {}_{2}F_1\left(\begin{array}{c|} a_1 \varepsilon, a_2\varepsilon\\ \frac{1}{2} \!+\! f \varepsilon \end{array} ~z \right) \; . \label{gauss:1a} \end{eqnarray} In this case, the differential equation has the form \begin{eqnarray} \frac{d}{dz} \left( z \frac{d}{dz} -\frac{1}{2} + f \varepsilon \right) w(z) = \left( z \frac{d}{dz} + a_1 \varepsilon \right) \left( z \frac{d}{dz} + a_2 \varepsilon\right) w(z) \;, \label{gauss:diff:a} \end{eqnarray} with the same boundary conditions $w(0)=1$ and $\left. z \frac{d}{dz} w(z)\right|_{z=0} = 0$. Using the $\varepsilon$-expanded form (\ref{epsilon-expansion}) of the solution, and noting that Eq.~(\ref{gauss:diff:a}) is valid at each order of the $\varepsilon$-expansion, we may rewrite Eq.~(\ref{gauss:diff:a}) as \begin{eqnarray} \left[ (1-z) \frac{d}{dz} - \frac{1}{2z} \right] \left( z \frac{d}{dz}\right) w_i(z) = \left[ (a_1 \!+\! a_2) \!-\! \frac{f}{z} \right] \left( z \frac{d }{d z} \right) w_{i-1} (z) \!+\! a_1a_2 w_{i-2}(z) \;. \label{gauss:diff:2a} \end{eqnarray} Let us introduce the new variable $y$ such that\footnote{The form of this variable follows from the analysis performed in Refs.\ \cite{DK01,DK04,MKL04}.} \begin{eqnarray} y = \frac{1-\sqrt{\frac{z}{z-1}}}{1+\sqrt{\frac{z}{z-1}}} \;, \quad z = -\frac{(1-y)^2}{4y} \;, \quad 1-z = \frac{(1+y)^2}{4y} \;, \quad z \frac{d}{dz} = - \frac{1-y}{1+y} y \frac{d}{dy}\;, \label{conformal} \end{eqnarray} and define a set of new functions $\rho_i(y)$ such that\footnote{ We may note that $$ {}_{2}F_1\left(\begin{array}{c|} 1+a_1\varepsilon, 1\!+\!a_2\varepsilon\\ \tfrac{3}{2} \!+\! f \varepsilon \end{array} ~z \right) = \frac{1+2f\varepsilon}{2z} \frac{1-y}{1+y} \sum_{k=0}^\infty \left[ \frac{\rho_{k+2}(y)}{a_1 a_2} \right] \varepsilon^k \;. $$ } \begin{equation} z \frac{d}{dz} w_i(z) \equiv \left( -\frac{1-y}{1+y} y \frac{d}{dy} \right) w_i(y) = \frac{1-y}{1+y} \rho_i(y) \;, \label{rho:a} \end{equation} and, as in the previous case, \begin{equation} \rho(y) = z \frac{d}{dz} w(z) = \sum_{k=0}^\infty \rho_k(y) \varepsilon^k \;. \end{equation} In terms of the new variable $y$, Eq.~(\ref{gauss:diff:2a}) can be written as a system of two first-order differential equations: \begin{eqnarray} y \frac{d}{dy} \rho_i (y) & = & \left(a_1 \!+\! a_2 \right) \frac{1-y}{1+y} \rho_{i-1}(y) + 2f \left( \frac{1}{1-y} - \frac{1}{1+y} \right) \rho_{i-1}(y) \!+\! a_1 a_2 w_{i-2}(y) \;, \nonumber \\ y \frac{d}{dy} w_i(y) & = & - \rho_i(y) \;. \label{gauss:diff3a} \end{eqnarray} The solution of these differential equations for the functions $w_i(y)$ and $\rho_i(y)$ has the form \begin{eqnarray} \rho_i(y) & = & \int_1^y dt \left[ 2f \frac{1}{1-t} \!-\! 2 (a_1\!+\!a_2\!-\!f) \frac{1}{1+t}\right] \rho_{i-1}(t) - (a_1 \!+\! a_2) \left[ w_{i-1}(y) \!-\! w_{i-1}(1) \right] \nonumber \\ && + a_1 a_2 \int_1^y \frac{dt}{t} w_{i-2}(t) \;, \quad i \geq 1 \;, \nonumber \\ w_i(y) & = & - \int_1^y \frac{dt}{t} \rho_i(t) \;, \quad i \geq 1 \;. \label{w:a} \end{eqnarray} The point $z=0$ transforms to the point $y = 1$ under the transformation (\ref{conformal}), so that the boundary conditions are \begin{eqnarray} \begin{array}{cl} w_k(1) = 0\;, & k \geq 1 \;, \\ \rho_k(1) = 0\;, & k \geq 0 \;. 
\label{boundary:a} \end{array} \end{eqnarray} The first several coefficients of the $\varepsilon$-expansion can be calculated quite easily by using $w_0(y)=1$ and $\rho_0(y)=0$: \begin{subequations} \label{first:a} \begin{eqnarray} \rho_1(y) &=& w_1(y) = 0, \\ \frac{\rho_2(y)}{a_1 a_2} &=& \ln(y) \equiv H(0;y) \;,\\ \frac{w_2(y)}{a_1 a_2} &=& -\frac{1}{2} \ln^2 (y) \equiv - H(0,0;y) \;. \end{eqnarray} \end{subequations} Continuing these iterations, we may reproduce the coefficients of the $\varepsilon$-expansion of the Gauss hypergeometric function (\ref{gauss:1a}) presented in Eq.~(4.2) of Ref.\ \cite{MKL06}. Since the length of the expressions obtained for the coefficient functions $ \rho_3(y), w_3(y), \rho_4(y), w_4(y), \rho_5(y), w_5(y) $ is similar to those published in Eq.~(4.1) of Ref.\ \cite{MKL06}, we do not reproduce them here.\footnote{M.Y.K. is grateful to M.~Rogal for pointing out a mistake in Eq.~(4.1) of Ref.\ \cite{MKL06}: In the $\varepsilon^2$ term, the coefficient should be ``$-2(3 f-a_1-a_2)$'' instead of ``$-2(3f-2a_1-2a_2)$''.} The higher-order terms of the $\varepsilon$-expansion are relatively lengthy and therefore will also not be presented here. Unfortunately, as in the previous case, we are unable to calculate the $k$-th coefficient of the $\varepsilon$-expansion without knowledge of the previous ones. From the representation (\ref{w:a}) we deduce the following result: \begin{itemize} \item {\bf Corollary 4:} {\it The all-order $\varepsilon$-expansion of function (\ref{gauss:1a}) can be written in terms of harmonic polylogarithms $H_{\vec{A}}(y)$ of the variable $y$ defined in (\ref{conformal}) and multiple index $\vec{A}$ with entries taking values $0$, $1$ and $-1$. } \end{itemize} This statement follows from the representation (\ref{w:a}), the values of the coefficient functions $w_k(y)$, $k=0,1,2$ (see Eqs.~(\ref{boundary:a}), (\ref{first:a})), properties of harmonic polylogarithms, and the relation between powers of logarithms and harmonic polylogarithms. Also, {\bf Corollary 2} and {\bf Corollary 3} are valid for the hypergeometric function (\ref{gauss:1a}). We would like to mention that, in contrast to Eq.~(\ref{w}), Eq.~(\ref{w:a}) contains a new type of function, coming from the integral $\int f(t) dt/(1+t)$. Another difference is that the first nontrivial coefficient function, $\rho_2(y)$, is equal to $\ln(y)$, instead of $-\ln(1-z)$ as in the previous case. 
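As a small numerical cross-check of the change of variable (\ref{conformal}) (our own script; the branch of the square root is immaterial, since only its square enters the inverse map), one may verify that $z \mapsto y \mapsto z$ is the identity and that $z=0$ indeed corresponds to $y=1$.
\begin{verbatim}
# Check of Eq. (conformal): y(z) and its inverse z(y) = -(1-y)^2/(4y)
import cmath

def y_of_z(z):
    s = cmath.sqrt(z/(z - 1))
    return (1 - s)/(1 + s)

def z_of_y(y):
    return -(1 - y)**2/(4*y)

for z in (-0.7, -0.1, 0.3 + 0.2j):   # sample points
    assert abs(z_of_y(y_of_z(z)) - z) < 1e-12

print(y_of_z(1e-12))   # close to 1: the point z = 0 maps to y = 1
\end{verbatim}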
It was shown in Ref.\ \cite{RV00} that terms containing the logarithmic singularities can be explicitly factorised (see Eqs.~(21)-(22) in Ref.\ \cite{RV00}), so that the coefficient functions $w_k(y)$ and $\rho_k(y)$ from Eq.~(\ref{w:a}) have the form \begin{eqnarray} w_k(y) & = & \sum_{j=0}^k c(\vec{s}, \vec{\sigma},k) \ln^{k-j}(y) \left[ \Li{\left( \vec{\sigma} \atop \vec{s} \right)}{y} - \Li{\left( \vec{\sigma} \atop \vec{s} \right)}{1} \right] \;, \nonumber \\ \rho_k(y) & = & \sum_{j=0}^{k-1} \tilde{c}(\vec{s}, \vec{\sigma},k) \ln^{k-j}(y) \left[ \Li{\left( \vec{\sigma} \atop \vec{s} \right)}{y} - \Li{\left( \vec{\sigma} \atop \vec{s} \right)}{1} \right] \;, \end{eqnarray} where $c(\vec{s}, \vec{\sigma},k)$ and $\tilde{c}(\vec{s}, \vec{\sigma},k)$ are numerical coefficients, $\vec{s}$ and $\vec{\sigma}$ are multi-indices, $\vec{s}=(s_1, \cdots s_n)$ and $\vec{\sigma} = (\sigma_1, \cdots, \sigma_n)$, $\sigma_k$ belongs to the set of the square roots of unity, $\sigma_k = \pm 1$, and $\Li{\left( \vec{\sigma} \atop \vec{s} \right)}{y}$ is a coloured multiple polylogarithm of one variable \cite{Goncharov,Broadhurst:1998,Borwein:1999}, defined as \begin{equation} \Li{\left( \sigma_1, \sigma_2, \cdots, \sigma_n \atop s_1, s_2, \cdots, s_n \right)}{z} = \sum_{m_1 > m_2 > \cdots m_n > 0} z^{m_1} \frac{\sigma_1^{m_1} \cdots \sigma_n^{m_n} }{m_1^{s_1} m_2^{s_2} \cdots m_n^{s_n}} \;. \label{colored} \end{equation} It has an iterated integral representation w.r.t.\ the three differential forms \begin{eqnarray} \omega_0 & = & \frac{dy}{y}, \quad \sigma=0, \nonumber \\ \omega_\sigma & = & \frac{\sigma dy}{1- \sigma y}, \quad \sigma= \pm 1, \end{eqnarray} so that \begin{equation} \Li{\left( \sigma_1, \sigma_2, \cdots, \sigma_k \atop s_1, s_2, \cdots, s_k \right)}{y} = \int_0^y \omega_0^{s_1-1} \omega_{\sigma_1} \omega_0^{s_2-1} \omega_{\sigma_1 \sigma_2} \cdots \omega_0^{s_k-1} \omega_{\sigma_1 \sigma_2 \cdots \sigma_k} \;, \quad \sigma_k^2 = 1\;. \label{color} \end{equation} The values of coloured polylogarithms of unit argument were studied in Refs.\ \cite{Borwein1996,color}. \subsection{Zero values of the $\varepsilon$-dependent part of upper parameters} In the case when one of the upper parameters of the Gauss hypergeometric function is a positive integer, the result of the reduction has the simpler form (compare with Eq.~(\ref{decomposition})): \begin{eqnarray} && \hspace{-5mm} P(b,c,z) {}_{2}F_{1}(I_1,b+I_2;c+I_3;z) = Q_1(b,c,z) {}_{2}F_{1}(1,b;c; z) + Q_2(b,c,z) \;, \label{decomposition:integer} \end{eqnarray} where $b,c$ are any fixed numbers, $P,Q_1,Q_2$ are polynomials in the parameters $b,c$ and the argument $z$, and $I_1,I_2,I_3$ are any integers.\footnote{The proper algebraic relations for the reduction are given in Ref.\ \cite{MKL06}.} In this case, it is enough to consider the following two basis functions: ${}_2F_{1}(1, 1+a\varepsilon; 2+c \varepsilon;z)$ and ${}_2F_{1}(1, 1+a\varepsilon; \frac{3}{2}+f \varepsilon;z)$. The $\varepsilon$-expansions of these functions can be derived from the proper solutions given by Eq.~(\ref{first}) and Eq.~(\ref{first:a}), respectively, using the relations \begin{eqnarray} \vspace{-5mm} {}_{2}F_1\left(\begin{array}{c|} 1, 1\!+\!a_2\varepsilon\\ 2 \!+\! c \varepsilon \end{array} ~z \right) & = & \lim_{a_1 \to 0 } \frac{1+c\varepsilon}{a_1 a_2 \varepsilon^2} \frac{d}{dz} {}_{2}F_1\left(\begin{array}{c|} a_1 \varepsilon, a_2\varepsilon\\ 1 \!+\! c \varepsilon \end{array} ~z \right) = \frac{1+c\varepsilon}{z} \sum_{k=0}^\infty \left[ \left. 
\frac{\rho_{k+2}(z)}{a_1 a_2} \right|_{a_1 = 0} \right] \varepsilon^k \nonumber \\ \label{integer_1} \end{eqnarray} and \begin{eqnarray} \hspace{-3mm} {}_{2}F_1\left(\begin{array}{c|} 1, 1\!+\!a_2\varepsilon\\ \frac{3}{2} \!+\! f \varepsilon \end{array} ~z \right) & = & \lim_{a_1 \to 0 } \frac{1 \!+\! 2f\varepsilon}{2 a_1 a_2 \varepsilon^2} \frac{d}{dz} {}_{2}F_1\left(\begin{array}{c|} a_1 \varepsilon, a_2\varepsilon\\ \frac{1}{2} \!+\! f \varepsilon \end{array} ~z \right) = \frac{1+2f\varepsilon}{2z} \sum_{k=0}^\infty \left[ \left. \frac{\rho_{k+2}(y)}{a_1 a_2} \right|_{a_1 = 0} \right] \varepsilon^k \;, \nonumber \\ \label{integer_1a} \end{eqnarray} where we have used the differential relation $$ \frac{d}{dz} \;{}_{2}F_1\left(\begin{array}{c|} a, b\\ c \end{array} ~z \right) = \frac{ab}{c} \;{}_{2}F_1\left(\begin{array}{c|} 1+a, 1+b\\ 1+c \end{array} ~z \right) \;, $$ and the brackets mean that in the proper solution we can set $a_1=0$. The functions $\rho_k$ are given by Eq.~(\ref{first}) and Eq.~(\ref{first:a}), respectively. Due to {\bf Corollary 3}, the limit $a_1 \to 0$ must exist. The case when both upper parameters are integers may be handled in a similar manner. {{\bf Theorem 1} is thus proved.} $\blacksquare$ \section{Some particular cases} \subsection{The generalized log-sine functions and their generalization} \label{imaginary} For the case $0 \leq z \leq 1$ the variable $y$ defined in (\ref{conformal}) lies on the complex unit circle, $y=\exp (i \theta)$. In this case, the harmonic polylogarithms can be split into real and imaginary parts (see the discussion in Appendix A of Ref.\ \cite{MKL04}), as in the case of classical polylogarithms \cite{Lewin}. Let us introduce the trigonometric parametrization $z = \sin^2 \tfrac{\theta}{2}.$ In this case, the solution of the proper differential equations (\ref{w}) and (\ref{w:a}) can be written in the form \begin{eqnarray} \rho_i(\theta) & = & (a_1 \!+\! a_2 \!-\! c) \int_0^\theta d \phi \frac{\sin \frac{\phi}{2}}{\cos \frac{\phi}{2}} \rho_{i-1}(\phi) \!+\! a_1 a_2 \int_0^\theta d \phi \frac{\sin \frac{\phi}{2}}{\cos\frac{\phi}{2}} w_{i-2}(\phi) \!-\! c w_{i-1}(\theta) \;, \quad i \geq 1 \;, \nonumber \\ w_i(\theta) & = & \int_0^\theta d \phi \frac{\cos \frac{\phi}{2}}{\sin \frac{\phi}{2}} \rho_i(\phi) \;, \quad i \geq 1 \;, \label{w:geom} \end{eqnarray} and \begin{eqnarray} \rho_i(\theta) & = & (a_1 \!+\! a_2 \!-\! f) \int_0^\theta d \phi \frac{\sin \frac{\phi}{2}}{\cos \frac{\phi}{2}} \rho_{i-1}(\phi) \!-\! f \int_0^\theta d \phi \frac{\cos \frac{\phi}{2}}{\sin \frac{\phi}{2}} \rho_{i-1}(\phi) \!+\! a_1 a_2 \int_0^\theta d \phi w_{i-2}(\phi) \;, \nonumber \\ w_i(\theta) & = & \int_0^\theta d \phi \rho_i(\phi) \;, \quad i \geq 1 \;, \label{w:a:geom} \end{eqnarray} respectively. In the first case, the solutions of the system of equations (\ref{w:geom}) are harmonic polylogarithms with argument equal to $\sin^2 \tfrac{\theta}{2}$. In the second case, the result contains the generalized log-sine functions \cite{Lewin,FK99,D00,lsjk} and some of their generalizations studied in Ref.\ \cite{MKL05} (see also Ref.\ \cite{Anastasiou}). For illustration, we present the first several terms of the $\varepsilon$-expansion~\footnote{The FORM representation of these expressions can be extracted from \cite{MKL}.} (see the proper relations, Table I of Appendix C in Ref.\ \cite{DK04}): \begin{eqnarray} && \left.
_2F_1\left( \begin{array}{c} 1+a_1 \varepsilon, 1 + a_2 \varepsilon \\ \frac{3}{2} + f \varepsilon \end{array} \right| \sin^2 \tfrac{\theta}{2} \right) = \frac{(1+2f\varepsilon)}{ \sin \theta} \nonumber \\ && \hspace{1mm} \times \Biggl( \theta + 2 \varepsilon \Biggl\{ \gamma_f \left( \Ls{2}{\pi \!-\! \theta} \!-\! \theta L_{\theta} \right) - f \left( \Ls{2}{\theta} \!+\! \theta l_{\theta} \right) \Biggr\} \nonumber \\ && \hspace{1mm} + \varepsilon^2 \Biggl\{ 2 f \gamma_{2f} \Ls{3}{\theta} \!+\! 2 \gamma_f \gamma_{2f} \Ls{3}{\pi-\theta} \!-\! f \gamma_f \Ls{3}{2 \theta} \nonumber \\ && \hspace{7mm} \!+\! 4 f \gamma_f \left[ \Ls{2}{\theta} L_\theta \!-\! \Ls{2}{\pi \!-\! \theta} l_\theta \!+\! \theta L_\theta l_\theta \right] + 4 f^2 \Ls{2}{\theta} l_\theta - 4 \gamma_f^2 \Ls{2}{\pi-\theta} L_\theta \nonumber \\ && \hspace{7mm} + 2 f^2 \theta l^2_\theta + 2 \gamma_f^2 \theta L_\theta^2 + \frac{1}{6} a_1 a_2 \theta^3 + \gamma_f \gamma_{2f} \pi \zeta_2 \Biggr\} \nonumber \\ && \hspace{1mm} + \varepsilon^3 \Biggl\{ \frac{4}{3} \gamma_{2f} \left[ (a_1+a_2) \gamma_f \Ls{4}{\pi-\theta} + f^2 \Ls{4}{\theta} - 3 f \gamma_f \Lsc{2,3}{\theta} \right] - \frac{2}{3} f^2 \gamma_{f} \Ls{4}{2\theta} \nonumber \\ && \hspace{7mm} + 4 a_1 a_2 \left[ 2 f \Cl{4}{\theta} - 2 \gamma_f \Cl{4}{\pi-\theta} - f \Cl{3}{\theta} \theta - \gamma_f \Cl{3}{\pi-\theta} \theta \right] \nonumber \\ && \hspace{7mm} + 2 \left[ f l_\theta + \gamma_f L_\theta \right] \left[ f \gamma_f \Ls{3}{2\theta} - 2 \gamma_f \gamma_{2f} \Ls{3}{\pi-\theta} - 2 f \gamma_{2f} \Ls{3}{\theta} \right] \nonumber \\ && \hspace{7mm} + 2 \left[ f l_\theta + \gamma_f L_\theta \right]^2 \left[ 2\gamma_{2f} \Ls{2}{\pi-\theta} - f \Ls{2}{2\theta} \right] + a_1 a_2 \gamma_{2f} \theta^2 \Ls{2}{\pi-\theta} \nonumber \\ && \hspace{7mm} - \frac{1}{2} a_1 a_2 f \Ls{2}{2\theta} \theta^2 \!-\! \frac{1}{3} a_1 a_2 \theta^3 \left[ f l_\theta \!+\! \gamma_f L_\theta \right] \!-\! \frac{4}{3} \theta \left[ f l_\theta \!+\! \gamma_f L_\theta \right]^3 \!-\! 2 \gamma_f \gamma_{2f} \pi \zeta_2 \left[ f l_\theta \!+\! \gamma_f L_\theta \right] \nonumber \\ && \hspace{7mm} + a_1 a_2 (3 a_1 + 3 a_2 - 7 f) \theta \zeta_3 - 2 (a_1 + a_2 )\gamma_f \gamma_{2f} \pi \zeta_3 \Biggr\} + {\cal O} (\varepsilon^4) \Biggr) \label{A_expansion:1} \end{eqnarray} and \begin{eqnarray} && \hspace{-5mm} \left. _2F_1\left( \begin{array}{c} a_1 \varepsilon, a_2 \varepsilon \\ \frac{1}{2} + f \varepsilon \end{array} \right| \sin^2 \tfrac{\theta}{2} \right) = 1 + a_1 a_2 \varepsilon^2 \Biggl( \frac{1}{2} \theta^2 \nonumber \\ && \hspace{1mm} + \varepsilon \Biggl\{ 2 f \Ls{2}{\theta} \theta \!-\! 2 \gamma_f \Ls{2}{\pi-\theta} \theta \!+\! 4 \gamma_f \Cl{3}{\pi-\theta} \!+\! 4 f \Cl{3}{\theta} \!+\! (3a_1 \!+\! 3a_2 \!-\! 7f) \zeta_3 \Biggr\} \nonumber \\ && \hspace{1mm} + \varepsilon^2 \Biggl\{ 2 \gamma_f \gamma_{2f} \Ls{3}{\pi-\theta} \theta \!-\! f \gamma_f \Ls{3}{2\theta} \theta \!+\! 2 f \gamma_{2f} \Ls{3}{\theta} \theta \!+\! \frac{1}{24} a_1 a_2 \theta^4 \nonumber \\ && \hspace{7mm} - 2 \left[ f \Ls{2}{\theta} \!-\! 
\gamma_{f} \Ls{2}{\pi-\theta} \right]^2 + \gamma_f \gamma_{2f} \theta \pi \zeta_2 \Biggr\} + {\cal O} (\varepsilon^3) \Biggr) \;, \label{A_expansion:2} \end{eqnarray} where $$ L_\theta = \ln\left( 2 \cos \frac{\theta}{2} \right) \;, \quad l_\theta = \ln\left( 2 \sin \frac{\theta}{2} \right) \;, $$ the generalized log-sine function is defined as \begin{equation} \LS{j}{k}{\theta} = - \int\limits_0^\theta {\rm d}\phi \; \phi^k \ln^{j-k-1} \left| 2\sin\frac{\phi}{2}\right| \, , \quad \Ls{j}{\theta} = \LS{j}{0}{\theta} \; , \label{log-sine} \end{equation} and we use the notation $\Lsc{2,3}{\theta}$ for the special combination (see Eq.~(2.18) in Ref.\ \cite{DK04}) \begin{eqnarray} \label{Lsc-Ti} \Lsc{2,3}{\theta} &=& \tfrac{1}{12}\Ls{4}{2\theta} - \tfrac{1}{3}\Ls{4}{\theta} + 2 \Ti{4}{\tan\tfrac{\theta}{2}} - 2 \ln\left( \tan\tfrac{\theta}{2} \right)\; \Ti{3}{\tan\tfrac{\theta}{2}} \nonumber \\ && + \ln^2\left( \tan\tfrac{\theta}{2} \right)\; \Ti{2}{\tan\tfrac{\theta}{2}} - \tfrac{1}{6} \theta \ln^3\left( \tan\tfrac{\theta}{2} \right) \; , \end{eqnarray} where the functions $\Ti{N}{z}$ are defined as \cite{Lewin} \begin{equation} \label{Ti_N} \Ti{N}{z} = {\rm Im}\left[ \Li{N}{{\rm i}z}\right] = \frac{1}{2 {\rm i} } \Bigl[\Li{N}{ {\rm i}z} - \Li{N}{- {\rm i}z}\Bigr] \;, \qquad \Ti{N}{z} = \int\limits_0^z \frac{{\rm d}x}{x}\; \Ti{N-1}{x} \; . \end{equation} These functions receive special interest in physics through their role in the so-called ``single-scale'' diagrams, which depend only on one massive scale parameter. The massless propagator-type diagrams, bubble-type diagrams and propagator-type diagrams on mass shell all belong to this class. In particular, the single-scale diagrams with two massive particle cuts correspond to hypergeometric functions with argument equal to $z=1/4$ \cite{FKK,KV00,DK01}. In this case, the value of the conformal variable $y$ is equal to the primitive ``sixth root of unity'', $y=\exp\left( i \frac{\pi}{3} \right)$. In contrast to the multiple polylogarithms (\ref{mp}) at the primitive sixth root of unity studied in Ref.\ \cite{Borwein:2000}, and the more complicated case of coloured polylogarithms of the sixth root of unity studied by Broadhurst in Ref.\ \cite{Broadhurst:1998}, the physically interesting case corresponds to coloured polylogarithms with square roots of unity (\ref{color}) (harmonic polylogarithms) with argument equal to the primitive sixth root of unity. In this case, some new transcendental constants, in addition to those studied in Ref.\ \cite{Borwein:2000}, will be generated. The set of independent constants up to {\bf weight 5} was constructed in Refs.\ \cite{FK99,DK01,MKL05}. \subsection{Special cases: all-order $\varepsilon$-expansion in terms of Nielsen polylogarithms} \label{special} One advantage of the trigonometric representation used in the previous section is the theorem proved in Ref.\ \cite{DK01} (see also Ref.\ \cite{lsjk}) that any generalized log-sine function (\ref{log-sine}) is expressible in terms of Nielsen polylogarithms \cite{Nielsen} only. Using this theorem, it was shown in Refs.\ \cite{DK01,DK04} that for the Gauss hypergeometric function \begin{equation} _2F_1 \left(\begin{array}{c|} 1, 1 \!+\! a\varepsilon \\ \tfrac{3}{2} \!+\! b\varepsilon\end{array} ~\sin^2 \tfrac{\theta}{2} \right) \;, \end{equation} the Laurent expansion is expressible in terms of only Nielsen polylogarithms in the three cases (i) $b=0$, (ii) $b=a$, (iii) $a=2b$.
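For completeness, we recall that the Nielsen polylogarithms admit the standard integral representation
\begin{equation}
S_{n,p}(z) = \frac{(-1)^{n+p-1}}{(n-1)! \; p!} \int_0^1 \frac{dt}{t} \; \ln^{n-1}(t) \, \ln^p(1-zt) \;,
\end{equation}
so that $S_{n,1}(z) = \Li{n+1}{z}$ reduces to the classical polylogarithm.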
Using the reduction algorithm \cite{MKL06}, we can claim that the Laurent expansions of the following functions are also expressible in terms of Nielsen polylogarithms only: \begin{eqnarray} _2F_1 \left(\begin{array}{c|} I_1, I_2 \!+\! \varepsilon \\ \tfrac{1}{2} \!+\! I_3 \end{array} ~\sin^2 \tfrac{\theta}{2} \right)\; , \quad _2F_1 \left(\begin{array}{c|} I_1, I_2 \!+\! \varepsilon \\ \tfrac{1}{2} \!+\! I_3 \!+\! \varepsilon \end{array} ~\sin^2 \tfrac{\theta}{2} \right) \;, \quad _2F_1 \left(\begin{array}{c|} I_1, I_2 \!+\! \varepsilon \\ \tfrac{1}{2} \!+\! I_3 \!+\! \tfrac{1}{2}\varepsilon \end{array} ~\sin^2 \tfrac{\theta}{2} \right) \;, \label{caseI} \end{eqnarray} where $I_1$, $I_2$ and $I_3$ are integers. It is interesting to analyze this solution from the point of view of Eq.~(\ref{w:a:geom}). Due to the fact that $a_1 = 0$, the last term in Eq.~(\ref{w:a:geom}) is identically equal to zero. In case (i), only the first term survives, with an integration kernel of the form $d \ln(\cos \tfrac{\phi}{2})$. In case (ii), only the second term survives, and the integration kernel has the form $d \ln(\sin \tfrac{\phi}{2})$. In case (iii), the first and second terms can be reduced to the second case with a doubled argument. The statement about the expressibility of {\it inverse binomial sums} in terms of log-sine functions, proved in Ref.\ \cite{DK04} (see also Ref.\ \cite{MKL04}), applies to all three of these cases. We can extend the class of Gauss functions whose $\varepsilon$-expansions are expressible in terms of only Nielsen polylogarithms by using algebraic relations\footnote{This can also be derived via the integral representation.} between functions of fractional-linear arguments (see Sec.\ 3 in Ref.\ \cite{MKL06}). The cases which may be expressed in this manner are summarized in {Table I}, where $a,b,c$ are parameters of the Gauss hypergeometric functions ${}_2F_1(a,b;c;z)$ and $I_1$, $I_2$ and $I_3$ are integers: $$ \begin{array}{|c|c|c|c|c|c|c|c|c|c|}\hline \multicolumn{10}{|c|}{\rm\bf{Table\ \ I} } \\ \hline a & I_1 & I_1 & I_1 & I_1 & I_1 & I_1 & I_1 \!+\! \varepsilon & I_1 \!+\! \varepsilon & I_1 \!+\! 2 \varepsilon \\ \hline b & \frac{1}{2} \!+\! I_2 & \frac{1}{2} \!+\! I_2 \!+\! \varepsilon & \frac{1}{2} \!+\! I_2 \!-\! \varepsilon & \frac{1}{2} \!+\! I_2 \!+\! \varepsilon & \frac{1}{2} \!+\! I_2 & \frac{1}{2} \!+\! I_2 \!+\! \varepsilon & \frac{1}{2} \!+\! I_2 & \frac{1}{2} \!+\! I_2 \!+\! \varepsilon & \frac{1}{2} \!+\! I_2 \!+\! \varepsilon \\ \hline c & \frac{1}{2} \!+\! I_3 \!+\! \varepsilon & \frac{1}{2} \!+\! I_3 & \frac{1}{2} \!+\! I_3 \!+\! \varepsilon & I_3 \!+\! \varepsilon & I_3 \!+\! \varepsilon & I_3 \!+\! 2 \varepsilon & I_1 \!+\! 1\!+\! \varepsilon & I_1 \!+\! 1 \!+\! \varepsilon & I_1 \!+\! 1 \!+\! 2 \varepsilon \\ \hline \end{array} $$ The results of this section can be formulated as follows: \begin{itemize} \item {\bf Proposition 1}: {\it All cases of Gauss hypergeometric functions with half-integer values of parameters for which the all-order $\varepsilon$-expansion is expressible in terms of only Nielsen polylogarithms are described by Eq.\ (\ref{caseI}) or by the parameters shown in Table I. } \end{itemize} \section{Conclusions} The main result of this paper is the proof of {\bf Theorem 1}, as stated also in the abstract.
The proof includes two steps: (i) the algebraic reduction of Gauss hypergeometric functions of the type in {\bf Theorem 1} to basic functions and (ii) the iterative algorithms for calculating the analytical coefficients of the $\varepsilon$-expansion of basic hypergeometric functions. In implementing step (i), the algebraic relations between basis functions with half-integer values of parameters reduce all of the cases to one basic function of the type (\ref{2F1-type1}) and its first derivative (see details in Ref.\ \cite{MKL06}). In step (ii), the algorithm is constructed for integer values of parameters in Eq.~(\ref{w}) and for basis Gauss hypergeometric functions with half-integer values of parameters in Eq.~(\ref{w:a}). This allows us to calculate the coefficients directly, without reference to multiple sums. It is interesting to note that the Laurent expansions of the Gauss hypergeometric functions with integer values of parameters are expressible in terms of multiple polylogarithms of one variable (see Eq.~(\ref{mp})) or the Remiddi-Vermaseren harmonic polylogarithms with multiple indices taking only the values $0$ and $1$. The argument of the resulting functions coincides with the original variable of the hypergeometric function. For Gauss hypergeometric functions with half-integer values of parameters, the coefficients of the $\varepsilon$-expansion produce the full set of harmonic polylogarithms, or coloured multiple polylogarithms of one variable (see Eq.~(\ref{colored})). These functions depend on a new variable, related to the original variable by a conformal transformation (see Ref.\ \cite{MKL06}). For special values of the argument of the hypergeometric function, $z<1$, the coloured multiple polylogarithms of one variable may be split into real and imaginary parts. This case has been discussed in section \ref{imaginary}. It was shown that the physically interesting case, representing single-scale diagrams with two massive particle cuts, corresponds to coloured polylogarithms (\ref{color}) with argument equal to a primitive ``sixth root of unity'', $y=\exp\left( i \frac{\pi}{3} \right)$. This gives an explanation of the proper ``basis of transcendental constants'' constructed in Refs.\ \cite{FK99} and \cite{DK01}, and its difference from the proper basis of David Broadhurst \cite{Broadhurst:1998}. In Section \ref{special}, a subset of Gauss hypergeometric functions is analyzed for which the all-order $\varepsilon$-expansion is expressible in terms of Nielsen polylogarithms only. In particular, we have formulated the proposition that the Gauss hypergeometric functions with half-integer values of parameters whose all-order $\varepsilon$-expansion is expressible in terms of Nielsen polylogarithms only are those described in (\ref{caseI}) or in Table I. In Appendix \ref{appendix}, we discuss the construction of the all-order Laurent expansion of the Gauss hypergeometric function (\ref{gauss:1a}) around $z=1$. \acknowledgments We are grateful to A.~Davydychev for useful discussions. M.Yu.K. is thankful to the participants of the conference ``Motives and Periods'', University of British Columbia, Vancouver, June 5-12, 2006 \cite{vancouver}, for interesting discussions. Special thanks go to Andreas Rosenschon for the invitation and financial support, and to D.~Kreimer and H.~Gangl for numerous useful discussions and suggestions. M.Yu.K. is very grateful to Laura Dolchini for moral support while this paper was being written.
This research was supported in part by RFBR grant \#04-02-17192, NATO grant PST.CLG.980342 and DOE grant DE-FG02-05ER41399.
\section{Introduction} Compact white dwarf (WD) binaries (with orbital periods in the range from minutes to hours) are important for several areas of astrophysics. The orbits of these systems decay via the emission of gravitational waves, constituting the largest signals for the next generation space-based gravitational wave interferometers. Systems of sufficiently short orbital period will merge within a Hubble time, the result of which may produce a variety of exotic objects, such as helium-rich sdO stars, R CrB stars and AM CVn binaries. Most importantly, when the total binary mass is near the Chandrasekhar limit, the merged WDs may collapse into a neutron star or explode as a Type Ia supernova (e.g., Webbink 1984; Iben \& Tutukov 1984). Recent studies have provided support for such ``double degenerate'' progenitors of SNe Ia (e.g., Gilfanov \& Bogdan 2010; Di Stefano 2010; Maoz et al.~2010; Li et al.~2011; Bloom et al.~2012; Schaefer \& Pagnotta 2012). The outcome of a WD binary merger depends on the masses of the WDs and their pre-merger conditions (e.g., Segretain et al.~1997; Yoon et al.~2007; Loren-Aguilar et al.~2009; van Kerkwijk et al.~2010; Dan et al.~2012; Raskin et al.~2012). Most previous studies of pre-merger binary WDs have focused on equilibrium tides and considered tidal dissipation in a parameterized way (e.g., Mochkovitch \& Livio 1989; Iben et al. 1998; Willems et al.~2010; Piro 2011). None of these studies have sought to predict the magnitude and location of tidal heating due to dynamical tides, which dominate the tidal responses of the binary WDs. In two recent papers (Fuller \& Lai 2011, 2012, hereafter paper I and paper II, respectively), we presented the first {\it ab initio} calculations of dynamical tides in realistic WD models. In paper I, we considered resonant excitations of WD g-modes during binary decay and showed that the modes reach non-linear amplitudes near the surface of the star. This implies that, rather than exciting discrete g-modes, the binary companion will excite a continuous train of gravity waves, which propagate outward and dissipate in the outer envelope of the WD. We studied such continuous tidally excited waves in paper II. For a canonical Carbon/Oxygen WD (consisting of a CO core with a He-H envelope), we showed that the outgoing waves are primarily launched at the CO/He transition region, and propagate toward the WD surface, where they are likely dissipated through a combination of non-linear processes and radiative damping. We computed the energy and angular momentum flux carried by the waves in order to predict the orbital and spin evolution of WDs in compact binaries. We found that such dynamical tides cause the binary WDs to be nearly synchronized prior to merger. Furthermore, the tidal heating rate can be quite large at short orbital periods (exceeding tens of solar luminosities just before merger, depending on the system parameters), potentially leading to significant observable signatures. In this {\it Letter}, we show that tidal heating may trigger a thermonuclear runaway hydrogen fusion event in a CO WD. The observational consequence of such an event would likely be an outburst that resembles a classical nova. We call this new phenomenon a ``Tidal Nova'' (TN). Unlike all other types of novae or supernovae, a TN does not rely on mass accretion or collapse. We present a simple two-zone model for the angular momentum evolution of a differentially rotating WD, which we use to calculate the radial tidal heating profile within the WD.
We then evolve the WD model including tidal heating to calculate changes in its temperature, luminosity, and internal structure. For a wide range of physically plausible parameters, we demonstrate that tidal heating induces a thermonuclear runaway event. Finally, we discuss the observational signatures of such an event, and compare our predictions to observations of short-period WD binaries. \section{Energy and Angular Momentum of Tidally Excited Gravity Waves} Using the method described in Paper II, we calculate the amplitude of tidally excited gravity waves inside a WD. We consider a circular orbit with angular frequency $\Omega$. The WD spins at an angular frequency $\Omega_s$, and the spin is aligned with the orbit. In the corotating frame, the frequency of the dominant $l=m=2$ tidal potential is $\omega=2(\Omega-\Omega_s)$. For a WD of mass $M$ and radius $R$ (and given internal structure) with a companion of mass $M'$, the energy and angular momentum fluxes carried by the gravity waves can be written as \begin{eqnarray} &&\dot{J}_z(\Omega,\omega) = T_0(\Omega) F(\omega),\label{Jdot}\\ && \dot{E}(\Omega,\omega) = \Omega T_0 F(\omega),\label{Edot} \end{eqnarray} where \begin{equation} \label{T0} T_0(\Omega) = \frac{G M'^2}{a} \bigg(\frac{R}{a}\bigg)^5, \end{equation} with $\Omega=\sqrt{GM_t/a^3}$ the orbital angular frequency ($M_t=M+M'$ is the total mass and $a$ is the orbital semi-major axis). The dimensionless function $F(\omega)$ (similar to the tidal lag angle in the language of equilibrium tide theory) determines the magnitude of wave excitation, and is strongly dependent on the internal structure of the WD and the tidal frequency $\omega$. In Paper II we have calculated $F(\omega)$ for $0.6 M_\odot$ CO WD models of various surface temperatures and slow rotation. We found that $F(\omega)$ is an erratic function of $\omega$ because of the ``quasi-resonance cavity'' formed by the CO core inside the He/H shell. However, because of the strong dependence of $F(\omega)$ on $\omega$ [the envelope of $F(\omega)$ approximately scales as $\omega^5$], at sufficiently short orbital periods, tidal spin-up combined with orbital decay via gravitational radiation ensures that $\omega \simeq {\rm const}$. The orbital period at which this transition occurs is $P_c \simeq 40$ minutes, depending on the WD masses and temperatures [see Eq.~(79) of Paper II]. At periods $P \lesssim P_c$, the tidal energy transfer rate is \begin{equation} \label{Edot2} \dot{E} \simeq \frac{3I\Omega^2}{2t_{\rm GW}}, \end{equation} where $I$ is the moment of inertia of the WD, and $t_{\rm GW}=|a/\dot a|$ is the binary inspiral time due to gravitational radiation, \begin{equation} t_{\rm GW}= 4.2\times 10^{5}\,{\rm yr} \bigg(\frac{M_{\odot}^2}{MM'}\bigg)\bigg(\frac{M_t} {2M_{\odot}}\bigg)^{\!\!1/3}\!\! \bigg(\!\frac{P}{10\,{\rm min}} \bigg)^{\!\! 8/3}. \label{tgw}\end{equation} When the outgoing gravity waves damp in the WD envelope and locally deposit their angular momentum, some of the wave energy is converted into rotational kinetic energy, while the rest is converted to heat. The heating rate is \begin{equation} \label{eheat} \dot{E}_{\rm heat} = \dot E \bigg(1 - \frac{\Omega_{s}}{\Omega}\bigg). \end{equation} If the WD maintains some differential rotation, $\Omega_s$ in the above equation should be the rotation rate of the layer in which the waves damp, and heat will also be generated through viscous angular momentum transport.
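As an illustration of the magnitudes involved, Eqs.~(\ref{Edot2})--(\ref{eheat}) can be evaluated directly. The following minimal sketch (in Python) does this for representative parameters; the moment of inertia and the degree of synchronization used here are illustrative assumptions, not results of this paper.
\begin{verbatim}
# Sketch of the tidal heating estimate of Eqs. (4)-(6), valid at short
# orbital periods where omega ~ const. Masses in solar units, P in minutes.
import math

YR = 3.156e7       # seconds per year
LSUN = 3.846e26    # solar luminosity in W

def t_gw_yr(M, Mp, P_min):
    """Gravitational-wave inspiral time |a/adot| of Eq. (5), in years."""
    return 4.2e5 / (M * Mp) * ((M + Mp) / 2.0)**(1.0/3.0) \
        * (P_min / 10.0)**(8.0/3.0)

def e_dot(I, M, Mp, P_min):
    """Tidal energy transfer rate of Eq. (4), in W, for I in kg m^2."""
    Omega = 2.0 * math.pi / (P_min * 60.0)   # orbital angular frequency
    return 1.5 * I * Omega**2 / (t_gw_yr(M, Mp, P_min) * YR)

def e_heat(I, M, Mp, P_min, spin_fraction):
    """Heating rate of Eq. (6); spin_fraction = Omega_s / Omega."""
    return e_dot(I, M, Mp, P_min) * (1.0 - spin_fraction)

# Example: 0.6 + 0.3 Msun pair at P = 12.75 min, core at 90% of Omega.
# I ~ 3e43 kg m^2 is a rough moment of inertia for a 0.6 Msun WD (assumed).
print(e_heat(3e43, 0.6, 0.3, 12.75, 0.9) / LSUN, "Lsun")
\end{verbatim}
For these (assumed) numbers the heating rate is of order a few solar luminosities, consistent with the statement above that the rate can reach tens of solar luminosities just before merger.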
\section{Two Zone Model for Tidal Heat Deposition} \label{critical} Our calculations indicate that the gravity waves reach non-linear amplitudes and break in the outer layers of the WD. The location of wave breaking depends on various parameters (e.g., orbital and tidal frequencies), but is always at $r \gtrsim 0.92R$ and exterior mass $\Delta M \lesssim 10^{-4} M$ (Paper II). Since a small fraction of the stellar mass absorbs the entire angular momentum flux, the outer layer spins up rapidly. If it spins up faster than angular momentum can be transported to the core, the outer layer will rotate synchronously with the orbit. Outgoing waves approaching the synchronized envelope will be absorbed near corotation and deposit their angular momentum, causing the synchronized envelope to move to larger depths (see Goldreich \& Nicholson 1989). We consider a simple two-zone model for the spin evolution of the WD. In this model, the envelope of the star rotates synchronously with the orbit ($\Omega_{\rm env} = \Omega$), while the core rotates sub-synchronously ($\Omega_{\rm core} < \Omega$). The envelope and core are coupled, with angular momentum being transferred to the core according to a parameterized coupling time, $t_{\rm coup}$. The angular momentum of the core-envelope system evolves according to \begin{eqnarray} &&\label{Jedot} \frac{d}{dt}\left(I_{\rm env}\Omega_{\rm env}\right) = \dot{J}_z(\Omega,\omega_{\rm core}) - \frac{I_{\rm env}}{t_{\rm coup}} (\Omega_{\rm env} - \Omega_{\rm core}),\\ &&\label{Jcdot} \frac{d}{dt}\left(I_{\rm core}\Omega_{\rm core}\right) = \frac{I_{\rm env}}{t_{\rm coup}} (\Omega_{\rm env} - \Omega_{\rm core}), \end{eqnarray} where $I_{\rm env}=I-I_{\rm core}$ is the moment of inertia of the envelope. Here, $\dot{J}_z$ is the angular momentum flux which can be calculated from equation (\ref{Jdot}). We have assumed that the gravity waves are excited in the core and absorbed in the envelope\footnote{This assumption is valid as long as the core-envelope boundary is above the C/He transition layer (with an exterior mass $\Delta M \approx 10^{-2} M_\odot$), which is the region where the outgoing gravity waves are excited.}. Consequently, the angular momentum source term $\dot{J}_z$ is only present in the envelope evolution equation, although it is dependent on the tidal frequency in the core, $\omega_{\rm core} = 2(\Omega - \Omega_{\rm core})$. Using $\Omega_{\rm env} =\Omega$, equations (\ref{Jedot}) and (\ref{Jcdot}) can be integrated to find $I_{\rm env}$ and $\Omega_{\rm core}$ as a function of time or orbital period. The mass $\Delta M_{\rm env}$ of the envelope corresponds to $I_{\rm env} \simeq (2/3)\Delta M_{\rm env}R^2$. The thickness (or $\Delta M_{\rm env}$) of the envelope is dependent on the parameter $t_{\rm coup}$. In stably stratified stars like WDs, angular momentum can be transported by magnetic fields. In the presence of a poloidal field $B$ connecting the core and envelope, $t_{\rm coup}$ can be estimated from the Alfven wave crossing time, \begin{equation} \label{ta} t_A = \frac{R\sqrt{4\pi\rho}}{B} \simeq 10^2 \ {\rm yr} \bigg(\frac{10^3 {\rm G}}{B}\bigg) \end{equation} for our CO WD model. For WDs without an intrinsic magnetic field, angular momentum may be transported via the Tayler-Spruit dynamo (Spruit 2002).
To estimate $t_{\rm coup}$, we calculate the effective viscosity for angular momentum transport via the Tayler-Spruit dynamo, $\nu_{TS}$, as outlined in Spruit 2002.\footnote{For simplicity, we have calculated the viscosity $\nu_{TS}$ without including the effects of composition gradients in the WD [see equation (32) in Spruit 2002]. A more realistic estimate of the rotational profile of the WD should take composition gradients into account.} We find $t_{TS}\equiv \int_0^R(r/\nu_{TS})dr\approx 10^{3}\,{\rm yr}\,(P/45{\rm min})^{3/2}$. Thus we expect the coupling time to satisfy $t_{\rm coup} \lesssim 10^{3}\,{\rm yr}$ at the short orbital periods of interest. \begin{figure} \begin{centering} \includegraphics[scale=.6]{WD2zone63} \caption{\label{WDcrit} The mass $\Delta M_{\rm env}$ of the synchronized envelope as a function of orbital period for a $0.6 M_\odot$ CO WD model with a $0.3 M_\odot$ companion. The solid (black) line has $t_{\rm coup} = 1\,{\rm yr}$, the dot-dot-dashed (green) line $t_{\rm coup} = 10\,{\rm yr}$, the dot-dashed (orange) line $t_{\rm coup} = 10^2\,{\rm yr}$, and the dashed (red) line $t_{\rm coup} = 10^3\,{\rm yr}$.} \end{centering} \end{figure} Figure \ref{WDcrit} plots the value of $\Delta M_{\rm env}$ as a function of orbital period for our $0.6 M_\odot$ WD model with a $0.3 M_\odot$ companion, using values of $t_{\rm coup}$ ranging from $1\,{\rm yr}$ to $10^3\,{\rm yr}$. We begin our calculation at $P_{\rm orb} > 1\,{\rm hr}$ and use $I_{\rm env,0}=0$ and $\Omega_{{\rm core},0}=0$, as appropriate at long orbital periods where tidal effects are negligible. We see that for the range of $t_{\rm coup}$ considered, $\Delta M_{\rm env}$ remains small ($\lesssim 10^{-2} M_{\odot}$) at all orbital periods of interest. Thus, the synchronized envelope most likely does not extend down to the C/He transition layer where the gravity waves are excited, justifying our assumption that $\dot{J}_z$ is a function of $\Omega_{\rm core}$. However, the envelope does extend to very large optical depths, suggesting that binary WDs may be observed to be synchronized at large orbital periods even if their cores are not synchronized. Note that since $I_{\rm env} \ll I$, the core of the star contains most of the angular momentum, and its spin evolves in the same manner as discussed in Paper II. \section{Tidal Heating and Unstable Nuclear Burning} In the two-zone model discussed in \S 3, the total tidal heating rate $\dot E_{\rm heat}$ may be calculated from equation (\ref{eheat}) with $\Omega_{s}=\Omega_{\rm core}$, and the tidal heat is deposited entirely at the base of the synchronized envelope where $\Delta M = \Delta M_{\rm env}$. In a real WD, the heat deposition will occur over a range of depths that depends on the details of wave breaking and viscous angular momentum transport. For simplicity, here we choose to deposit the tidal heat uniformly per unit mass in the synchronized envelope. The heating rate per unit mass, $\dot{\varepsilon}_{\rm heat}$, is then \begin{align} \label{epsheat1} &\dot{\varepsilon}_{\rm heat} = 0 \quad {\rm for} \quad \Delta M > \Delta M_{\rm env} \\ \nonumber &\dot{\varepsilon}_{\rm heat} = \frac{\dot{E}_{\rm heat}}{\Delta M_{\rm env}} \quad {\rm for} \quad \Delta M < \Delta M_{\rm env}. \end{align} Although the radial dependence of this heating function is unlikely to be realistic, we find that the results below are not strongly dependent on the form of $\dot\varepsilon_{\rm heat}$.
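Schematically, the two-zone evolution of Eqs.~(\ref{Jedot})--(\ref{Jcdot}), with the synchronization condition $\Omega_{\rm env}=\Omega$ imposed, amounts to the following update, from which $\dot{\varepsilon}_{\rm heat}$ of Eq.~(\ref{epsheat1}) follows. This is a forward-Euler sketch only: the wave flux $\dot{J}_z$ of Eq.~(\ref{Jdot}) and the orbital evolution $\Omega(t)$ are external inputs, and the slow change of $\Omega$ over a single step is neglected.
\begin{verbatim}
# One explicit time step of the two-zone model, Eqs. (7)-(8), with
# Omega_env pinned to the orbital frequency Omega. Consistent (SI) units;
# Jdot(Omega, omega_core) is a user-supplied function from the wave
# calculation of Paper II (stubbed here).
def two_zone_step(I_env, Omega_core, I_core, Omega, dt, t_coup, Jdot, R):
    omega_core = 2.0 * (Omega - Omega_core)           # core tidal frequency
    torque = (I_env / t_coup) * (Omega - Omega_core)  # core-envelope coupling
    # Envelope: with Omega_env = Omega (held fixed over the step), the wave
    # flux minus the coupling torque is absorbed by growing I_env.
    I_env = I_env + (Jdot(Omega, omega_core) - torque) * dt / Omega
    # Core: spun up by the coupling torque only.
    Omega_core = Omega_core + torque * dt / I_core
    dM_env = 1.5 * I_env / R**2   # envelope mass, I_env ~ (2/3) dM_env R^2
    return I_env, Omega_core, dM_env
\end{verbatim}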
To understand the effect of tidal heating on the WD properties, we evolve WD models using the extra heating term calculated via equation (\ref{epsheat1}). We use the one-dimensional stellar evolution code MESA (Paxton et al.~2010) to evolve our WD models, starting from an initial orbital period of one hour. We present results for a $0.6 M_\odot$ CO WD model with a $\sim10^{-4} M_\odot$ hydrogen shell and a $0.3 M_\odot$ companion. \begin{figure} \begin{centering} \includegraphics[scale=.6]{WD63tempT5T10} \caption{\label{6temp} The surface temperature of the $0.6 M_\odot$ CO WD model with a $0.3 M_\odot$ companion as a function of orbital period, for initial temperatures of $5000$~K (top) and $10^4$~K (bottom). The solid black lines are calculated with $t_{\rm coup}=1\,{\rm yr}$ while the dashed (red) lines are calculated with $t_{\rm coup}=10^3\,{\rm yr}$. The dotted lines are calculated for a WD with no tidal heating and the same initial temperature. The (blue) dot-dashed lines correspond to equation (\ref{Ttide}). The (red) stars mark the points at which tidal novae occur. The asterisks mark the position of the secondary of the 12.75 minute binary WD system SDSS J065133+284423 (Brown et al.~2011).} \end{centering} \end{figure} Figure \ref{6temp} displays the surface temperature as a function of orbital period for our tidally heated WD. For comparison, we also show the temperature of a non-tidally heated WD and the ``tidal heating temperature'', defined as \begin{equation} \label{Ttide} T_{\rm eff,heat}= \bigg(\frac{\dot{E}_{\rm heat}}{4\pi R^2 \sigma}\bigg)^{1/4}. \end{equation} At long orbital periods ($P \gtrsim 45$ minutes), the tidal heating has little effect on the surface temperature of the WD. At shorter periods ($P \lesssim 30$ minutes), the temperature becomes substantially larger due to tidal heating. Several of the curves end abruptly due to the ignition of a thermonuclear runaway event, at which point we terminate our evolution calculations. For small values of $t_{\rm coup}$, the tidal heat is deposited at shallow depths and quickly diffuses to the surface such that the luminosity of the WD is $L \simeq L_0 + \dot{E}_{\rm heat}$, where $L_0$ is the luminosity of a non-tidally heated WD. However, for larger values of $t_{\rm coup}$, most of the tidal heat is deposited deeper in the WD where it cannot quickly diffuse outward. This leads to lower surface temperatures, although the internal temperature may increase substantially. Figure \ref{63temp} shows the interior temperature profile of our WD at three different orbital periods, using $t_{\rm coup}=10^3\,{\rm yr}$. At long orbital periods, the temperature profile is similar to that of a non-tidally heated WD. As the orbital period decreases, the interior heats up, with the local temperature maximum at $\Delta M\sim \Delta M_{\rm env}$. If the base of the hydrogen layer reaches a temperature of $\sim 10^7$~K, hydrogen burning will be ignited. \begin{figure} \begin{centering} \includegraphics[scale=.6]{wdtempprof63T5000} \caption{\label{63temp} Temperature profile of the WD (as a function of exterior mass $\Delta M$) at orbital periods of 45 minutes (black), 20 minutes (green), and 12 minutes (red). 
These temperatures are calculated for the $0.6 M_\odot$ WD model with an initial surface temperature of $T_{\rm eff}=5000$~K, a $0.3 M_\odot$ companion and $t_{\rm coup}=10^3\,{\rm yr}$.} \end{centering} \end{figure} In the depicted model, the layer just above the He/H transition (at $\Delta M \approx 10^{-4} M_\odot$) is composed of largely degenerate hydrogen gas. The ignition of fusion in this layer can thus spark a thermonuclear runaway. In general, our calculations show that these {\it tidal novae} occur only in initially cool WDs ($T_{\rm eff} \lesssim 1.2 \times 10^4$~K in the absence of tidal heating). They do not occur in hotter WDs because the hydrogen is not degenerate and can burn stably. Also, tidal novae require that the waves deposit some of the heat near the base of the hydrogen layer, i.e., $10^{-5}M_\odot \lesssim \Delta M_{\rm env} \lesssim 10^{-3} M_\odot$. In our two-zone model, such heating occurs for coupling times $10\,{\rm yr} \lesssim t_{\rm coup} \lesssim 10^4\,{\rm yr}$. Overall, we find that the tidal novae occur at orbital periods $5~{\rm min} \lesssim P_{\rm orb} \lesssim 20~{\rm min}$, depending on the location of heat deposition, initial temperature of the WD, and companion mass. \section{Discussion} We have shown that under rather general conditions (see the last paragraph of Section 4), tidal dissipation in compact WD binaries can lead to nova outbursts prior to binary merger or mass transfer. While we do not attempt to predict the detailed observational signal of a tidal nova (TN), we speculate that it may be very similar to a classical nova. However, in contrast to classical novae in CVs, a TN would occur in a compact system with no evidence for mass transfer. Our results indicate that a TN would precede the beginning of mass transfer or merger by about $t_{\rm GW}/4 \sim 10^5-10^6$~yrs [see Eq.~(\ref{tgw})], provided the conditions outlined in the previous paragraph are satisfied. In most classical novae, the initial outburst is followed by a period of stable hydrogen burning near the Eddington luminosity, in which the hydrogen shell of the WD inflates to a radius of order $R_\odot$. However, the ultracompact nature of the WD system involved in a TN (where $a\sim R_\odot/4$) may preclude such a phase because the stably burning hydrogen shell would inflate beyond the WD's Roche lobe. This shell may then accrete onto the companion star or be ejected from the system. Therefore, we expect most of the hydrogen to be burned or ejected during a TN. In the absence of mass transfer to supply fresh hydrogen, recurrent novae would be unlikely. Thus, the occurrence rate of these TN may be comparable to that of WD mergers involving a CO WD. Our theory can be constrained by comparing the predictions of our tidal heating calculations to observed compact WD binaries. The 12.75~minute system SDSS J065133+284423 provides the best opportunity (Brown et al.~2011). This system is composed of a primary with $T_{\rm eff}=16400$~K and mass $0.25 M_\odot$, and a secondary with $T_{\rm eff}\approx 9000$~K and mass $0.55 M_\odot$. Comparison with Figure \ref{6temp} indicates that the luminosity of the secondary is likely dominated by tidal heating. Our result for a CO WD with an initial temperature of $5000$~K and a value of $t_{\rm coup}=10^3~{\rm yr}$ is most consistent with the observed temperature of the secondary. These results indicate that a TN may occur in this system in the future.
In principle, tidal heating may change the structure of the WD enough to alter the dynamics of gravity wave propagation. However, we find that this is not the case (i.e., no interior convection zone forms), with the exception of a thermonuclear runaway event. Our simple two-zone model for the WD obviously needs improvement, and we have neglected the effects of mixing induced by the breaking gravity waves and viscous angular momentum transport. If the mixing is strong enough to smooth out the WD composition gradients, the dynamics of gravity wave excitation and tidal heat deposition may be altered. Furthermore, if the surface hydrogen mixes into the WD interior where it burns, the surface hydrogen layer will be gradually depleted and a TN will not occur. Observations of the ejecta of classical novae indicate substantial enrichment with core elements, although the mixing mechanism is not well understood (Truran 2002). These and other aspects of TN in compact WD binaries warrant further study. Future observations may be able to test whether TN occur and in turn provide information about the tidal processes at work in WD binaries. The observation of a nova-like event in a system with no evidence for mass transfer would be strong evidence for the existence of TN and for the tidal heating mechanism studied in this paper. Measurements of hydrogen surface abundances in compact WD systems could also constrain our theory. The observation of a WD with a thick hydrogen envelope in a very tight ($P \lesssim 5$ minutes) detached binary would indicate that TN do not usually occur. If WDs in tight binaries are observed to have little to no hydrogen on their surface, this may indicate that TN have stripped the surface hydrogen, or that the hydrogen has been destroyed due to efficient mixing processes. Observations of compact binary WDs detected in future surveys may provide opportunities to test these theories. We thank Bill Paxton, Lars Bildsten, and Eliot Quataert for useful discussions. JF acknowledges the hospitality (Fall 2011) of the Kavli Institute for Theoretical Physics at UCSB (funded by the NSF through Grant 11-Astro11F-0016) where part of the work was carried out. This work has been supported in part by NSF grant AST-1008245, NASA grants NNX12AF85G and NNX10AP19G.
\section{Introduction} The Jacobsthal polynomials $J_{n}(x)$ were first studied by E. E. Jacobsthal around 1919. Jacobsthal polynomials $J_{n}(x)$ can be defined by the recurrence $J_{0}(x) = 0, J_{1}(x) = 1$ and $J_{n}(x) = J_{n-1}(x) + xJ_{n-2}(x)$, for $n \geq 2$. Clearly, $J_{n}(1) = F_{n}$, the sequence of Fibonacci numbers. The sequence $J_{n} = J_{n}(2)$, which satisfies $J_{n+2}=J_{n+1}+2J_{n}$, is the sequence of Jacobsthal numbers. The first few terms are $\{0, 1, 1, 3, 5, 11, 21, 43, 85, ...\}$. The first appearance of this sequence was in [4]. It was Horadam who first considered such a sequence in detail in his seminal paper [5]. Motivated by Horadam's work, a lot of research has been conducted. For some recent works, see [1-3,8], for example. Recently, there has been an increasing interest in studying the reciprocal sums of the Fibonacci numbers ($J_{n}(1)$). For instance, see [9-11].\\ In this article we confine ourselves to the infinite sums and alternating infinite sums of the reciprocals of the Jacobsthal numbers and of the squares of the Jacobsthal numbers. We obtain many interesting results in this direction. We first state several well-known results on Jacobsthal numbers, which will be used throughout the article. Detailed expositions can be found in [6,7]. \begin{lemma} For $n\geq 1$, we have \begin{equation}\label{*} J_{n}+J_{n+1}=2^{n} \end{equation} \end{lemma} \begin{lemma} For any positive integer $n\geq 1$, we have \begin{equation}\label{*} J_{n}< 2^{n} \end{equation} For any positive integer $n\geq 2$, we have \begin{equation}\label{*} J_{n}< 2^{n-1} \end{equation} For any positive integer $n\geq 3$, we have \begin{equation}\label{*} 2^{n-2}<J_{n}< 2^{n-1} \end{equation} \end{lemma} \begin{lemma} For $n\geq 1, k\geq1, n\geq k$, the Jacobsthal numbers satisfy the Cassini-like formula \begin{equation}\label{*} J_{n+k}J_{n-k}-J_{n}^{2}= (-1)^{n-k+1}2^{n-k}J_{k}^{2} \end{equation} \end{lemma} \begin{lemma} For $n\geq 1$, we have \begin{equation}\label{*} J_{n+1}^{2}-J_{n}^{2} =2^{n+1}J_{n-1} \end{equation} \end{lemma} \begin{lemma} For $n\geq 1$, we have \begin{equation}\label{*} J_{n+1}^{2}+2J_{n}^{2}= J_{2n+1} \end{equation} \end{lemma} \section{Reciprocal sums of the Jacobsthal numbers} In this section, we establish bounds for the reciprocal sums of the Jacobsthal numbers and of their squares. \begin{theorem} Let $n\geq 2$. Then \\ $$ J_{n-2} < (\displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}})^{-1}< 4(J_{n-2}+1).$$ \end{theorem} \begin{proof} For $n\geq 1$, we have \begin{eqnarray*} \frac{1}{J_{n}}- \frac{2}{J_{n+2}}-\frac{1}{J_{n+3}}&=&\frac{J_{n+2}-2J_{n}}{J_{n}J_{n+2}}-\frac{1}{J_{n+3}}\\ &=& \frac{J_{n+1}}{J_{n}J_{n+2}}-\frac{1}{J_{n+3}}\\ &=&\frac{J_{n+1}J_{n+3}- J_{n}J_{n+2}}{J_{n}J_{n+2}J_{n+3}} \end{eqnarray*} Clearly, $J_{n+1}J_{n+3}- J_{n}J_{n+2}> 0$. Therefore,\\ $$\frac{1}{J_{n}}> \frac{1}{J_{n+2}}+ \frac{1}{J_{n+2}} +\frac{1}{J_{n+3}}.$$ Repeating this inequality for $n> 2$, we obtain \begin{eqnarray*} \frac{1}{J_{n-2}} &>& \frac{1}{J_{n}}+ \frac{1}{J_{n}} +\frac{1}{J_{n+1}}\\ &>& \frac{1}{J_{n}}+ \frac{1}{J_{n+1}} +(\frac{1}{J_{n+2}}+ \frac{1}{J_{n+2}}+\frac{1}{J_{n+3}})\\ &>& \frac{1}{J_{n}}+ \frac{1}{J_{n+1}} + \frac{1}{J_{n+2}}+\frac{1}{J_{n+3}}+(\frac{1}{J_{n+4}}+ \frac{1}{J_{n+4}}+\frac{1}{J_{n+5}})\\ &>& ...\\ &>& \displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}}. \end{eqnarray*} Therefore, for $n > 2$, we have \begin{equation}\label{*} \displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}}< \frac{1}{J_{n-2}}. \end{equation} Now assume that $k\geq m\geq 1$. Then \begin{center} $ \frac{J_{k-m}+1}{J_{k}}\geq \frac{2^{k-m-2}+1}{2^{k-1}}> 2^{-m-1}.$ \end{center} Let $m=k-n+2$.
Then \begin{equation*} \displaystyle \sum_{k=n}^{\infty} \frac{J_{n-2}+1}{J_{k}} > \displaystyle \sum_{k=n}^{\infty}2^{n-k-3}= \displaystyle \sum_{t=2}^{\infty}2^{-t-1}= \frac{1}{4}. \end{equation*} Therefore, \begin{equation}\label{*} \displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}} >\frac{1}{4(J_{n-2}+1)}. \end{equation} Combining (2.1) and (2.2), we get \begin{equation*} \frac{1}{4(J_{n-2}+1)} < \displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}}< \frac{1}{J_{n-2}}. \end{equation*} Hence, \begin{equation*} J_{n-2} < (\displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}})^{-1}< 4(J_{n-2}+1). \end{equation*} This completes the proof. \end{proof} \begin{theorem} Let $n \in \mathbf{N}$. If $n$ is odd, then\\ $$\lfloor (\displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}^{2}})^{-1}\rfloor \leq J_{n-1}J_{n}.$$ \end{theorem} \begin{proof} If $n=1$, then $\displaystyle \sum_{k=1}^{\infty} \frac{1}{J_{k}^{2}}> \frac{1}{J_{1}^{2}}=1$. This implies that $0< (\displaystyle \sum_{k=1}^{\infty} \frac{1}{J_{k}^{2}})^{-1}< 1$. Therefore, $\lfloor (\displaystyle \sum_{k=1}^{\infty} \frac{1}{J_{k}^{2}})^{-1}\rfloor = 0= J_{0}J_{1}.$ Let $n\geq 3$. Then \begin{eqnarray*} \frac{1}{J_{n-1}J_{n}}- \frac{1}{J_{n}^{2}}-\frac{2}{J_{n+1}^{2}}-\frac{4}{J_{n+1}J_{n+2}} &=& \frac{J_{n}-J_{n-1}}{J_{n-1}J_{n}^{2}}- \frac{2[J_{n+2}+2J_{n+1}]}{J_{n+1}^{2}J_{n+2}} \\ &=& \frac{2J_{n-2}}{J_{n-1}J_{n}^{2}}- \frac{2J_{n+3}}{J_{n+1}^{2}J_{n+2}} \\ &=& \frac{2J_{n-2}J_{n+1}^{2}J_{n+2}-2J_{n-1}J_{n}^{2}J_{n+3}}{J_{n-1}J_{n}^{2}J_{n+1}^{2}J_{n+2}} \\ &=& \frac{2J_{n+1}^{2}[J_{n}^{2}+(-1)^{n-1}2^{n-2}J_{2}^{2}]-2J_{n}^{2}[J_{n+1}^{2}+(-1)^{n}2^{n-1}J_{2}^{2}]}{J_{n-1}J_{n}^{2}J_{n+1}^{2}J_{n+2}}\\ &=& \frac{(-1)^{n-1}2^{n-1}J_{n+1}^{2}+(-1)^{n+1}2^{n}J_{n}^{2}}{J_{n-1}J_{n}^{2}J_{n+1}^{2}J_{n+2}}\\ &=& \frac{(-1)^{n-1}2^{n-1}(J_{n+1}^{2}+2J_{n}^{2})}{J_{n-1}J_{n}^{2}J_{n+1}^{2}J_{n+2}}\\ &=& \frac{(-1)^{n-1}2^{n-1}J_{2n+1}}{J_{n-1}J_{n}^{2}J_{n+1}^{2}J_{n+2}}\\ \end{eqnarray*} Since $n$ is odd, $$ \frac{1}{J_{n-1}J_{n}} > \frac{1}{J_{n}^{2}}+\frac{2}{J_{n+1}^{2}}+\frac{4}{J_{n+1}J_{n+2}}.$$ Repeating this for $n\geq 3$, we obtain \begin{eqnarray*} \frac{1}{J_{n-1}J_{n}} &>& \frac{1}{J_{n}^{2}}+\frac{2}{J_{n+1}^{2}}+\frac{4}{J_{n+1}J_{n+2}}\\ &>& \frac{1}{J_{n}^{2}}+\frac{2}{J_{n+1}^{2}}+ 4(\frac{1}{J_{n+2}^{2}}+\frac{2}{J_{n+3}^{2}}+\frac{4}{J_{n+3}J_{n+4}})\\ &>& \frac{1}{J_{n}^{2}}+\frac{2}{J_{n+1}^{2}}+ \frac{4}{J_{n+2}^{2}}+\frac{8}{J_{n+3}^{2}}+16 (\frac{1}{J_{n+4}^{2}}+\frac{2}{J_{n+5}^{2}}+\frac{4}{J_{n+5}J_{n+6}})\\ &>& ...\\ &>& \displaystyle \sum_{k=0}^{\infty} \frac{2^{k}}{J_{n+k}^{2}}\\ &>& \displaystyle \sum_{k=0}^{\infty} \frac{1}{J_{n+k}^{2}}\\ &=& \displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}^{2}} \end{eqnarray*} Therefore, \begin{equation*} \displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}^{2}} < \frac{1}{J_{n-1}J_{n}} \end{equation*} Hence, $\lfloor (\displaystyle \sum_{k=n}^{\infty} \frac{1}{J_{k}^{2}})^{-1}\rfloor \leq J_{n-1}J_{n}.$ \end{proof} \section{Alternating reciprocal sums of the Jacobsthal numbers} \begin{theorem} Let $n \in \mathbf{N}$.
If $n$ is even, then\\ $$\lfloor (\displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}})^{-1}\rfloor = 2^{n-1}-1.$$ \end{theorem} \begin{proof} For $n \geq 1$, we have \begin{eqnarray*} \frac{(-1)^{n}}{J_{n-1}+J_{n}-(-1)^{n}}-\frac{(-1)^{n}}{J_{n}} - \frac{(-1)^{n+1}}{J_{n}+J_{n+1}-(-1)^{n+1}} &=& \frac{(-1)^{n+1} J_{n-1}+1}{J_{n}(J_{n-1}+J_{n}-(-1)^{n})}+ \frac{(-1)^{n}}{J_{n}+J_{n+1}-(-1)^{n+1}} \\ &=& \frac{M}{J_{n}(J_{n-1}+J_{n}-(-1)^{n})(J_{n}+J_{n+1}-(-1)^{n+1})} \end{eqnarray*} where $M= (-1)^{n+1}J_{n-1}J_{n+1}+J_{n+1}-J_{n-1}+(-1)^{n}J_{n}^{2}+(-1)^{n}$. We simplify $M$ as follows \begin{eqnarray*} M &=&(-1)^{n+1}J_{n-1}J_{n+1}+J_{n+1}-J_{n-1}+(-1)^{n}J_{n}^{2}+(-1)^{n} \\ &=& (-1)^{n+1}(J_{n}^{2}+(-1)^{n}2^{n-1})+2^{n-1}+(-1)^{n}J_{n}^{2}+(-1)^{n} \\ &=& (-1)^{n} \end{eqnarray*} Since $n$ is even, \begin{eqnarray*} \frac{1}{(-1)^{n}(J_{n-1}+J_{n})-1} &> & \frac{(-1)^{n}}{J_{n}} + \frac{1}{(-1)^{n+1}(J_{n}+J_{n+1})-1} \end{eqnarray*} A repetition of this shows that, \begin{equation}\label{*} \displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}} < \frac{1}{(-1)^{n}(J_{n-1}+J_{n})-1} \end{equation} On the other hand, for $n\geq 1$, we find \begin{eqnarray*} \frac{(-1)^{n}}{J_{n}-(-1)^{n}}-\frac{(-1)^{n}}{J_{n-1}+J_{n}} + \frac{(-1)^{n+1}}{J_{n}+J_{n+1} } &=& (-1)^{n}[\frac{ J_{n-1}}{J_{n}(J_{n-1}+J_{n} )}- \frac{1}{J_{n}+J_{n+1}}] \\ &=& (-1)^{n}[\frac{ J_{n-1}J_{n+1}-J_{n}^{2}}{J_{n}(J_{n-1}+J_{n} )(J_{n}+J_{n+1})}] \\ &=& \frac{ 2^{n-1}}{J_{n}(J_{n-1}+J_{n} )(J_{n}+J_{n+1})} \\ &>& 0 \end{eqnarray*} Therefore, \begin{equation}\label{*} \displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}} > \frac{1}{(-1)^{n}(J_{n-1}+J_{n})}. \end{equation} Combining (3.1) and (3.2), we obtain (for $n$ even) \begin{equation*} J_{n-1}+J_{n}-1 < (\displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}})^{-1} < J_{n-1}+J_{n}. \end{equation*} Hence, \begin{equation*} \lfloor(\displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}})^{-1}\rfloor = J_{n-1}+J_{n}-1=2^{n-1}-1. \end{equation*} \end{proof} \begin{corollary} If $n$ is odd, then $$\lfloor(\displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}})^{-1}\rfloor \leq -(2^{n-1}+1).$$ \end{corollary} \begin{proof} If $n$ is odd, then $M=-1$. We conclude that $$\frac{1}{(-1)^{n}(J_{n-1}+J_{n})-1} < \frac{(-1)^{n}}{J_{n}} + \frac{1}{(-1)^{n+1}(J_{n}+J_{n+1})-1}$$ Applying this repeatedly and invoking the relation $J_{n-1}+J_{n}=2^{n-1}$ gives the desired result. \end{proof} \begin{theorem} Let $n$ be a positive integer. Then $\lceil (\displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}^{2}})^{-1} \rceil \leq J_{n-1}^{2}+J_{n}^{2}-1.$ \end{theorem} \begin{proof} \begin{eqnarray*} \frac{(-1)^{n}}{J_{n-1}^{2}+J_{n}^{2}-(-1)^{n}}-\frac{(-1)^{n}}{J_{n}^{2}} - \frac{(-1)^{n+1}}{J_{n}^{2}+J_{n+1}^{2}-(-1)^{n+1}} &=& \frac{(-1)^{n+1} J_{n-1}^{2}+1}{J_{n}^{2}(J_{n-1}^{2}+J_{n}^{2}-(-1)^{n})}+ \frac{(-1)^{n}}{J_{n}^{2}+J_{n+1}^{2}+(-1)^{n}} \\ &=& \frac{N}{J_{n}^{2}(J_{n-1}^{2}+J_{n}^{2}-(-1)^{n})(J_{n}^{2}+J_{n+1}^{2}+(-1)^{n})} \end{eqnarray*} where $N= (-1)^{n+1}J_{n-1}^{2}J_{n+1}^{2}+J_{n+1}^{2}-J_{n-1}^{2}+(-1)^{n}J_{n}^{4}+(-1)^{n}.$ Now, we bound $N$. Using $J_{n-1}J_{n+1}= J_{n}^{2}+(-1)^{n}2^{n-1}$, we immediately get \begin{equation*} N= 2^{n+1}J_{n-1}+J_{n}^{2}+(-1)^{n+1}2^{2n-2}-J_{n-1}^{2}-2^{n}J_{n}^{2}+(-1)^{n}. \end{equation*} Elementary manipulations of the inequality (1.4) entail that \begin{eqnarray*} N &<& 2^{2n-1}+ 2^{2n-2}+2^{2n-2}-2^{2n-6}-2^{3n-4}+1 \\ &=& \frac{63}{64} 2^{2n}-2^{3n-4}+1\\ &<& 2^{2n}-2^{3n-4}+1.
\end{eqnarray*} We note that $2n < 3n-4$ for $n\geq 5$. Hence, $N< 0$ for $n\geq 5$. In other words, we have (for $n\geq 5$) \begin{equation*} \frac{1}{(-1)^{n}(J_{n-1}^{2}+J_{n}^{2})-1}<\frac{(-1)^{n}}{J_{n}^{2}} + \frac{1}{(-1)^{n+1}(J_{n}^{2}+J_{n+1}^{2})-1}. \end{equation*} Using this inequality, we can prove that \begin{equation*} \displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}^{2}}> \frac{1}{(-1)^{n}(J_{n-1}^{2}+J_{n}^{2}-1)} \end{equation*} For $n$ even, we get $(\displaystyle \sum_{k=n}^{\infty} \frac{(-1)^{k}}{J_{k}^{2}})^{-1} < J_{n-1}^{2}+J_{n}^{2}-1$. The result follows. \end{proof}
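The statements above are easy to check numerically. As a quick sanity check of Theorem 3.1 (an illustrative sketch only), the following short script verifies the floor value for the first few even $n$:
\begin{verbatim}
# Numerical check of Theorem 3.1: for even n,
# floor(1 / sum_{k>=n} (-1)^k / J_k) = 2^(n-1) - 1.
from math import floor

J = [0, 1]
for _ in range(199):          # build J_0, ..., J_200
    J.append(J[-1] + 2 * J[-2])

for n in (2, 4, 6, 8):
    s = sum((-1) ** k / J[k] for k in range(n, 200))
    assert floor(1.0 / s) == 2 ** (n - 1) - 1
print("Theorem 3.1 verified for n = 2, 4, 6, 8")
\end{verbatim}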
\section{Introduction} The Higgs-strahlung process $e^+e^-\rightarrow Z^0h^0$ offers a unique opportunity for a model-independent precision measurement of the Higgs boson mass by means of the recoil mass against the $\rm{Z^0}$, given as $M^{2}_{h^0}=s+M^{2}_{Z^{0}}-2E_{Z^0}\sqrt{s}$, where the $\rm{Z^0}$ decays to $e^+e^-$ ($e$ channel) or $\mu^+\mu^-$ ($\mu$ channel). At the same time, the Higgs production cross-section and therefore also the coupling strength at the $Z^0h^0$ vertex can be determined, $g^2 \propto \sigma = N/(\mathcal{L}\varepsilon)$. This article presents results of the studies performed for two center-of-mass energies ($\sqrt{s}$), 230 GeV and 250 GeV. The value of 230 GeV is the optimal $\sqrt{s}$ in terms of the Higgs mass resolution according to a previous study\cite{richard}, where the $e$ channel is analyzed using the detector model $\rm{LDC01Sc}$\cite{mokka} (LDC1). The value of 250 GeV is the benchmark scenario proposed in the ILD\cite{ild} LOI, where both the $e$ channel and the $\mu$ channel are analyzed using the detector models $\rm{LDCPrime\_02Sc}$\cite{mokka} (LDCP), $\rm{LDC01\_06Sc}$\cite{mokka} (LDC6) and $\rm{LDC\_GLD\_01Sc}$\cite{mokka} (LDCG). \section{Experimental Remarks} In this study, the Higgs mass is presumed to be 120 GeV, the luminosity is assumed to be $\rm{500\ fb^{-1}}$, and unpolarized beams are assumed. The event generator is PYTHIA\cite{pythia}, the simulation is performed using MOKKA\cite{mokka}, and the reconstruction is performed using MarlinReco\cite{marlin} and PandoraPFA\cite{pfa}. Both $\sqrt{s}$ analyses include Beamstrahlung, ISR and FSR. The Beamstrahlung spectrum is generated using GUINEA-PIG\cite{gp}, and the interface to PYTHIA is CALYPSO\cite{calypso}. \begin{table}[h] \centerline{\begin{tabular}{|l|c|c|l|c|} \cline{1-2} \cline{4-5} \multicolumn{2}{|c|}{\texttt{$\sqrt{s}=230GeV$ $e$ channel}} && \multicolumn{2}{c|}{\texttt{$\sqrt{s}=250GeV$ $e$ and $\mu$ channels}}\\ \cline{1-2} \cline{4-5} \texttt{Reactions} & \texttt{$\sigma$} && \texttt{Reactions} & \texttt{$\sigma$} \\ \cline{1-2} \cline{4-5} \texttt{\boldmath $Z^0h^0\rightarrow e^+e^-X$} & \texttt{\boldmath $6.3$ fb} && \texttt{\boldmath $Z^0h^0\rightarrow e^+e^-X$} & \texttt{\boldmath $7.5$ fb} \\ \cline{1-2} \cline{4-5} $e^+e^-(\gamma)$ & $5.96\times10^5$ fb && \texttt{\boldmath $Z^0h^0\rightarrow \mu^+\mu^-X$} & \texttt{\boldmath $7.5$ fb} \\ \cline{1-2} \cline{4-5} $\tau^+\tau^- \rightarrow e^+e^-4\nu$ & $146$ fb && $Z^0Z^0 \rightarrow e^+e^-f\bar{f}$ & $78.7$ fb \\ \cline{1-2} \cline{4-5} $W^+W^- \rightarrow e^+e^-2\nu$ & $181$ fb && $Z^0Z^0 \rightarrow \mu^+\mu^-f\bar{f}$ & $79.0$ fb \\ \cline{1-2} \cline{4-5} $Z^0/\gamma^*Z^0/\gamma^* \rightarrow e^+e^-f\bar{f}$ & $113$ fb & \multicolumn{3}{c}{}\\ \cline{1-2} \end{tabular}} \caption{Signals (bold) and backgrounds, and their cross-sections. Note that for the $\sqrt{s}=230$ GeV backgrounds, only the detector acceptance pre-cut $|cos\theta_{e^\pm}|<0.98$ is applied. $\mathcal{L}=500fb^{-1}$ and unpolarized beams are assumed.} \label{table:I} \end{table} The cross-sections for the signals and backgrounds are shown in Tab.~\ref{tab:xsec}. For $\sqrt{s}=230$ GeV, since only the detector acceptance cut is applied, Bhabha scattering ($e^+e^-$) gives a cross-section five orders of magnitude larger than that of the signal.
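For reference, the recoil-mass estimator quoted in the introduction is straightforward to evaluate. A minimal sketch follows; the nominal $Z^0$ mass is used, and the example $Z^0$ energy is an illustrative value chosen to give a recoil mass near 120 GeV.
\begin{verbatim}
# Recoil mass M_h^2 = s + M_Z^2 - 2*E_Z*sqrt(s); all energies in GeV.
import math

M_Z = 91.19  # nominal Z0 mass

def recoil_mass(sqrt_s, E_Z):
    m2 = sqrt_s**2 + M_Z**2 - 2.0 * E_Z * sqrt_s
    return math.sqrt(m2) if m2 > 0.0 else float("nan")

# At sqrt(s) = 250 GeV, a Z0 with E_Z ~ 112.8 GeV recoils against
# a system of about 120 GeV.
print(recoil_mass(250.0, 112.8))   # ~120.1
\end{verbatim}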
The four detector models under study are composed of: TPC, time projection chamber; VXD, 5 single or 3 double layers of vertex detector; SIT/SET, 2 cylindrical layers of silicon strips inside and outside the TPC; FTD, pixel silicon disks in the forward region; ECAL, SiW electromagnetic calorimeter; HCAL, scintillator hadronic calorimeter. The magnetic fields in the tracking system are 4 Tesla for LDC1 and LDC6, $3.5$ Tesla for LDCP, and $3$ Tesla for LDCG. The momentum resolution ($\Delta P/P^2$) is about $\rm{5 \times 10^{-5}\ GeV^{-1}}$ for all four detector models for momenta between 10 and 100 GeV\cite{track}. A cut-based electron ID is developed for the analysis at $\sqrt{s}=230\ \rm{GeV}$. The efficiency is greater than 99.5\%, and the rejection rate is 100\% for $\mu$ and larger than 98\% for $\pi$, for momenta between 20 and 80 GeV, which covers the momentum range of the $e^+/e^-$ candidates of the signal. For the $\sqrt{s}=250$ GeV analyses, MC information is employed temporarily for the lepton ID. The final-state lepton pair selection is based on a $\chi^2$ criterion: the pair whose invariant mass is closest to the $Z^0$ mass is chosen. \section{Background Rejection} \begin{wrapfigure}{r}{0.5\columnwidth} \vspace{-50pt} \centering \begin{minipage}[b]{0.25\columnwidth} \centering \includegraphics[width=0.99\columnwidth, origin=tl]{ntracks_zh.eps}\\ \end{minipage}% \begin{minipage}[b]{0.25\columnwidth} \centering \includegraphics[width=0.99\columnwidth, origin=tr]{ntracks_ee.eps} \end{minipage} \begin{minipage}[b]{0.25\columnwidth} \centering \includegraphics[width=0.99\columnwidth, origin=tr]{ntracks_tt.eps} \end{minipage}% \begin{minipage}[b]{0.25\columnwidth} \centering \includegraphics[width=0.99\columnwidth, origin=tr]{ntracks_ww.eps} \end{minipage} \caption{$N_{tracks}$ distributions of: $ZH$ top-left; $Bhabha$ top-right; $\tau\tau$ bottom-left; $WW$ bottom-right.}\label{Fig:ntk} \end{wrapfigure} The Standard Model (SM) Higgs with $\rm{M_h}\sim 120$ GeV dominantly decays to two jets\cite{higgs}. Together with the $e^+e^-$ pair from the $Z^0$ decay, more than $95\%$ of the signal events have a multiplicity larger than 4 in the final state. For the backgrounds $Bhabha$, $\tau\tau$ and $WW$, the multiplicity of the visible final states is $2$. Due to bremsstrahlung, one electron may lead to several reconstructed tracks. Therefore, for the $\sqrt{s}=230$ GeV $e$ channel analysis, a cut of $N_{tracks}>6$ is applied, which rejects all three backgrounds completely at the cost of a $\sim 8\%$ reduction of the signal. The $N_{tracks}$ distributions of the signal and the three backgrounds are shown in Fig.~\ref{Fig:ntk}. A Likelihood method is applied for the rejection of the $ZZ$ background at both center-of-mass energies. \begin{figure}[h] \centering \begin{minipage}[b]{0.3\columnwidth} \centering \includegraphics[width=0.99\columnwidth]{acol.eps} \end{minipage} \begin{minipage}[b]{0.3\columnwidth} \centering \includegraphics[width=0.99\columnwidth]{ctha_2d_zh.eps} \end{minipage} \begin{minipage}[b]{0.3\columnwidth} \centering \includegraphics[width=0.99\columnwidth]{ctha_2d_zz.eps} \end{minipage} \caption{PDFs of acollinearity (left) and $cos\theta_{e^-}$ vs. $cos\theta_{e^+}$ of the signal (center) and $ZZ$ (right)} \vspace{-6pt} \label{Fig:pdf} \end{figure} The Likelihood of an event to be the signal is defined as $L_S=\prod{P^S_i}$, where $P^S_i$ is the probability for the event to be signal according to the signal PDF of the $i$th selection variable. Similarly, the Likelihood of an event to be the background is defined as $L_B=\prod{P^B_i}$. Thereafter, the Likelihood Fraction is defined as $f_{L}=L_S/(L_S+L_B)$, which lies within $(0,1)$.
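The selection logic can be summarized in a few lines. The following is a minimal sketch, in which the per-variable PDFs are assumed to be supplied as normalized lookup functions (e.g., histograms of the acollinearity and of the two-dimensional $cos\theta_{e^-}$ vs. $cos\theta_{e^+}$ distribution); the cut value anticipates the optimization described below.
\begin{verbatim}
# Likelihood-fraction selection: L_S and L_B are products of per-variable
# PDF values; an event is kept if f_L = L_S / (L_S + L_B) exceeds the cut.
def likelihood_fraction(values, pdfs_sig, pdfs_bkg):
    L_S = L_B = 1.0
    for x, p_s, p_b in zip(values, pdfs_sig, pdfs_bkg):
        L_S *= p_s(x)   # probability of x under the signal PDF
        L_B *= p_b(x)   # probability of x under the background PDF
    return L_S / (L_S + L_B) if (L_S + L_B) > 0.0 else 0.0

def select(events, pdfs_sig, pdfs_bkg, cut=0.67):
    """Keep events whose likelihood fraction exceeds the cut."""
    return [ev for ev in events
            if likelihood_fraction(ev, pdfs_sig, pdfs_bkg) > cut]
\end{verbatim}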
Similarly, the Likelihood of an event to be the background is defined as $L_B=\prod{P^B_i}$. Thereafter, the Likelihood Fraction is defined as $f_{L}=L_S/(L_S+L_B)$, which is within $(0,1)$. \begin{wrapfigure}{r}{0.56\columnwidth} \centering \begin{minipage}[b]{0.27\columnwidth} \centering \includegraphics[width=0.999\columnwidth, origin=tl]{nvts_lh.eps} \end{minipage} \begin{minipage}[b]{0.27\columnwidth} \centering \includegraphics[width=0.999\columnwidth, origin=tr]{s_sb_lh.eps} \end{minipage} \caption{Number of signal and background events vs. $f_L$ cut (top), and $S/\sqrt{S+B}$ vs. $f_L$ cut (bottom).} \vspace{-6pt} \label{Fig:ssb} \end{wrapfigure} In order to be less dependent on the physics presumptions and to obtain a flat background distribution after its rejection, only two angular variables are employed in this study: acollinearity and $\cos\theta_{e^-}$ vs. $\cos\theta_{e^+}$. The PDFs of these two variables for the $\sqrt{s}=230$ GeV $e$ channel are shown in Fig. \ref{Fig:pdf}. \begin{wraptable}{r}{0.5\columnwidth} \centerline{\begin{tabular}{|c|c|c|c|} \hline Detectors & LDCP & LDC6 & LDCG\\ \hline $e^+e^-X$ & $39.5\%$ & $37.2\%$ & $39.5\%$ \\ \hline $\mu^+\mu^-X$ & $55.5\%$ & $52.7\%$ & $35.5\%$ \\ \hline \end{tabular}} \caption{Efficiencies of signal selection of $\sqrt{s}$ 250 GeV $e$ and $\mu$ channels, determined within $M_h$ window $\rm{118-135\ GeV}$.} \vspace{-6pt} \label{tab:ssb} \end{wraptable} The number of signal and background events vs. the $f_L$ cut, and the significance $S/\sqrt{S+B}$ vs. the $f_L$ cut, are shown in Fig. \ref{Fig:ssb}; the $N_{tracks}>6$ cut is already applied, and both are determined within the fitting range of $M_h$ from 118 to 135 GeV. According to the maximum of the significance, the $f_L$ cut is determined to be $f_L>0.67$, resulting in an overall signal selection efficiency of $55.5\%$ and a fraction of remaining $ZZ$ background of $3.5\%$ for the $\sqrt{s}=230$ GeV analysis. For the $\sqrt{s}=250$ GeV analyses, since only the $ZZ$ background is considered, the same Likelihood method as at $\sqrt{s}=230$ GeV is applied for the rejection of $ZZ$. The resulting efficiencies are shown in Tab. \ref{tab:ssb}. \section{Fitting Methods} Following the fitting methods of previous studies\cite{jc}\cite{pwa}\cite{wolf}\cite{martin}\cite{manqi}, the \emph{Gaussian core for the Peak with Exponential complement for the Tail} (GPET) method is chosen and improved in this study, such that both the function itself and its first derivative are continuous at all points, as shown in Eq.~\ref{eq:fit}. It is a piecewise function: the left part is a pure Gaussian, the right part is a sum of a Gaussian and an Exponential with contribution fractions $\beta$ and $1-\beta$ respectively, and a factor $k$ is introduced in order to keep the maximum ($x_0$) covered by the pure Gaussian. Fig.~\ref{Fig:sgf} shows an example fit to the signal using this formula, with $\chi^2/Ndf \sim 1$. \begin{equation} \begin{array}{ll} f(x)=N \left\{ \begin{array}{ll} e^{-\frac{(x-x_0)^2}{2\sigma^2}} &: \frac{x-x_0}{\sigma} \le k \\ \beta e^{-\frac{(x-x_0)^2}{2\sigma^2}}+(1-\beta) e^{-(x-x_0) \frac{k}{\sigma}} e^{\frac{k^2}{2}} &: \frac{x-x_0}{\sigma} > k \end{array} \right.
\end{array} \label{eq:fit} \end{equation} \begin{wrapfigure}{r}{0.36\columnwidth} \centering \includegraphics[width=0.36\columnwidth]{signal_fitting.eps} \caption{Example fit using the GPET formula.} \label{Fig:sgf} \end{wrapfigure} The GPET fit provides an explicit description of the final spectrum including the detector response. Together with the Higgs mass, the mass resolution can also be measured by this method. However, due to the uncertainty of the detector response, the maximum of the final recoil mass spectrum is shifted to a value larger than the physics presumption ($M_h^{mc}=120\rm{\ GeV}$ in this study). A correction for the detector effects is required in order to recover the true mass. Since the shift ($\Delta{x}=x_0-M_h^{mc}$) comes from the detector effects, an explicit measurement of this shift from full detector simulation is possible and acceptable. For the $\sqrt{s}=230$ GeV analysis, besides $M_h^{mc}=120\rm{\ GeV}$, simulations and reconstructions for $M_h^{mc}=$ 117, 118, 119, 121, 122 and 123 GeV are performed as well, and the shifts ($\Delta{x}$) are determined subsequently. The mean of the measured shifts, $\overline{\Delta{x}}=0.270\rm{\ GeV}$, is taken as the correction, and the standard deviation $\sigma(\Delta{x})=0.021\rm{\ GeV}$, which is the uncertainty of the measurement of the shifts, is taken as the systematic error for the Higgs mass measurement. For the $\sqrt{s}=250$ GeV analyses, since no simulations are available for $M_h^{mc}$ besides 120 GeV, the shifts measured from the $M_h^{mc}=120\rm{\ GeV}$ simulations are taken as the corrections, while the statistical errors from the measurements of the shifts are taken as the systematic errors. This systematic error can be reduced by increasing the statistics of the MC samples used in the determination of the shifts. \begin{wrapfigure}{r}{0.56\columnwidth} \centering \includegraphics[width=0.56\columnwidth]{final_fitting_230_e.eps} \caption{The recoil mass spectrum of signal and backgrounds of $\sqrt{s}$ $230GeV$ $e$ channel.} \label{Fig:ft230} \vspace{-36pt} \end{wrapfigure} To describe the background spectrum, a Chebyshev polynomial with two coefficients is employed. \section{Results} The final fit to the signal plus background is performed with the mass ($M_h=x_0-\Delta{x}$), the mass resolution ($\sigma$ in the pure Gaussian) and the number of signal events as free parameters. A typical result is shown in Fig.\ref{Fig:ft230}, and the fitting results including the systematic errors of $M_h$ are listed in Tab.\ref{tab:rst}.
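For reference, the GPET shape of Eq.~\ref{eq:fit} is straightforward to evaluate numerically. The following minimal sketch (with hypothetical parameter values chosen only to exhibit the shape; it is not the fitting code used in this analysis) implements the piecewise form and checks continuity at the matching point $x-x_0=k\sigma$:
\begin{verbatim}
import math

def gpet(x, N, x0, sigma, k, beta):
    # Left of x0 + k*sigma: pure Gaussian core.
    # Right of it: beta * Gaussian + (1-beta) * exponential tail,
    # with the exp(k^2/2) factor matching the two pieces.
    u = (x - x0) / sigma
    if u <= k:
        return N * math.exp(-0.5 * u * u)
    return N * (beta * math.exp(-0.5 * u * u)
                + (1.0 - beta) * math.exp(-(x - x0) * k / sigma)
                * math.exp(0.5 * k * k))

# Continuity check at the junction (hypothetical parameters):
x0, sigma, k, beta = 120.0, 0.4, 1.2, 0.6
xj = x0 + k * sigma
print(gpet(xj - 1e-9, 1.0, x0, sigma, k, beta),
      gpet(xj + 1e-9, 1.0, x0, sigma, k, beta))  # equal up to rounding
\end{verbatim}
The first derivative also matches at the same point by construction, which is the continuity property quoted above.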
\begin{table}[h] \vspace{-6pt} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $\sqrt{s}$ & Detector & Chan- & $M_h$ $(GeV)$ & $\sigma$ $(fb)$ & $\delta_m$ $(MeV)$ \\ $(GeV)$ & Model & nel & $(\pm stat.~err.\pm sys.~err.)$ & $(\pm stat.~err.)$& $(\pm stat.~err.)$\\ \hline $230$ & LDC1& $e$ & $120.022\pm0.039\pm0.021$& $6.41\pm0.43(6.7\%)$ & $360\pm17$ \\ \hline & LDCP & $e$ & $119.973\pm0.047\pm0.039$& $7.82\pm0.52(6.6\%)$ & $540\pm25$ \\ \cline{3-6} & $$ & $\mu$ & $120.019\pm0.023\pm0.016$& $7.78\pm0.28(3.6\%)$ & $500\pm12$ \\ \cline{2-6} $250$ & LDC6 & $e$ & $119.963\pm0.047\pm0.044$& $7.93\pm0.49(6.2\%)$ & $560\pm28$ \\ \cline{3-6} & $$ & $\mu$ & $119.994\pm0.023\pm0.016$& $7.45\pm0.27(3.6\%)$ & $550\pm12$ \\ \cline{2-6} & LDCG & $e$ & $119.973\pm0.051\pm0.044$& $7.24\pm0.52(7.2\%)$ & $490\pm27$ \\ \cline{3-6} & $$ & $\mu$ & $120.003\pm0.029\pm0.020$& $7.45\pm0.32(4.3\%)$ & $530\pm15$ \\ \hline \end{tabular} \caption{Results of Higgs mass ($M_h$), cross-section ($\sigma$) and mass resolution ($\delta_m$). } \label{tab:rst} \end{table} \section{Conclusion and Outlook} According to the analysis results shown in Tab.\ref{tab:rst}, with the same luminosity, the statistical error of the $\mu$ channel is about half that of the $e$ channel. This is because the bremsstrahlung of electrons reduces the statistics at the maximum of the recoil mass spectrum. The results for the different detector models are nearly the same, since the momentum resolutions are roughly the same\cite{track}. Improvements of this analysis can be expected with polarized beams, which may increase the cross-section of the Higgs-strahlung reaction; a left-handed polarized positron beam combined with a right-handed polarized electron beam may largely suppress the WW background. For the rejection of Bhabha scattering and $ee\rightarrow \mu\mu$ in a model-independent analysis, the $P_T$ of the ISR photon can be used to balance the $P_T$ of the di-lepton system. Taking the Higgs mass to be 120 GeV, the $ZZ$ background can be further reduced by reconstructing both $Z$ bosons. Finally, the fitting range and fitting method may need to be optimized.
\section{Introduction} A branching process in a random environment (BPRE) is a natural and important generalisation of the Galton-Watson process, where the reproduction law varies according to a random environment indexed by time. It was introduced for the first time in Smith and Wilkinson \cite{smith} to model the growth of a population subject to a random environment. For background concepts and basic results concerning a BPRE we refer to Athreya and Karlin \cite{athreya1971branching, athreya1971branching2}. In the critical and subcritical regimes the branching process dies out, and research has mostly concentrated on the survival probability and conditional limit theorems, see e.g. Afanasyev, B\"oinghoff, Kersting, Vatutin \cite{afanasyev2012limit, afanasyev2014conditional}, Vatutin \cite{Va2010}, Vatutin and Zheng \cite{VaZheng2012}, and the references therein. In the supercritical case, a great deal of current research has been focused on large deviations, see Bansaye and Berestycki \cite{bansaye2009large}, Bansaye and B\"oinghoff \cite{bansaye2011upper, bansaye2013lower, bansaye2014small}, B\"oinghoff and Kersting \cite{boinghoff2010upper}, Huang and Liu \cite{liu}, Nakashima \cite{Nakashima2013lower}. In the particular case when the offspring distribution is geometric, precise asymptotics can be found in B\"oinghoff \cite{boinghoff2014limit}, Kozlov \cite{kozlov2006large}. A closely linked and important issue is the asymptotic behavior of the distribution of a BPRE $(Z_n),$ i.e. the limit of $\mathbb{P} ( Z_n = j | Z_0=k )$ as $n \to \infty$, for fixed $j \geqslant 1$ when the process starts with $k\geqslant 1$ initial individuals. For the Galton-Watson process, the asymptotic behavior is well known and can be found in the book by Athreya \cite{athreya}. Motivated by the lower large deviation principle of a BPRE, Bansaye and B\"oinghoff have shown in \cite{bansaye2014small} that, for any fixed $j\geqslant 1$ and $k \geqslant 1$, it holds that $n^{-1} \log \mathbb{P} ( Z_n = j | Z_0 = k) \to -\rho $ as $ n \to \infty$, where $\rho >0$ is a constant. This result characterizes the exponential decrease of the probability $\mathbb{P} ( Z_n = j | Z_0 = k)$ in the general supercritical case, when extinction can occur. However, it holds only on a logarithmic scale, and the constant $\rho$ is not explicit, except when the reproduction law is fractional linear, for which $\rho$ is explicitly computed in \cite{bansaye2014small}. Sharper asymptotic results for the fractional linear case can be found in \cite{boinghoff2014limit}. In the present paper, we improve the results of \cite{bansaye2014small} and extend those of \cite{boinghoff2014limit} by giving an asymptotic equivalent of the probability $\mathbb{P} ( Z_n =j | Z_0 =k)$ as $n \to \infty$, provided that each individual gives birth to at least one child. These results are important for understanding the asymptotic law of the process, and are useful for obtaining sharper asymptotic large deviation results. We also improve the result of \cite{liu} about the critical value for the harmonic moment of the limit variable $W=\lim_{n\to\infty}\frac{Z_n}{\mathbb E (Z_n|\xi)}.$ Let us briefly explain the findings of the paper.
Assume that $\mathbb{P} (Z_1=0)=0.$ From Theorem \ref{thm small value probability 2} of the paper it follows that when $Z_0=1,$ \begin{equation} \label{into-eq small val prob} \mathbb{P} \left( Z_n = j \right) \underset{n \to \infty}{\sim} \gamma ^n q_{j} \quad \text{with } \quad \gamma=\mathbb{P} (Z_1=1)>0, \end{equation} where $q_{j} \in (0, + \infty )$ can be computed as the unique solution of some recurrence equations; moreover, the generating function $Q(t)=\sum_{j=1}^{\infty} q_j t^j$ has radius of convergence equal to $1$ and is characterized by the functional equation \begin{equation} \label{intro-eq func Q} \gamma Q(t) = \mathbb E Q(f_0 (t)),\quad t \in [0,1), \end{equation} where $f_0(t)=\sum_{i=1}^{\infty} p_i ( \xi_0 ) t^i$ is the conditional generating function of $Z_1$ given the environment. These results extend the corresponding results for the Galton-Watson process (see \cite{athreya}). They also improve and complete the results in \cite{bansaye2014small} and \cite{boinghoff2014limit}: it was proved in \cite{bansaye2014small} that $\frac{1}{n}\log \mathbb{P} \left( Z_n = j \right) \to \log \gamma,$ and in \cite{boinghoff2014limit} that $\mathbb{P} \left( Z_n = 1 \right) \underset{n \to \infty}{\sim} \gamma ^n q_{1}$ in the fractional linear case. In the proofs of the above results we make use of Theorem \ref{thm harmonic moments W} which shows that, with $m_0=\mathbb{E} _{\xi} Z_1,$ we have, for any fixed $a>0,$ \begin{equation}\label{intro-eq mom harm} \mathbb{E} W^{-a} < \infty \quad \text{if and only if} \quad \mathbb{E} \left[ p_1 (\xi_0) m_0^a \right] <1, \end{equation} under a simple moment condition $\mathbb{E} \left[ m_0^{p} \right] < \infty $ for some $p >a,$ which is much weaker than the boundedness condition used in \cite[Theorem 1.4]{liu} (see \eqref{condition H} below). For the proof of Theorem \ref{thm harmonic moments W} our argument consists of two steps. In the first step we prove the existence of the harmonic moment of some order $a>0$ using the functional relation \eqref{eq relation phi xi 1}. The key argument to approach the critical value is in the second step, which is based on the method developed in \cite[Lemma 4.1]{liu1999asymptotic} for obtaining the decay rate of the Laplace transform $\phi(t)=\mathbb{E} e^{-tW}$ as $t \to \infty,$ starting from a functional inequality of the form \begin{equation} \phi (t) \leqslant q \mathbb{E} \phi ( Yt) + C t^{-a}, \label{basic001} \end{equation} where $Y$ is a positive random variable. To prove \eqref{basic001} we use a recursive procedure for branching processes starting with $k$ individuals and choosing $k$ large enough. The intuition behind this consideration is that as the number of starting individuals $k$ becomes larger, the decay rate of $\phi_k(t)=\mathbb{E} \left[ e^{-tW} | Z_0=k \right]$ as $t \to \infty$ is faster, which leads to the desired functional inequality. In the proof of Theorem \ref{thm small value probability 2}, the equivalence relation \eqref{into-eq small val prob} and the recursive equations for the limit values $(q_j)$ come from simple monotonicity arguments. The difficulty is to characterize the sequence $(q_j)$ by its generating function $Q$.
To this end, we first calculate the radius of convergence of $Q$ by determining the asymptotic behavior of the normalized harmonic moments $\mathbb{E} Z_n^{-r}/\gamma^{n}$ as $n \to\infty$ for some $r>0$ large enough and by using the fact that $\sum_{j=1}^{\infty} j^{-r} q_j = \lim_{n\to \infty} \mathbb{E} Z_n^{-r}/\gamma^{n}.$ We then show that the functional equation \eqref{intro-eq func Q} has a unique solution subject to an initial condition. The rest of the paper is organized as follows. The main results, Theorems \ref{thm harmonic moments W} and \ref{thm small value probability 2}, are presented in Section \ref{secMain}. Their proofs are given in Sections \ref{sec harmonic moments W} and \ref{sec small value non extinction}. \section{Main results}\label{secMain} A BPRE $(Z_n)$ can be described as follows. The random environment is represented by a sequence $\xi = (\xi_0, \xi_1 , ... ) $ of independent and identically distributed random variables (i.i.d.\ r.v.'s), whose realizations determine the probability generating functions \begin{equation} f_n (t) = \mathnormal{f} (\xi_n ,t) = \sum_{i=0}^{\infty} p_i ( \xi_n ) t^i, \quad t \in [0,1], \quad p_i ( \xi_n ) \geqslant 0, \quad \sum_{i=0}^{ \infty} p_i (\xi_n) =1. \label{defin001} \end{equation} The branching process $(Z_n)_{n \geqslant 0}$ is defined by the relations \begin{equation} \label{relation recurrence Zn} Z_0 = 1, \quad Z_{n+1} = \sum_{i=1}^{Z_n} N_{n, i}, \quad \text{for} \quad n \geqslant 0, \end{equation} where $N_{n,i} $ is the number of children of the $i$-th individual of the generation $n$. Conditionally on the environment $\xi $, the r.v.'s $N_{n,i} $ (i = 1, 2, ...) are independent of each other with common probability generating function $\mathnormal{f}_n,$ and also independent of $Z_n$. In the sequel we denote by $\mathbb{P}_{\xi}$ the \textit{quenched law}, i.e.\ the conditional probability when the environment $\xi$ is given, and by $\tau $ the law of the environment $\xi$. Then $\mathbb{P}(dx,d\xi) = \mathbb{P}_{\xi}(dx) {\tau}(d\xi)$ is the total law of the process, called \textit{annealed law}. The corresponding quenched and annealed expectations are denoted respectively by $\mathbb{E}_{\xi}$ and $\mathbb{E}$. We also denote by $\mathbb{P}_k$ and $\mathbb{E}_k$ the corresponding probability and expectation starting with $k$ individuals. For $n \in \mathbb{N}$, the probability generating function of $Z_n$ is \begin{equation} \label{Gn} G_n (t) = \mathbb{E} t^{Z_n} = \mathbb{E} \left[ f_0 \circ \ldots \circ f_{n-1} (t) \right] = \mathbb{E} \left[ g_n (t) \right], \end{equation} where $g_n (t) = f_0 \circ \ldots \circ \ f_{n-1} (t)$ is the conditional probability generating function of $Z_n$ when the environment $\xi$ is given. It follows from \eqref{relation recurrence Zn} that the probability generating function $G_{k,n}$ of $Z_n$ starting with $k$ individuals is \begin{equation} \label{Gkn} G_{k,n} (t) = \mathbb{E}_k t^{Z_n} = \mathbb{E} \left[ g_n^k (t) \right]. \end{equation} We also define, for $n\geqslant 0$, \begin{equation*} m_n = m ( \xi_n )= \sum_{i=0}^\infty i p_i ( \xi_n ) \quad \text{and} \ \ \Pi_n = \mathbb{E}_{\xi} Z_n = m_0 ... m_{n-1}, \end{equation*} where $m_n $ represents the average number of children of an individual of generation $n$ when the environment $\xi $ is given. Let \begin{equation} \label{Wn} W_n =\frac{Z_n}{\Pi_n} , \quad n\geqslant 0, \end{equation} be the normalized population size. 
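For intuition, the recursion \eqref{relation recurrence Zn} is easy to simulate. The following minimal sketch (in Python, with a toy two-point environment chosen purely for illustration; it is not taken from the paper) draws an i.i.d.\ environment and evolves $Z_n$ together with $W_n=Z_n/\Pi_n$:
\begin{verbatim}
import random

# Toy two-point environment (hypothetical, for illustration only):
#   env A with prob 1/2: offspring law p_1 = p_2 = 1/2  (mean 3/2)
#   env B with prob 1/2: offspring law p_2 = 1          (mean 2)
# Note p_0(xi_0) = 0 a.s., so the process never dies out.
MEAN = {"A": 1.5, "B": 2.0}

def offspring(env):
    if env == "A":
        return 1 if random.random() < 0.5 else 2
    return 2

def simulate(n):
    """One realization of (Z_n, W_n) with Z_0 = 1."""
    z, pi = 1, 1.0
    for _ in range(n):
        env = "A" if random.random() < 0.5 else "B"  # xi_n, i.i.d.
        z = sum(offspring(env) for _ in range(z))    # all share xi_n
        pi *= MEAN[env]                              # Pi_n = m_0...m_{n-1}
    return z, z / pi

samples = [simulate(12)[1] for _ in range(2000)]
print(sum(samples) / len(samples))  # stays close to E[W_n] = 1
\end{verbatim}
For this toy environment $\mu=\mathbb{E}\log m_0>0$, so the process is supercritical, and, as recalled below, $(W_n)$ is a martingale with $\mathbb{E} W_n = 1$, which is why the sample mean of the simulated $W_n$ stays close to $1$.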
It is well known that under $\mathbb{P}_{\xi},$ as well as under $\mathbb{P},$ the sequence $(W_n)_{n \geqslant 0} $ is a non-negative martingale with respect to the filtration $$\mathcal{F}_n = \sigma \left(\xi, N_{j,i} , 0 \leqslant j \leqslant n-1, i = 1,2 \ldots \right), $$ where by convention $\mathcal{F}_0 = \sigma(\xi)$. Then the limit $W = \lim_{n\to \infty} W_n $ exists $\mathbb{P}$ - a.s. and $\mathbb{E} W \leqslant 1 $. We shall assume that \begin{equation*} \mu := \mathbb{E} \log m_0 \in (0, \infty ), \end{equation*} which implies that the BPRE is supercritical and that \begin{equation} \gamma := \mathbb{P} ( Z_1=1) \in [0,1). \end{equation} With the extra condition $\mathbb E |\log (1-p_0(\xi_0))| <\infty$ (see \cite{smith}), the population size tends to infinity with positive probability. We also assume in the whole paper that each individual gives birth to at least one child, i.e. \begin{equation} \label{condition p0=0} p_0(\xi_0) = 0 \quad a.s. \end{equation} Therefore, under the condition \begin{equation} \label{CN CV L1 W} \mathbb{E} \frac{Z_1}{m_0} \log^+ Z_1 < \infty , \end{equation} the martingale $(W_n)$ converges to $W$ in $L^1 (\mathbb{P})$ (see e.g. \cite{tanny1988necessary}) and \[ \mathbb{P} (W>0) = \mathbb{P} (Z_n \to \infty) =1. \] Our first result concerns the harmonic moments of the r.v.\ $W$. \begin{theorem} \label{thm harmonic moments W} Assume that there exists a constant $p>0$ such that $\mathbb{E} \left[ m_0^{p} \right] < \infty $. Then for any $a\in (0,p)$, \begin{equation*} \mathbb{E}_k W^{-a} < \infty \quad \text{if and only if} \quad \mathbb{E} \left[ p_1^k (\xi_0) m_0^a \right] <1. \end{equation*} \end{theorem} From Theorem \ref{thm harmonic moments W} we get the following corollary. \begin{corollary} \label{cor harmonic moments W} Let $a_k>0$ be the solution of the equation \begin{equation} \label{eq moment harmonique critique ak} \mathbb{E} [ p_1^k m_0^{a_k} ] =1 . \end{equation} Assume that $\mathbb{E}m_0^{a_k} < \infty$. Then, \begin{equation*} \left\{ \begin{array}{l} \mathbb{E}_k W^{-a} < \infty \quad \text{for}\quad a \in [0, a_k), \\ \mathbb{E}_k W^{-a} =\infty \quad \text{for}\quad a \in [a_k, \infty). \end{array} \right. \end{equation*} \end{corollary} The solution $a_k$ of the equation \eqref{eq moment harmonique critique ak} is the critical value for the existence of harmonic moments of the r.v.\ $W$. Note that, when the process starts with one individual, the critical value $a_1$ for the harmonic moments of $W$ has been found in Theorem 1.4 of \cite{liu} under the more restrictive condition \begin{equation} \label{condition H} A_1 \leqslant m_0 \quad \text{and} \quad \sum_{i=1}^\infty i^{1+ \delta} p_i (\xi_0) \leqslant A^{1+ \delta} \quad a.s., \end{equation} where $\delta>0$ and $1<A_1 <A$ are some constants. Theorem \ref{thm harmonic moments W} and Corollary \ref{cor harmonic moments W} generalize the result of \cite{liu}, in the sense that we consider $k$ initial individuals rather than just one and that the boundedness condition \eqref{condition H} is relaxed to the simple moment condition $\mathbb{E} \left[ m_0^{p} \right] < \infty $. The next result gives an equivalent as $n \to \infty$ of the probability $\mathbb{P}_k \left( Z_n = j \right) = \mathbb{P} \left( Z_n = j | Z_0=k \right)$, with $k \in \mathbb{N}^*$ and $j \geqslant k,$ in the case when $\mathbb{P}(Z_1=1)>0$. The last condition implies that, for $k \geqslant 1$, \begin{equation} \label{eq gamma_k} \gamma_k = \mathbb{P}_k (Z_1=k) = \mathbb{E} [ p_1^k (\xi_0) ] >0. 
\end{equation} Define $r_k$ as the solution of the equation \begin{equation} \label{eq rk} \gamma_k = \mathbb{E} m_0^{-r_k}. \end{equation} \begin{theorem} Assume that $\mathbb{P}(Z_1=1)>0$. For any $k\geqslant 1$ the following assertions hold. \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item[a)] \label{thm small value probability 2} For any accessible state $ j \geqslant k$ in the sense that $\mathbb{P}_k (Z_l=j)>0$ for some $l \geqslant 0$, we have \begin{equation} \label{small value asymptotic 2} \mathbb{P}_k \left( Z_n = j \right) \underset{n \to \infty}{\sim} \gamma_k^n q_{k,j}, \end{equation} where $q_{k,k}= 1$ and, for $j>k$, $q_{k,j} \in (0, + \infty )$ is the solution of the recurrence relation \begin{equation} \label{relation rec qkj} \gamma_k q_{k,j} = \sum_{i=k}^j p(i, j) q_{k, i}, \end{equation} with $q_{k,i}=0$ for any non-accessible state $i$, i.e.\ $\mathbb{P}_k (Z_l=i)=0$ for all $l \geqslant 0$. \item[b)] Assume that there exists $\varepsilon>0$ such that $\mathbb{E} [ m_0^{r_k+ \varepsilon} ] < \infty $. Then, for any $r>r_k$, we have $ \sum_{j=k}^{\infty} j^{-r} q_{k,j} < \infty$. In particular the radius of convergence of the power series \begin{equation} Q_k (t) = \sum_{j=k}^{+ \infty} q_{k,j} t^j \end{equation} is equal to 1. \item[c)] For all $t \in [0, 1)$ and $k \geqslant 1$, we have, \begin{equation} \label{cv Qnk ->Qk} \frac{G_{k, n} (t)}{\gamma^n_k} \uparrow Q_k(t) \ \ \text{as} \ \ n \to \infty, \end{equation} where $G_{k,n}$ is the probability generating function of $Z_n$ when $Z_0=k$, defined in \eqref{Gkn}. \item[d)] $Q_k (t)$ is the unique power series which verifies the functional equation \begin{equation} \label{relation Q_k} \gamma_k Q_k (t) = \mathbb{E} \left[ Q_k ( f_0 (t) ) \right], \ \ t\in [0,1), \end{equation} with the condition $Q_k^{(k)} (0) = k!$ (i.e.\ $q_{k,k}=1$). \end{enumerate} \end{theorem} Part a) improves the bound $\mathbb{P} \left( Z_n \leqslant j \right) \leqslant n^j \gamma^n $ obtained in \cite{bansaye2009large} (Lemma 7) for a BPRE with $\mathbb{P}(Z_1=0)=0$. Furthermore, Theorem \ref{thm small value probability 2} extends the results of \cite{athreya} for the Galton-Watson process, with some significant differences. Indeed, when the environment is random and non-degenerate, we have, for $k\geqslant 2,$ $ G_{k,1} (t) = \mathbb{E} f_0^k (t) \neq G_1^k (t)$ in general, which implies that $Q_k (t) \neq Q^k (t)$, whereas we have the relation $Q_k (t) = Q^k (t)$ for the Galton-Watson process. Theorem \ref{thm small value probability 2} also improves the results of \cite{bansaye2014small} (Theorem 2.1), where it has been proved that for a general supercritical BPRE \begin{equation} \label{const-rho-001} \lim_{n\to\infty}\frac{1}{n} \log \mathbb{P}_k \left( Z_n = j \right) = -\rho <0. \end{equation} Our result is sharper in the case where $\mathbb{P} \left( Z_1 = 0 \right) =0$. Moreover, in the case where $\mathbb{P} \left( Z_1 = 0 \right) =0$, it was mistakenly stated in \cite{bansaye2014small} that $\lim_{n\to\infty} \frac{1}{n} \log \mathbb{P}_k \left( Z_n = j \right) = k \log \gamma$, whereas the correct asymptotic is $$\lim_{n\to\infty}\frac{1}{n} \log \mathbb{P}_k \left( Z_n = j \right) = \log \gamma_k.$$ A numerical illustration of \eqref{small value asymptotic 2} is given below, after which we discuss the particular fractional linear case.
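For this illustration we reuse the hypothetical two-point environment of the simulation sketch above (environment A with probability $1/2$ and $p_1=p_2=1/2$; environment B with probability $1/2$ and $p_2=1$). For that toy model $\gamma_1=1/4$, $p(1,2)=3/4$ and $p(2,2)=1/8$, so the recurrence \eqref{relation rec qkj} gives $q_{1,2}=(3/4)/(1/4-1/8)=6$, and the ratios $\mathbb{P}_k(Z_n=j)/\gamma_k^n$ can be estimated by Monte Carlo:
\begin{verbatim}
import random

def step(z):
    # One generation: draw the environment, then z offspring counts.
    if random.random() < 0.5:        # env A: p_1 = p_2 = 1/2
        return sum(1 if random.random() < 0.5 else 2 for _ in range(z))
    return 2 * z                     # env B: p_2 = 1

def ratio(k, j, n, reps=200_000):
    # Monte Carlo estimate of P_k(Z_n = j) / gamma_k^n,
    # with gamma_k = E[p_1(xi_0)^k] = (1/2) * (1/2)^k here.
    gamma_k = 0.5 ** (k + 1)
    hits = 0
    for _ in range(reps):
        z = k
        for _ in range(n):
            z = step(z)
            if z > j:                # Z_n is non-decreasing since p_0 = 0
                break
        hits += (z == j)
    return hits / reps / gamma_k ** n

for n in (1, 2, 3, 4):
    print(n, ratio(1, 2, n))  # exact values: 3, 4.5, 5.25, 5.625 -> 6
\end{verbatim}
The ratios increase monotonically in $n$ towards $q_{1,2}$, in line with part a).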
The reproduction law of a BPRE is said to be fractional linear if \begin{equation} \label{loi linéaire fractionnaire} p_0 (\xi_0) = a_0, \quad p_k (\xi_0) = \frac{(1-a_0)(1-b_0)}{b_0} b_0^k, \end{equation} with generating function $f_0$ given by \begin{equation*} \label{eq f cas lineraire fractionnaire} f_0(t) = a_0 + \frac{(1-a_0)(1-b_0)t}{1-b_0t}, \end{equation*} where $a_0\in [0,1),$ $b_0 \in (0,1)$, with $a_0+b_0 \leqslant 1 $, are random variables depending on the environment $\xi_0$. In this case, the mean of the offspring distribution is given by \begin{equation*} \label{m_0 lineraire fractionnaire} m_0 = \frac{1-a_0}{1-b_0}. \end{equation*} The constant $\rho$ in \eqref{const-rho-001} was computed in \cite{bansaye2014small}: with $X= \log m_0,$ \begin{equation*} \label{rho cas LF} \rho = \left\{ \begin{array}{l l l l} - \log \mathbb{E} [ e^{-X} ] & \text{if}& \ \mathbb{E} [ X e^{-X} ] \geqslant 0 & (\text{intermediately and}\\ &&& \ \ \text{strongly supercritical case}), \\ - \log \inf_{\lambda \geqslant 0} \mathbb{E} [ e^{-\lambda X} ] & \text{if}& \ \mathbb{E} [ X e^{-X} ] < 0 & (\text{weakly supercritical case}). \end{array} \right. \end{equation*} Moreover, precise asymptotic results for the strongly and intermediately supercritical case can be found in \cite{boinghoff2014limit}, where the following assertions are proved: \begin{enumerate} \item if $\mathbb{E} [X e^{-X} ] > 0$ (strongly supercritical case), \[ \mathbb{P} (Z_n=1) \sim \nu \left( \mathbb{E} [e^{-X} ] \right)^n; \] \item if $\mathbb{E} [X e^{-X} ] = 0$ (intermediately supercritical case), \[ \mathbb{P} (Z_n=1) \sim \theta \left( \mathbb{E} [e^{-X} ] \right)^n l(n) n^{-(1-s)}, \] \end{enumerate} with $\theta$, $\nu$, $s$ positive constants and $l(\cdot)$ a slowly varying function. In the particular case where $a_0=0$, Theorem \ref{thm small value probability 2} recovers Theorem 2.1.1 of \cite{boinghoff2014limit} with $p_1 (\xi_0) = 1/m_0$, $X= \log m_0 >0$ and $\mathbb{E} \left[X e^{-X} \right] >0$. Therefore the process is strongly supercritical and $ \mathbb{P} (Z_n=1) \sim \nu \left( \mathbb{E} [e^{-X} ] \right)^n = \gamma^n. $ However, since we assume $\mathbb{P}(Z_1=0)=0$, our result does not distinguish the two asymptotic regimes stated above for the fractional linear case. The study of the general case is a challenging problem which still remains open. \section{Harmonic moments of $W$} \label{sec harmonic moments W} In this section we prove Theorem \ref{thm harmonic moments W}. Denote the quenched Laplace transform of $W$ under the environment $\xi$ by \begin{equation} \label{quenched laplace Wn W} \phi_{\xi} (t) = \mathbb{E}_{\xi} \left[ e^{-t W} \right], \end{equation} and the annealed Laplace transform of $W$ starting with $k$ individuals by \begin{equation} \label{annealed laplace Wn W} \phi_{k} (t) = \mathbb{E}_{k} \left[ e^{-t W} \right] = \mathbb{E} \left[\phi_{\xi}^k (t) \right]. \end{equation} We start with a lemma which gives a lower bound for the harmonic moment of $W$. \begin{lemma} \label{lem ak min} Assume that $\mathbb{E} \left[ m_0^{p} \right] < \infty$ for some constant $p>0$. For any $k \geqslant 1$, let \begin{equation} \label{eq ak min} \alpha_k = \frac{ p }{ 1 - \log \mathbb{E} m_0^{p} / \log \gamma_k }, \end{equation} with the convention that $\alpha_k=p$ if $p_1 (\xi_0)=0$ a.s. (so that $\gamma_k=0$). Then, for all $a \in (0, \alpha_k)$, \[ \mathbb{E}_k W^{-a} < \infty .
\] Furthermore, if $\mathbb{P}(p_1 (\xi_0)=0)<1$, we have $\alpha_k < \alpha_{k+1}$ ; if additionally $\mathbb{P} \left( p_1(\xi_0)<1 \right)=1$, then $\lim_{k \to \infty} \alpha_k = p.$ \end{lemma} \begin{proof} We use the same approach as in \cite{glm2016berry} where the case $k=1$ was treated. Since $W$ is a positive random variable, it can be easily seen that, for $\alpha >0$, \begin{equation} \label{moment harmonique et Laplace} \mathbb{E}_k W^{- \alpha} = \frac{1}{\Gamma (\alpha)} \int_0^{+ \infty} \phi_k (t) t^{\alpha-1} dt, \end{equation} where $\Gamma(\alpha )=\int_0^\infty t^{\alpha-1}e^{-t}dt$ is the Gamma function. Moreover, it is well-known that $\phi_{\xi} (t) $ satisfies the functional relation \begin{equation} \label{eq relation phi xi 1} \phi _{\xi} (t) = \mathnormal{f}_0 \left( \phi_{T \xi} \left( \frac{t}{m_0} \right) \right), \end{equation} where $f_0 (t) = \sum_{k=1}^{\infty} p_k (\xi_0) t^k$ is the generating function of $Z_1$ under $\xi_0$, defined in \eqref{defin001}. Using \eqref{eq relation phi xi 1} and the fact that $ \phi^k_{T \xi} \left( \frac{t}{m_0} \right) \leqslant \phi^2_{T \xi} \left( \frac{t}{m_0} \right)$ for all $k \geqslant 2$, we obtain \begin{equation} \label{eq relation phi xi 2} \phi_{\xi} (t) \leqslant p_1 (\xi_0) \phi_{T \xi } \left( \frac{t}{m_0} \right) + ( 1 - p_1 ( \xi_0 ) ) \phi_{T \xi }^2 \left( \frac{t}{m_0} \right). \end{equation} Taking the $k$-th power in \eqref{eq relation phi xi 2}, using the binomial expansion and the fact that $\phi_{T \xi}^{2k-i} \left( \frac{t}{m_0} \right) \leqslant \phi_{T \xi}^{k+1} \left( \frac{t}{m_0} \right)$ for all $i \in \{ 0, \ldots, k-1 \}$, we get \begin{eqnarray} \label{eq relation phi xi 3} \phi_{\xi}^{k} (t) &=& p_1^{k} (\xi_0) \phi_{T \xi}^{k} \left( \frac{t}{m_0} \right) + \sum_{i=0}^{k-1} C^i_{k}\ p_ 1 (\xi_0)^i (1-p_1 (\xi_0))^{k-i} \phi_{T \xi}^{2(k-i)+i}\left( \frac{t}{m_0} \right) \notag \\ & \leqslant & p_1^{k} (\xi_0) \phi_{T \xi}^{k} \left( \frac{t}{m_0} \right) + (1-p_1^k(\xi_0))\phi_{T \xi}^{k+1} \left( \frac{t}{m_0} \right) \notag \\ &=& \phi_{T \xi}^{k} \left( \frac{t}{m_0} \right) \left[ p_1^{k} (\xi_0) + (1-p_1^k(\xi_0)) \phi_{T \xi} \left( \frac{t}{m_0} \right) \right]. \end{eqnarray} By iteration, this leads to \begin{equation} \label{2.6} \phi_{\xi}^k (t) \leqslant \phi_{T^n \xi}^k \left(\frac{t}{\Pi_n}\right) \ \prod_{j=0}^{n-1} \left( p_1^k ( \xi_j) + (1- p_1^k ( \xi_j) ) \phi_{T^n \xi} \left( \frac{t}{\Pi_n} \right) \right). \end{equation} Taking expectation and using the fact that $ \phi_{T^n \xi} (\cdot) \leqslant 1$, we have \[ \phi_k (t) \leqslant \mathbb{E} \left[ \prod_{j=0}^{n-1} \left( p_1^k ( \xi_j) + (1- p_1^k ( \xi_j) ) \phi_{T^n \xi} \left( \frac{t}{\Pi_n} \right) \right) \right]. \] Since $ \phi_{\xi} ( \cdot ) $ is non-increasing, using a truncation, we get for all $A>1$, \begin{eqnarray*} \phi_k (t) &\leqslant& \mathbb{E} \left[ \prod_{j=0}^{n-1} \left( p_1^k ( \xi_j) + (1- p_1^k ( \xi_j) ) \phi \left( \frac{t}{A^n} \right) \right) \right] + \mathbb{P}( \Pi_n \geqslant A^n ). \end{eqnarray*} As $T^n \xi$ is independent of $\sigma ( \xi_0, ... ,\xi_{n-1} )$, and the r.v.'s $ p_1 ( \xi_i)$ ($i\geqslant 0$) are i.i.d., we obtain \begin{equation*} \phi_k (t) \leqslant \left[ \gamma_k + (1- \gamma_k ) \phi \left(\frac{t}{A^n}\right) \right]^n + \mathbb{P}( \Pi_n \geqslant A^n ), \end{equation*} where $\gamma_k = \mathbb{E} p_1^k (\xi_0)$ is defined in \eqref{eq gamma_k}. By the dominated convergence theorem, we have $\lim_{t \to \infty} \phi (t) = 0$. 
Thus, for any $\delta \in (0,1)$, there exists a constant $K>0$ such that, for all $ t \geqslant K$, we have $\phi (t) \leqslant \delta$. Consequently, for all $ t \geqslant K A^n,$ we have $ \phi \left(\frac{t}{A^n}\right) \leqslant \delta $ and \begin{equation} \phi_k (t) \leqslant \beta^n + \mathbb{P}( \Pi_n \geqslant A^n ) , \label{alpha0} \end{equation} where \begin{equation} \beta = \gamma_k + (1- \gamma_k ) \delta \in (0,1). \label{alpha} \end{equation} Using Markov's inequality, we have $\mathbb{P} ( \Pi_n \geqslant A^n) \leqslant \left( \mathbb{E} m_0^{p}/ A^{p} \right)^n $. Setting $ A = \left(\frac{\mathbb{E} m_0^{p}}{\beta} \right)^{1/ p} >1, $ we get for any $ n \in \mathbb{N} $ and $ t \geqslant K A^n$, \begin{equation} \phi_k (t) \leqslant 2 \beta^n. \label{bbb001} \end{equation} Now, for any $t \geqslant K$, define $n_0 = n_0 (t) = \left[ \frac{\log (t/K)}{\log A } \right] \geqslant 0$, where $ [x]$ stands for the integer part of $x$, so that \[ \frac{\log (t/K)}{\log A} - 1 \leqslant n_0 \leqslant \frac{\log (t/K)}{\log A} \ \ \text{and} \ \ t \geqslant K A^{n_0} .\] Then, for $t \geqslant K$, \[\phi_k (t) \leqslant 2 \beta^{n_0} \leqslant 2 \beta^{-1} (t/K)^{\frac{\log \beta }{\log A}} = C_0 t^{-\alpha}, \] with $C_0 = 2 \beta^{-1} K^{\alpha}$ and $ \alpha = - \frac{\log \beta}{\log A } > 0 $. Thus, we can choose a constant $C >0$ large enough, such that, for all $t > 0$, \begin{equation} \label{majoration phi} \phi_k (t) \leqslant C t^{- \alpha}. \end{equation} Furthermore, by the definition of $\beta$, $A$ and $\alpha$, we have \begin{eqnarray*} \alpha &=& \frac{ p }{1- \log \mathbb{E} m_0^{p} / \log \left(\gamma_k + (1- \gamma_k ) \delta \right) }, \end{eqnarray*} where $\delta \in (0,1)$ is an arbitrary constant and $\gamma_k=\mathbb{E} p_1^k(\xi_0)$. When $\delta \rightarrow 0$, we have $\alpha \rightarrow \alpha_k$, so that \eqref{majoration phi} holds for all $\alpha < \alpha_k$, where $\alpha_k$ is defined in \eqref{eq ak min}. By \eqref{moment harmonique et Laplace} and \eqref{majoration phi}, we conclude that $\mathbb{E}_k W^{- \alpha} < \infty$ for any $\alpha < \alpha_k$. Moreover, it is easily seen that if $\mathbb{P}(p_1 (\xi_0)=0)<1$, then $\alpha_k < \alpha_{k+1}$ since $\gamma_{k+1} < \gamma_k$; if additionally $\mathbb{P} (p_1(\xi_0)<1)=1$, then $\lim_{k \to \infty} \gamma_k =0$ so that $\lim_{k \to \infty} \alpha_k = p$. \end{proof} The following lemma is the key technical tool to study the exact decay rate of the Laplace transform of the limit variable $W$. \begin{lemma}[\cite{liu1999asymptotic}, Lemma 4.1] \label{lem liu} Let $\phi : \mathbb{R}_+ \to \mathbb{R}_+$ be a bounded function and let $Y$ be a positive random variable such that for some constants $q \in (0,1)$, $a \in (0, \infty)$, $C>0$ and $t_0 \geqslant 0$ and all $t>t_0$, \[ \phi (t) \leqslant q \mathbb{E} \phi ( Yt) + C t^{-a}. \] If $ q \mathbb{E} \left(Y^{-a} \right) < 1$, then $ \phi (t) = O ( t^{-a} )$ as $t \to \infty$. \end{lemma} Now we proceed to prove Theorem \ref{thm harmonic moments W}. We first prove the necessity. Assume that $\mathbb{E}_k W^{-a} < \infty$ for some $a>0$. We shall show that $\mathbb{E} p_1^k (\xi_0) m_0^a < 1$. Note that the r.v.\ $W$ admits the well-known decomposition \[ W = \frac{1}{m_0} \sum_{i=1}^{Z_1} W{(i)}, \] where the r.v.'s $W{(i)}$ $(i \geqslant 1)$ are i.i.d.\ and independent of $Z_1$ under $\mathbb{P}_\xi$, and are also independent of $Z_1$ and $\xi_0$ under $\mathbb{P}$.
The conditional probability law of $W(i)$ satisfies $\mathbb{P}_\xi ( W(i) \in \cdot ) = \mathbb{P}_{T \xi} ( W \in \cdot )$. Since $\mathbb{P}_k ( Z_1 \geqslant k+1)>0$, we have \begin{equation} \mathbb{E}_k W^{-a} > \mathbb{E}_k m_0^a \left(\sum_{i=1}^{Z_1} W{(i)} \right)^{-a} \mathds{1} \{ Z_1 = k \} = \mathbb{E} p_1^k (\xi_0) m_0^a \ \mathbb{E}_k W^{-a}, \end{equation} which implies that $\mathbb{E} p_1^k (\xi_0) m_0^a < 1$. We now prove the sufficiency. Assume that $\mathbb{E} m_0^{p}< \infty$ and $\mathbb{E} p_1^k(\xi_0) m_0^a < 1$ for some $a \in (0,p)$. We first consider the case where $\mathbb{P}(p_1 (\xi_0)<1)=1$. We prove that $\mathbb{E}_k W^{-a}< \infty$ by showing that $ \phi_k (t) = O \left( t^{-(a+ \varepsilon)} \right)$ as $t \to \infty$, for some $\varepsilon>0$. By Lemma \ref{lem ak min}, there exists an integer $j \geqslant k$ large enough and a constant $C>0$ such that \begin{equation} \label{majoration phi_j} \phi_j (t) \leqslant C t^{-(a+ \varepsilon)}, \end{equation} with $\varepsilon>0$ and $a+ \varepsilon<p$. By \eqref{eq relation phi xi 3}, we have \begin{equation} \label{eq relation phi xi 4} \phi_{\xi}^{j-1} (t) \leqslant p_1^{j-1} (\xi_0) \phi_{T \xi}^{j-1} \left( \frac{t}{m_0} \right) + \phi_{T \xi}^j \left( \frac{t}{m_0} \right). \end{equation} Taking the expectation in \eqref{eq relation phi xi 4}, using \eqref{annealed laplace Wn W}, \eqref{majoration phi_j} and the independence between $\xi_0$ and $T \xi$, we obtain \begin{eqnarray} \label{eq relation phi xi 5} \phi_{j-1} (t) &\leqslant& \mathbb{E} \left[ p_1^{j-1}(\xi_0) \phi_{j-1} \left( \frac{t}{m_0} \right) \right] + C t^{-(a+ \varepsilon)} \notag \\ &=& \gamma_{j-1} \mathbb{E} \left[ \phi_{j-1} (Yt) \right] + C t^{-(a+ \varepsilon)}, \end{eqnarray} where $\gamma_{j-1}= \mathbb{E} \left[ p_1^{j-1} (\xi_0)\right]<1$ and $Y$ is a positive random variable whose distribution is determined by \[ \mathbb{E} \left[ g(Y) \right] = \frac{1}{\gamma_{j-1}} \mathbb{E} \left[ p_1^{j-1}(\xi_0) g \left( \frac{1}{m_0} \right) \right], \] for all bounded and measurable function $g$. By hypothesis, $\mathbb{E} p_1^k (\xi_0) m_0^a < 1$. Then, by the dominated convergence theorem, there exists $\varepsilon >0$ small enough such that $\mathbb{E} p_1^k (\xi_0) m_0^{a+\varepsilon} < 1$, and since $ j-1 \geqslant k$, we have $\mathbb{E} p_1^{j-1} (\xi_0) m_0^{a+\varepsilon} \leqslant \mathbb{E} p_1^k(\xi_0) m_0^{a+\varepsilon}<1$. Therefore, $ \gamma_{j-1} \mathbb{E} [ Y^{-(a+\varepsilon)} ]<1$ and using \eqref{eq relation phi xi 5} and Lemma \ref{lem liu}, we get $ \phi_{j-1} (t) = O ( t^{-(a+ \varepsilon)} )$ as $t \to \infty $. By induction, applying \eqref{eq relation phi xi 4} and \eqref{eq relation phi xi 5} to the functions $\phi_{j-2}, \phi_{j-3}, \ldots , \phi_{k}$ and using the same argument as in the proof for $\phi_{j-1}$, we obtain \begin{equation} \phi_{k} (t) = O ( t^{-(a+ \varepsilon)} ) \quad \text{as} \quad t \to \infty . \end{equation} Therefore, in the case where $\mathbb{P}(p_1 (\xi_0)<1)=1$, we have proved that \begin{equation} \label{eq implication moment harm W cas p1<1} \mathbb{E} p_1^k (\xi_0) m_0^a < 1 \quad \text{implies} \quad \mathbb{E}_k W^{-a} < \infty. \end{equation} Now consider the general case where $\mathbb{P}(p_1 (\xi_0)<1)<1.$ Denote the distribution of $\xi_0$ by $\tau_0$ and define a new distribution $\tilde{\tau}_0$ as \begin{equation} \tilde{\tau}_0 (\cdot) = \tau_0 (\cdot | p_1 (\xi_0) <1). 
\end{equation} Consider the new branching process whose environment distribution is $\tilde{\tau}= \tilde{\tau_0}^{\otimes \mathbb{N}}$ instead of $\tau = \tau_0^{\otimes \mathbb{N}}$. The corresponding probability and expectation are denoted by $\tilde{\mathbb{P}}(dx,d\xi) = \mathbb{P}_\xi(dx) \tilde{\tau}(d\xi)$ and $\tilde{\mathbb{E}}$, respectively. Of course $(W_n)$ is still a martingale under $\tilde{\mathbb{P}}$. Moreover, the condition $\tilde{\mathbb{E}} \left[ \frac{Z_1}{m_0} \log^+ Z_1 \right] = \mathbb{E} \left[ \frac{Z_1}{m_0} \log^+ Z_1 \right] /\mathbb{P} ( p_1 (\xi_0) < 1) < \infty$ implies that $W_n \to W$ in $L^1 (\tilde{\mathbb{P}})$. Now we show that $\mathbb{E}_k \left[ W^{-a} \right] \leqslant \tilde{\mathbb{E}}_k \left[ W^{- a} \right] $. For $ 0 \leqslant i \leqslant n$, denote \begin{eqnarray*} A_{i,n} &=& \big\{ ( \xi_0, \ldots, \xi_{ n-1}) \ | \ p_1 ( \xi_{j_1}) = \ldots = p_1 ( \xi_{j_i}) =1 \ \text{for some}\ 0 \leqslant j_1 < \ldots < j_i \leqslant n-1, \\ &&\ \text{and}\ p_1 ( \xi_h) <1 \ \text{for all }\ h \in \{ 0, \ldots n -1\} \backslash \{ j_1, \ldots , j_{i} \} \big\}. \end{eqnarray*} Conditioning by the events $A_{i,n}$ ($i \in \{0, \ldots, n \}$) and using the fact that the r.v.'s $\xi_0, \ldots ,\xi_{ n-1}$ are i.i.d., we obtain, for all $n \in \mathbb{N}$, \begin{eqnarray} \label{eq moment harmonique Wn p1<1 et p1=1} \mathbb{E}_k\left[ W_n^{- a} \right] &=& \sum_{i=0}^n \mathbb{E}_k \left[ W_n^{- a} \big| A_{i,n} \right] \mathbb{P} \left( A_{i,n} \right) \notag \\ &=& \sum_{i=0}^{n} \mathbb{E}_k \left[ W_n^{- a} \big| A_{i,n} \right] C^i_n \eta^i (1- \eta)^{n-i}, \end{eqnarray} with $\eta = \mathbb{P} ( p_1 (\xi_0) =1)$. Moreover, using \eqref{relation recurrence Zn}, a straightforward computation leads to the decomposition \begin{equation} \label{decomposition produit Wn} W_{n} = \prod_{i=0}^{n-1} \eta_i, \quad \text{with} \quad n \geqslant 1 \quad \text{and} \quad \eta_i =\frac{1}{Z_i} \sum_{j=1}^{Z_{i}} \frac{N_{i,j}}{m_i}. \end{equation} Note that, on the event $\left\{p_1 (\xi_i) =1\right\}$ we have $\eta_i=1$. Therefore, using \eqref{decomposition produit Wn} and the fact that the r.v.'s $\xi_0, \ldots, \xi_{ n-1}$ are i.i.d., we get \begin{equation} \label{eq E Wn | Akn = tilde E Wn-k} \mathbb{E}_k \left[ W_n^{- a} \big| A_{i,n} \right] = \tilde{\mathbb{E}}_k \left[ W_{n-i}^{- a} \right]. \end{equation} By the convexity of the function $x \mapsto x^{- a}$, we have $ \sup_{n \geqslant i} \tilde{\mathbb{E}}_k [W_{n-i}^{-a}] \leqslant \tilde{\mathbb{E}}_k W^{-a}$ (see \cite{liu} Lemma 2.1). Thus, by \eqref{eq moment harmonique Wn p1<1 et p1=1} and \eqref{eq E Wn | Akn = tilde E Wn-k}, we obtain \begin{equation} \label{eq majoration E W-a leq tilde E W-a} \mathbb{E}_k \left[ W^{- a} \right] \leqslant \tilde{\mathbb{E}}_k \left[ W^{-a} \right]. \end{equation} Note that, conditioning by the events $\{p_1 (\xi_0)=1 \}$ and $ \{ p_1 (\xi_0)<1 \}$, we have \[ \mathbb{E} p_1^k (\xi_0) m_0^a = (1- \eta) \tilde{\mathbb{E}} p_1^k (\xi_0) m_0^a + \eta, \] with $\eta = \mathbb{P} ( p_1 (\xi_0) =1)$. So the condition $\mathbb{E} p_1^k(\xi_0) m_0^a <1$ implies that $ \tilde{\mathbb{E}} p_1^k(\xi_0) m_0^a <1$. Then, by \eqref{eq implication moment harm W cas p1<1} applied under the probability $ \tilde{\mathbb{P}} $, and the fact that $\tilde{\mathbb{P}} ( p_1 (\xi_0) < 1) =1$, we get $\tilde{\mathbb{E}}_k \left[ W^{-a} \right] < \infty$. 
Therefore, by \eqref{eq majoration E W-a leq tilde E W-a}, it follows that \begin{equation} \mathbb{E} p_1^k (\xi_0) m_0^a < 1 \quad \text{implies} \quad \mathbb{E}_k W^{-a} < \infty, \end{equation} which ends the proof of Theorem \ref{thm harmonic moments W}. \section{Small value probability in the non-extinction case} \label{sec small value non extinction} In this section we prove Theorem \ref{thm small value probability 2}. We start with the proof of part a). For $k \geqslant 1$ and $j \geqslant k$, define \begin{equation} a_{k,n} (j) = \frac{ \mathbb{P} \left( Z_n =j | Z_0=k \right)}{\gamma_k^n}, \end{equation} with $\gamma_k = \mathbb{P}_k (Z_1=k)$. By the Markov property, we have \[ \mathbb{P}_k \left( Z_{n+1} = j \right) \geqslant \mathbb{P}_k \left( Z_1 = k \right) \mathbb{P}_k \left( Z_n = j \right). \] Dividing by $\gamma_k^{n+1}$ leads to \begin{equation} \label{majoration a_k,n (j)} a_{k,n+1} (j) \geqslant a_{k,n} (j) . \end{equation} Therefore, by monotonicity, we obtain \[ \lim_{n \to \infty} \uparrow a_{k,n} (j) = q_{k,j} \in \bar{\mathbb{R}} . \] We shall prove that $q_{k,j}$ satisfies the properties claimed in the theorem. If $j$ is such that $\mathbb{P}_k (Z_n=j) =0$ for any $n \geqslant 0$, then $a_{k,n}(j)=0$ for any $n \geqslant 0$, so that $ \lim_{n \to \infty} a_{k,n}(j)=0=q_{k,j}$. If there exists $l \geqslant 0$ such that $\mathbb{P}_k (Z_l=j) >0$, then $q_{k,j} \geqslant a_{k,l} (j) = \mathbb{P}_k (Z_l=j)/ \gamma_k^l >0$. Now we show by induction that for all $j \geqslant k$, we have \[ H (j): \quad \underset{n \in \mathbb{N}}{\sup} \ a_{k,n} (j) = a_k (j) < \infty. \] For $j=k$, we have $a_k(k)=1$. Assume that $j \geqslant k+1$ and that $H(i)$ is true for all $ k \leqslant i \leqslant j-1$. By the total probability formula, we obtain \[ \frac{\mathbb{P}_k \left( Z_{n+1}=j \right)}{\gamma_k^{n+1}} = \frac{1}{\gamma_k} \sum_{i=k}^{j} \mathbb{P}_k \left( Z_{n+1} = j | Z_n = i \right) \frac{\mathbb{P}_k \left( Z_n = i \right)}{\gamma_k^n} , \] which is equivalent to \begin{equation} \label{recurrence a_n (k)} a_{k,n+1} (j) = \frac{1}{\gamma_k} \left[ \sum_{i=k}^{j-1} p(i,j) a_{k,n} (i) + \gamma_{j} a_{k,n} (j) \right] \end{equation} with $p(i,j) = \mathbb{P} \left( Z_{1} = j | Z_0 = i \right) $. Using the fact that $a_{k,n} (j) \leqslant a_{k,n+1} (j)$, we get by induction that \begin{eqnarray} \label{RDR pi_k} \sup_{n \in \mathbb{N}} \ a_{k,n+1} (j) (\gamma_k - \gamma_{j}) &\leqslant& \sum_{i=k}^{j-1} p(i,j) a_k(i) < \infty. \nonumber \end{eqnarray} Thus $q_{k,j} < \infty$ for all $j \geqslant k+1$ and $k\geqslant 1$. Furthermore, taking the limit as $n \to \infty$ in \eqref{recurrence a_n (k)} leads to the following recurrence relation for $q_{k,j}$: \begin{equation*} q_{k,k} = 1, \quad \gamma_k q_{k,j} = \sum_{i=k}^j p(i, j) q_{k,i} \quad (j \geqslant k+1). \end{equation*} This ends the proof of part a) of Theorem \ref{thm small value probability 2}. Now we prove part b) of Theorem \ref{thm small value probability 2}. We give a proof that the radius of convergence of the power series $Q_k$ is equal to 1. The method is new even in the case of the Galton-Watson process. We start with a lemma. \begin{lemma} \label{lem R Q} Let $k \geqslant 1$. Assume that $\mathbb{P} (p_1(\xi_0) <1 ) =1$ and that there exists $\varepsilon>0$ such that $\mathbb{E}[ m_0^{r_k+ \varepsilon} ] < \infty$, where $r_k$ is the solution of the equation $\gamma_k = \mathbb{E} m_0^{-r_k}$.
Then, for any $r>r_k$, we have \begin{equation} \label{eq lem R Q} \lim_{n \to \infty} \uparrow \frac{\mathbb{E}_k Z_n^{-r}}{\gamma_k^n} < \infty. \end{equation} \end{lemma} \begin{proof} By the Markov property, $$ \mathbb{E}_k \left[ Z_{n+1}^{-r} \right] \geqslant \mathbb{E}_k \left[ Z_{n+1}^{-r} |Z_1=k \right] \mathbb{P}_k(Z_1=k) = \gamma_k \mathbb{E}_k \left[ Z_{n}^{-r} \right], $$ which proves that the sequence $(\mathbb{E}_k \left[ Z_{n}^{-r} \right] / \gamma_k^{n})_{n \in \mathbb{N}}$ is increasing. We show that it is bounded. For $n \geqslant 1$ and $m \geqslant 0$, we have the following well-known branching property for $Z_n$: \begin{equation} \label{decomposition Zn1} Z_{n+m} = \sum_{i=1}^{Z_m} Z_{n,i}^{(m)}, \end{equation} where, under $\mathbb{P}_{\xi}$, the random variables $Z_{n,i}^{(m)}$ $( i \geqslant 1) $ are i.i.d., independent of $Z_m$, whose conditional probability law satisfies $ \mathbb{P}_{\xi} \left( Z_{n,i}^{(m)} \in \cdot \right)= \mathbb{P}_{T^m \xi} \left( Z_n \in \cdot \right) $, with $T^m$ the shift operator defined by $T^m (\xi_0, \xi_1 , \ldots ) = (\xi_m, \xi_{m+1} , \ldots )$. Intuitively, relation \eqref{decomposition Zn1} shows that, conditionally on $Z_m=i$, the annealed law of the process $Z_{n+m}$ is the same as that of a new process $Z_n $ starting with $i$ individuals. Using \eqref{decomposition Zn1} with $m=1$, the independence between $Z_1$ and ${Z}_{n,i}^{(1)}$ $(i \geqslant 1)$ and the fact that $\mathbb{E}_{i}Z_n^{-r} \leqslant \mathbb{E}_{k+1} Z_n^{-r}$ for all $i \geqslant k+1$, we have \begin{eqnarray} \label{eq induction 1} \mathbb{E}_k \left[ Z_{n+1}^{-r} \right] &=& \mathbb{E}_k \left[ Z_{n+1}^{-r} | Z_1 = k\right] \mathbb{P}_k ( Z_1 = k) \notag \\ && + \sum_{i=k+1}^{\infty} \mathbb{E} \left[ \left( \sum_{h=1}^{i} Z_{n,h}^{(1)} \right)^{-r} \bigg| Z_1 = i \right] \mathbb{P}_k ( Z_1 =i) \nonumber \\ &\leqslant& \gamma_k \mathbb{E}_k \left[ Z_n^{-r} \right] + \mathbb{E}_{k+1} \left[ Z_n^{-r} \right] . \end{eqnarray} We shall use the following change of measure: for $k \geqslant 1$ and $r>0$, let $\mathbb{P}_k^{(r)}$ be a new probability measure determined by \begin{equation} \label{changement de mesure} \mathbb{E}_k^{(r)} [T] =\frac{\mathbb{E}_k \left[ \Pi_n^{-r} T \right]}{c_r^n} \end{equation} for any $\mathcal{F}_n$-measurable random variable $T$, where $c_r = \mathbb{E} m_0^{-r}$. By \eqref{changement de mesure}, we obtain \begin{equation} \mathbb{E}_{k+1} \left[ Z_n^{-r} \right] = \mathbb{E}_{k+1}^{(r)} [ W_n^{-r} ] c_r^n, \end{equation} with $\sup_{n \in \mathbb{N}} \mathbb{E}_{k+1}^{(r)} [ W_n^{-r} ] = \mathbb{E}_{k+1}^{(r)} [ W^{-r} ]$ (see \cite{liu}, Lemma 2.1). Moreover, we have $\mathbb{E}^{(r)} [ p_1^{k+1} (\xi_0) m_0^{r} ] = \gamma_{k+1}/ \mathbb{E} m_0^{-r}<1$ for any $r<r_{k+1}$. So by Theorem \ref{thm harmonic moments W} we get $ \mathbb{E}_{k+1}^{(r)} [ W^{-r} ] = C(r) < \infty $ and then $\mathbb{E}_{k+1} \left[ Z_n^{-r} \right] \leqslant C(r) c_r^n$ for any $r<r_k+ \varepsilon < r_{k+1}$. Coming back to \eqref{eq induction 1} with $r< r_k+ \varepsilon $, we get by induction that \begin{equation} \label{induction j_0-1} \mathbb{E}_{k} \left[ Z_{n+1}^{-r} \right] \leqslant \gamma_{k}^{n+1} + C \sum_{j=0}^{n} \gamma_{k}^{n-j} c_r^j. \end{equation} Choose $r>r_k$ such that $c_r < \gamma_k$. 
Then, we have, as $ n \to \infty$, \begin{equation} \label{cv gamma < delta} \frac{\mathbb{E}_{k} \left[ Z_{n+1}^{-r} \right]}{\gamma_k^{n+1}} \leqslant 1 + \frac{C}{\gamma_{k}} \sum_{j=0}^{n} \left( \frac{c_r}{\gamma_k} \right)^j \to 1 + \frac{C}{\gamma_{k} - c_r }. \end{equation} Thus the sequence $(\mathbb{E}_k \left[ Z_{n}^{-r} \right] / \gamma_k^{n})_{n \in \mathbb{N}}$ is bounded and \eqref{eq lem R Q} holds for any $r \in (r_k , r_{k} + \varepsilon)$. Using the fact that $\mathbb{E}_{k} \left[ Z_{n+1}^{-r'} \right] \leqslant \mathbb{E}_{k} \left[ Z_{n+1}^{-r} \right]$ for any $r'>r$, the result follows for any $r>r_k$, which ends the proof of the lemma. \end{proof} \begin{remark} From the results stated above, with some additional analysis one can obtain an asymptotic equivalent of the harmonic moments $\mathbb{E} Z_n^{-r}$ for any $r>0.$ However, it is delicate to obtain an expression for the constant involved in this equivalence. This will be considered in a forthcoming paper. \end{remark} Now we show that the radius of convergence $R$ of the power series $Q_k (t) = \sum_{j=k}^{\infty} q_{k,j} t^j$ is equal to $1$. Using the fact that $\sum_{j=k}^{\infty} \mathbb{P}_k \left( Z_n = j \right) = 1$, part a) of Theorem \ref{thm small value probability 2} and the monotone convergence theorem, we have \[ \lim_{n \to \infty} \uparrow \gamma_k^{-n} \sum_{j=k}^{\infty} \mathbb{P}_k \left( Z_n = j \right) = \sum_{j=k}^{\infty} q_{k,j} = + \infty,\] which proves that $R \leqslant 1$. We prove that $R=1$ by showing that $\sum_{j=k}^{+ \infty} j^{-r} q_{k,j} < \infty $ for $r>0$ large enough. Using part a) of Theorem \ref{thm small value probability 2}, the monotone convergence theorem and Lemma \ref{lem R Q}, we have, for any $r>r_k$, \begin{equation} \sum_{j=k}^{+ \infty} j^{-r} q_{k,j} = \sum_{j=k}^{+ \infty} j^{-r} \lim_{n \to \infty} \uparrow \frac{\mathbb{P}_k (Z_n =j)}{\gamma_k^n}= \lim_{n \to \infty} \uparrow \frac{\mathbb{E}_k Z_n^{-r}}{\gamma_k^n} < \infty, \end{equation} which proves part b). Now we prove part c) of Theorem \ref{thm small value probability 2}. Using part a), the definition of $G_{k,n}$ and the monotone convergence theorem, we get \eqref{cv Qnk ->Qk}. To prove the functional relation (\ref{relation Q_k}), recall that $G_{k,1} (t) = \sum_{j=k}^{\infty} p(k,j) t^j=\mathbb{E} f_0^k (t)$. By (\ref{relation rec qkj}) and Fubini's theorem, we get \begin{eqnarray*} \gamma_k Q_k (t) &=& \sum_{j=k}^{\infty} \sum_{i=k}^{\infty} q_{k,i} \ p (i, j) \mathds{1}(i \leqslant j) t^j \\ &=& \sum_{i=k}^{\infty} q_{k,i} \sum_{j=i}^{\infty} p (i, j) t^j \\ &=& \sum_{i=k}^{\infty} q_{k,i} \mathbb{E} \left[ f_0^i (t)\right] \\ &=& \mathbb{E} \left[ \sum_{i=k}^{\infty} q_{k,i} f_0^i (t) \right] \\ &=& \mathbb{E} \left[ Q_k (f_0 (t)) \right]. \end{eqnarray*} This proves the functional relation (\ref{relation Q_k}). We now prove that the previous functional relation characterizes the function $Q_k$. To this end it suffices to show the uniqueness of the solution of (\ref{relation Q_k}). Assume that there exists a power series $\hat Q_k(t)= \sum_{j=0}^{\infty} \hat q_{k,j} t^j$ on $[0,1)$ which verifies \eqref{relation Q_k} with the initial condition $q_{k,k}=\hat q_{k,k}=1$. We first show by induction on $n$ that $\hat Q_k^{(n)} (0)=0$ for all $n \in \{ 0, \ldots , k-1 \}$. Since $f_0 (0)=0$ and $\gamma_k \in (0,1)$, by \eqref{relation Q_k}, we get $\gamma_k \hat Q_k(0) = \hat Q_k (0)$, which implies that $\hat Q_k^{(0)}(0)=\hat Q_k(0)=0$.
By the induction hypothesis we have that $\hat Q_k^{(j)} (0)=0$ for all $j \in \{ 0, \ldots , n-1 \}$ for some $n \leqslant k-1.$ We show that $\hat Q_k^{(n)} (0)=0$. Using Fa\`a di Bruno's formula, we have \begin{equation} \label{Faa di Bruno} \left( \hat Q_k \circ f_0 \right)^{(n)} (t) = \sum_{j=1}^n \hat Q_k^{(j)}(f_0(t)) B_{n,j} \left( f_0^{(1)}(t), \ldots , f_0^{(n-j+1)} (t) \right), \end{equation} where $B_{n,j}$ are the Bell polynomials, defined for any $ 1 \leqslant j \leqslant n $ by \begin{eqnarray*} \label{Bell} &&B_{n,j}(x_1,x_2,\dots,x_{n-j+1}) \\ && \qquad \qquad =\sum{n! \over i_1!i_2!\cdots i_{n-j+1}!} \left({x_1\over 1!}\right)^{i_1}\left({x_2\over 2!}\right)^{i_2}\cdots\left({x_{n-j+1} \over (n-j+1)!}\right)^{i_{n-j+1}}, \end{eqnarray*} where the sum is taken over all sequences $(i_1, \ldots, i_{n-j+1})$ of non-negative integers such that $i_1 + \cdots + i_{n-j+1} = j$ and $i_1 + 2 i_2 + \cdots + (n-j+1)i_{n-j+1} = n$. In particular $B_{n,n} (x_1) = x_1^n$. Applying \eqref{Faa di Bruno} and using the fact that $f_0(0)=0$, $B_{n,n} \left( f_0^{(1)}(0)\right)= f_0^{(1)}(0)^{n}$ and $\hat Q_k^{(j)} (0)=0$ for all $j \in \{ 0, \ldots , n-1 \}$, we get \begin{equation} \label{derivee nieme Q(f(t))} \left( \hat Q_k \circ f_0 \right)^{(n)} (0) = \hat Q_k^{(n)} (0) \left( f_0^{(1)}(0) \right)^{n}. \end{equation} Then taking the derivative of order $n$ of both sides of \eqref{relation Q_k} and using \eqref{derivee nieme Q(f(t))}, we obtain that $\gamma_k \hat Q_k^{(n)} (0) = \hat Q_k^{(n)} (0) \gamma_{n}$ for $n \leqslant k-1$, which implies that $\hat Q_k^{(n)}(0)=0$. Now we show that $\hat q_{k,j} = q_{k,j}$ for any $j \geqslant k+1$. Using Fubini's theorem, the fact that $f_0, \ldots, f_{n-1}$ are i.i.d.\ and iterating (\ref{relation Q_k}), we get \begin{equation} \label{relation Qk gn} \mathbb{E} \left[ Q_k (\bar{g}_n (t)) \right] = \gamma_k^n Q_k (t) \quad \text{and} \quad \mathbb{E} \left[ \hat Q_k (\bar{g}_n (t)) \right] = \gamma_k^n \hat Q_k (t), \end{equation} where $\bar{g}_n (t) = f_{n-1} \circ \ldots \circ f_0 (t) $. By \eqref{relation Qk gn}, for all $t \in [0,1)$ and $n \in \mathbb{N}$, we have \begin{eqnarray} \label{Q1-Q2} \left| Q_k(t)- \hat Q_k (t) \right| &=& \gamma_k^{-n} \left| \mathbb{E} \left[ Q_k(\bar{g}_n (t))- \hat Q_k(\bar{g}_n (t)) \right] \right| \notag \\ &=& \gamma_k^{-n} \left| \sum_{j=k}^{\infty} ( q_{k,j} - \hat q_{k,j} ) \mathbb{E} \left[ \bar{g}_n^j (t) \right] \right|\notag \\ &\leqslant& \gamma_k^{-n} \sum_{j=k+1}^{\infty} \left| q_{k,j}- \hat q_{k,j} \right| G_{j,n} (t) , \end{eqnarray} where $G_{j,n} (t)$ is the generating function of $Z_n$ starting with $j$ individuals. To conclude the proof of the uniqueness it is enough to show that \begin{equation} \lim_{n \to \infty} \sum_{j=k+1}^{\infty} \left| q_{k,j}- \hat q_{k,j} \right| \gamma_k^{-n} G_{j,n} (t) = 0. \label{final-001} \end{equation} We prove \eqref{final-001} using the Lebesgue dominated convergence theorem. Note that, by \eqref{cv Qnk ->Qk}, for all $n \in \mathbb{N}$, \begin{equation} \gamma_j^{-n} G_{j,n} (t) \leqslant Q_j (t). \label{final-002} \end{equation} Therefore, using the fact that $ \gamma_j < \gamma_k$ for all $j \geqslant k+1$, we have \begin{eqnarray*} \lim_{n \to \infty} \gamma_k^{-n} G_{j,n} (t) = \lim_{n \to \infty} \left(\frac{\gamma_j}{ \gamma_k} \right)^n \gamma_j^{-n} G_{j,n}(t) \leqslant \lim_{n \to \infty} \left(\frac{\gamma_j}{ \gamma_k}\right)^n Q_j (t)= 0, \end{eqnarray*} and $\gamma_k^{-n} G_{j,n} (t) \leqslant Q_j (t)$.
Now we show that $\sum_{j=k+1}^{\infty} \left| q_{k,j} - \hat q_{k,j} \right| Q_j (t) < \infty$ for all $t \in [0,1)$. Indeed, by part b) of Theorem \ref{thm small value probability 2}, we have $\sum_{j=k}^{\infty} q_{k,j} j^{-r}< \infty$ for any $r>r_k$. In particular for a fixed $r>r_k$, there exists a constant $C>0$ such that, for all $j \geqslant 1$, $i \geqslant j$, it holds $q_{j,i} \leqslant C i^{r}$. Therefore, \[ Q_j(t) \leqslant C \sum_{i=j}^{\infty} i^{r} t^i \leqslant C t^{j} \sum_{i=0}^{\infty} (i+j)^{r} t^i \leqslant C \left( 2^r t^{j} \sum_{i=0}^{\infty} i^{r} t^i + 2^r t^{j} \sum_{i=0}^{\infty} j^{r} t^i \right) \leqslant C_{r} j^r t^j. \] Since $Q_k$ and $\hat Q_k$ are power series whose radii of convergence are equal to 1, we have, for any $t<1$, \begin{equation} \label{majoration Q} \sum_{j=k+1}^{\infty} q_{k,j} Q_j (t) \leqslant C_r \sum_{j=k+1}^{\infty} q_{k,j} j^r t^j < \infty, \quad \text{and} \quad \sum_{j=k+1}^{\infty} \hat q_{k,j} Q_j (t) < \infty. \end{equation} Using the dominated convergence theorem, we see that $$ \lim_{n \to \infty} \gamma_k^{-n} \sum_{j=k+1}^{\infty} \left| q_{k,j} - \hat q_{k,j} \right| G_{j,n} (t) =0. $$ Therefore, from \eqref{Q1-Q2} we conclude that $Q_k (t)=\hat Q_k(t)$ for all $t \in [0,1)$. This ends the proof of Theorem \ref{thm small value probability 2}. \bibliographystyle{plain}
\section{Introduction} Events such as conferences, workshops, or occasionally organized group talks and themed parties form a collocated social context for attendees to exchange work and life experiences \cite{Fischer2016CollocatedResearch}. People bring experiences to the event, sharing their own and listening to others'. Digital personal archives log life materials and support re-visiting and reminiscing about life experiences \cite{Odom:2014:DSA:2611528.2557178,Sellen:2007:LTS:1240624.1240636}. Experiential items mentioned in conversations during the event are also documented in blogs, Twitter/Facebook posts, books, and other forms of life content. Experiences from all attendees form a collection of \textit{group content}, from which people learn information and understand the community. Social activities commonly revolve around life content, such as sharing a blog collection about a trip, commenting on social media topics one was following, or sharing an article one has read. Communicating about these contents depends on how well a speaker can recognize related experiences, and the degree to which the listener can capture others' experiences. However, even one person's experiential record could be huge. Exchanging and connecting these life contents is challenging without sufficient approaches to retrieve and relate them. \par Prior studies have looked into system designs to re-present the records of life experiences. Interaction designs supporting re-visitation of personal life-logs seek to regenerate prior experiences by encouraging interaction with past photographs \cite{Odom:2014:DSA:2611528.2557178}, videos \cite{Kalnikaite:2010:LMS:1753326.1753638,Sellen:2007:LTS:1240624.1240636}, and geo-locations \cite{Thudt2016VisualData}. However, personal textual content such as blogs, articles, and social media posts has long been under-utilized to encourage re-visitation and support social exchange during collocated events. Personal blog archives take time to read, especially when they grow massive but remain minimally organized \cite{Baumer:2008:ERR:1357054.1357228,Indratmo:2008:EBA:1385569.1385578}. Recent advancements in data science and visualization open new pathways to utilize them to augment social communication. Experience records contributed by participants form a group content repository, from which experience connections can be identified. Topics, common items, and related experiences can be mined from the large group content and visualized to support communication. The pattern of how experiences are related brings up new perspectives about the participants as an entity. Supporting \textit{event mining} for hybrid co-located events needs to consider ways to construct a group content repository, interactions with the group content, and knowledge from the group content for reflection. This position paper explores the design space of mining the life contents of collocated event participants to support connecting experiences. We first discuss empirical and constructive problems in this design space, and then demonstrate our initial exploration of an interactive blog visualization, BlogCloud, which incorporates machine learning and visualization techniques to support re-visitation and reflection on large personal blogs. \section{Background} Harrison et al.
\cite{Harrison1996Re-Place-ingSystems} defined the role of \textit{space} and \textit{place} in CSCW design as \textit{``space is the opportunity; place is the understood reality"}, and CSCW systems should support ``place-making" in physical and virtual space. Collocated events form an opportunity space for participants to meet people with similar experiences. The social activities during the event make it an understood place to seek such experiences. A hybrid event not only brings people physically together, but also provides digital channels to bridge participants and enhance connection. Digital and virtual channels of communication construct novel attending experiences and inspire new perspectives about the group. To make a collocated event a better place to connect similar experiences, CSCW systems should not only support communications in the current space, but also consider experience records from participants' lives and support connecting different experiences. \par People spend time crafting textual posts, recording marks in life, and capturing essences of thoughts \cite{Nardi:2004:WWB:1035134.1035163}. However, years after they were crafted, the large volume of minimally organized personal content can hinder re-visitation and reflection. Re-collecting and retrieving information about items in one's personal experience is time-consuming and difficult without text-processing techniques. Looking up and collecting information about particular items relies on memory, and costs time when the content volume is large \cite{Nardi:2004:BSA:1031607.1031643}. In co-located space, connecting personal content from different people needs automated approaches to identify related experiential items and common themes and topics. To support connecting experiences during co-located activities, machine learning (ML) and visualization (VIS) provide new opportunities to support collaborative re-visitation of the experience records \cite{Dove:2017:UDI:3025453.3025739}. Related things and similar experiences can be algorithmically mined from the time stream and reordered in a visualization. ML and VIS could build new perspectives to encourage reflection on each other's experiences, or explore integrated group content as a whole. Devices such as mobile phones, large interactive surfaces, and VR/AR gadgets can be used to enable hybrid interactions with each other's experiences, and trigger social communication about the group content. \section{Problem Space} Towards better CSCW systems for hybrid events, we consider the design space of \textit{event mining}, which incorporates data mining and visualization to support connecting experiences between collocated participants. Based on the timeline of an event, we consider \textit{preparation}, \textit{interaction}, and \textit{reflection} as three stages to implement event mining systems. The preparation stage gathers life content from individual attendees and forms a focused repository. In the interaction stage, meaningful experience connections are recognized from the group content and visualized for collaborative interaction. The reflection stage concerns how to archive the group content and people's interactions during and after the event. \begin{figure*}[h!] \includegraphics [width=\textwidth]{figure/problem_structure} \caption{Empirical and constructive problems of event mining} \label{fig:all_viz} \end{figure*} \subsection{Preparation} Mining the experience records of collocated participants requires collecting personal content from people.
Depending on the theme of the event, different types of documents can be collected. Life contents could be articles about personal life, such as blogs, Twitter/Facebook posts, and books people wrote. For work events such as conferences and seminars, contents such as publications, job descriptions, and CVs/resumes can also be sources for the repository. Event mining during the preparation stage should encourage people to make contributions to the event. Potential ways to collect data from participants are either utilizing life contents people have already posted on social media, or asking participants to suggest things to be shared with others. Either way involves questions about what kinds of data are appropriate to connect, and how the data can be meaningfully processed. People might hesitate to share individual contents due to issues such as privacy considerations, social anxiety, and individual roles in the events. Motivation factors need to be evaluated and understood to avoid privacy and social concerns. Approaches to collect personal contents also need to identify the construct of the experience records. Factors such as the formats of the data, the amount of data, and the time-span of collection need to be considered. Textual features used to connect experiences need to be both feasible and desirable. This knowledge will benefit building better machine learning techniques to mine the group content. \subsection{Interaction} During the event, people visit the group content and connect to each other's experiences. This phase focuses on designing collaborative tools which present group content and support connecting experiences. Data visualizations could be used to present group content, but they need to consider ways to show individual content to the public. Visualizations of group content need to motivate people to share and reflect on each other's experiences. Collaborative interactions can be interpreted to reorganize the group content and retrieve experiences dynamically. Paradigms for presenting the connected experiences may lead to different social communication effects. For the constructive problem, practitioners need to consider forms of devices to support viewing event mining results. Multi-user multi-touch displays in public space support visualizing large data collections and enable people to interact at the same time \cite{Peltonen2008ItsCentre, Jacucci2010WorldsDisplay, Niu2017AnCollaboration}. This type of device supports connecting experiences between dyads or trios. Distributed devices such as mobile phones or computers allow more participants to access mining results on the internet, but this implementation needs to consider how to motivate social exchange between the collocated participants. \subsection{Reflection} Studying people's reflection on the group content requires understanding the phenomenon of how people perceive and interpret the ML-processed content. Factors of event mining which raise awareness of experience connections and trigger conversation need to be identified and understood \cite{Niu2017InvestigatingDisplays}. Evaluation methods are needed to assess people's reflective activities and the effectiveness of the data mining approaches. Summaries and deliverables of group content can benefit re-visiting and reminiscing about the attending experience. After events, the methods to archive group content and event activities should be designed to support long-term reflection. Event mining needs to record not only the group content, but also people's interactions during the events.
Interactive components including marks, notes, and interaction data could be used to build novel reflective materials for hybrid events. The construction of event mining systems should consider how to design for short- and long-term reflection on the group content. Design opportunities exist in converting the event mining results to take-away items such as mementos and event archives. \section{BLOGCLOUD: BLOG REFLECTION TOOL} In this section, we introduce \textit{BlogCloud}, an interactive blog reflection tool implemented with machine learning techniques and text visualization (Figure \ref{fig:interface}). BlogCloud is designed for the interaction stage, supporting participants in coming and interacting with the experience records in an event setting. BlogCloud seeks to provide overviews of experiential items, support organization and reminiscence of experiences, and allow searching for symbols in the temporally-ordered content. To support interaction in collocated space, BlogCloud is implemented with a dual-display design - a multi-touch tabletop and a vertical display. Though BlogCloud currently only supports interacting with one participant's blog set, it is a preliminary exploration of how people interact with the ML-processed content and generate new perspectives about experiences. \begin{figure} \includegraphics[width=0.43\textwidth]{figure/interface} \caption{The BlogCloud System. The tabletop supports blog searching and viewing. The vertical display shows a visualization.} \label{fig:interface} \end{figure} \begin{figure} \includegraphics[width=0.3\textwidth]{figure/blogcard} \caption{A paragraph card. Words with background color are recognized keywords.} \label{fig:blogcard} \end{figure} Experiential items in blogs are recorded in reverse chronological order \cite{Nardi:2004:WWB:1035134.1035163}. BlogCloud incorporates natural language processing (NLP) techniques to break the flow of blogs, and to chunk and clean the blogs into elementary experiential items. A \textit{paragraph} is used as the basic reflection unit, since paragraphs usually focus on a few inter-related experiential items that are lexically self-contained and map well for machine learning. For each paragraph, nouns, verbs, adjectives and adverbs are marked as keywords with coreNLP (an NLP toolkit for part-of-speech recognition). Digital cards present the processed paragraphs on the multi-touch tabletop (Figure \ref{fig:blogcard}). The cards can be moved, rotated and zoomed with multi-finger gestures. Zooming a card shows the blog paragraph in multiple levels of detail. Segmentation and term-weighting techniques enable the system to process experiential items in blogs. \par A word cloud is visualized on the vertical display for each paragraph group created by the user, with four visualization components: highlighted words, feature words, the number of similar documents, and average sentiment values (Figure \ref{fig:interface}). The highlighted words are presented with yellow borders. The size of the words is proportional to the weight of the words in the paragraph group. A count of similar paragraphs is presented in a circle. A strip with the number of associated blogs connects each cloud pair. The width of the strip is proportional to the number of blog paragraphs associated with both groups. \begin{figure} \includegraphics[width=0.47\textwidth]{figure/interface2} \caption{Top: Touch interface runs on the tabletop display. Bottom: Connected word cloud visualizing paragraph groups and connections between groups and search keys.
Up-right corner: Word cloud for one paragraph group.} \label{fig:interface2} \end{figure} \subsection{How Blog Authors Reflect} Two blog authors attended a workshop reception and used the tool. The third blog author was unable to attend the reception, so he used the system during a separate lab visit. The workshop had a hiking theme, and attendees were either lovers of outdoor activities or experts studying the trail. In the author session, each blog author spent 20-40 minutes using BlogCloud to explore his or her own hiking blogs. During the workshop, other attendees occasionally joined the conversation and engaged with the system. However, the blog author remained the primary user of the system throughout the session. \subsubsection{Reminiscing about particular experiences} During the author session, blog authors reminisced about particular experiences and shared them with other people around. When A1 searched the word ``climb", he noticed a paragraph about a gradually narrowing cliff path. A1 reminisced about the experience on the cliff path by telling a story. \textit{``It was a path on the cliff. The first time when I passed it, it was this wide,"} A1 opened his arms to show how wide it was. \textit{``But the second time when I was there it was barely my feet's width. I have to pass it with my body clinging to the cliff and be careful."} The particular experience of passing a terrifying cliff path re-presented this kind of ``climb" experience. A2 noted that reflecting on the word cloud surfaced memories of her experiences. She commented on the system: \textit{``the connections I got to explore and the way that the semantic connections surfaced specific memories... it was a really cool experience to see words connected that then acted as triggers to surface memories that I otherwise would not necessarily have been thinking about. The connections were the interesting part, much more than just reflecting on some experience alone."} Reading the visualization prompted reflection on particular experiences, which reflects that the re-presentation of a general experience turns into a particular one that represents that kind of experience. \par \subsubsection{Making sense of symbols} BlogCloud offers a different view of symbols for reflection. In the author session, blog authors had sense-making activities with their own blogs. The visualization of paragraph groups was used by A3 to make sense of symbols, as a way to reflect on his own blog from a different perspective. A3 used the visualization to compare recurring experiences. He searched ``difficult", ``rocky", ``steep", and ``hot", creating four corresponding paragraph groups. After reading the connecting strips (Figure \ref{fig:A3wordcloud}), A3 commented, \textit{``it makes sense to me since 'steep' is more related to 'rocky' and 'difficult' than 'hot'"}. When seeing ``bike" had more documents than ``scooter", he commented \textit{``[the visualization] reflects that I used bike} more than \textit{scooter for transportation"}. When speculating that ``cold" was mentioned more than ``hot" in his blogs, he paused for a while and said, \textit{``'cold' is bigger than 'hot', hmm... maybe that is because it took me more time hiking the colder north part, than the warmer south."} Though A1 did not specifically use BlogCloud to compare search results, he found sense-making important for reflection on personal content. He commented, \textit{``Somebody else did a grounded theory analysis of my blogs.
They surprised me as they (themes of blogs) were all about social experience whereas I said that the vast majority of the time I was alone. Although you have written the blog, it does not mean you fully grasp the big themes that run through it ... hence the need for sense-making tools."} \begin{figure}[!ht] \includegraphics[width=0.47\textwidth]{figure/A3wordcloud} \caption{Four word clouds created by A3.} \label{fig:A3wordcloud} \end{figure} \subsubsection{Recovering lost memory} Experience documentation has been recognized as the most prevalent purpose for blogging \cite{Nardi:2004:WWB:1035134.1035163}. However, considering the large number of blog posts and the time passed since their creation, less significant experiential items might be lost in memory, requiring effort to be re-generated. We noticed that all three blog authors found words in the word cloud or paragraphs on the cards that they did not recognize. But the blog authors did not skip these clues; they would collect more information to recover their lost memory. When the visualization showed ``dead" and ``porpoise" as keywords in one paragraph group, one attendee asked whether seeing a ``dead porpoise" made A1 sad. A1 could not remember where he had written about a porpoise, searched ``porpoise", and learned that he once thought the cracked wood looked like the ``snout of a dead porpoise". A3 searched the word ``Jersey" and several people's names came up. He searched a name among the ``Jersey" group and found the paragraph with those names. Through reading the paragraph he recalled the moment he met other hikers in New Jersey. \section{Conclusions and Future Directions} The CSCW community is entering an era in which data mining and artificial intelligence are incorporated in HCI and CSCW applications \cite{Dove:2017:UDI:3025453.3025739}. Connections during hybrid events are made not only through the channel of face-to-face conversation, but can also be ``mined" from participants' life records. This position paper explores \textit{event mining} on group content - using data mining techniques to process the life contents collected from event participants and visualizing the results to support connecting experiences. We explore the problem space and design space for event mining systems, and share our experience deploying an ML- and VIS-augmented blog exploration tool during a themed workshop. \par Moving forward, we will explore designs and applications which incorporate different types of life records to support interactive exploration of experiences. Design options to trigger conversations and support social exchanges will be compared to contribute new knowledge to support hybrid events. Meaningful reflection activities such as reminiscence, sense-making and memory recovery will be further examined with different group content and group content representations. Following the empirical and constructive problem structure, we seek to gain a deeper understanding of design opportunities and challenges in utilizing data mining techniques to support hybrid events.
\section{Introduction} \label{sec:intro} Explaining the accelerated expansion of the Universe \cite{Riess:1998cb,Perlmutter:1998np} with a cosmological constant $\Lambda$ requires an unacceptable amount of fine tuning, due to the extreme smallness of the observed density of dark energy $\rho_\Lambda\sim2.5\cdot10^{-47}$\,GeV$^4$ \cite{Peebles:2002gy,Martin:2012bt,Sola:2013gha,Aghanim:2018eyx}. Here we define fine tuning as the careful adjustment of initial conditions, couplings or differences of couplings, to very large or very small non-zero values. A possible and popular mechanism for alleviating this fine tuning is to construct a theory where the energy of the vacuum is strictly zero, forbidding a $\Lambda$ term in the Lagrangian. For example, this is often achieved by invoking some additional (often unspecified) symmetry. The current observation of an approximately constant density of dark energy is then explained by the gradual dynamics of fields \cite{Copeland:2006wr}. Similar to inflation, many models for dark energy contain slowly-rolling scalar fields \cite{Wetterich:1987fm,Ratra:1987rm}. These are usually referred to as ``quintessence'' models. However, they frequently suffer from issues related to fine tuning and naturalness. Even if not always the case \cite{Copeland:1997et,Ferreira:1997hj,Zlatev:1998tr,Steinhardt:1999nw}, predictions are often highly sensitive to the initial conditions. Successful quintessence generally requires very small (i.e.\ fine-tuned) parameters, such as masses of the order of $m\sim10^{-33}$\,eV. Similarly, couplings of the quintessence field to other fields, which should otherwise be allowed by the known gauge and Lorentz symmetries of the Standard Model (SM), must also be heavily suppressed or screened \cite{Khoury:2003aq,Khoury:2003rn,Brax:2004qh,Hinterbichler:2010es} in order to avoid fifth-force constraints [\citenum{Adelberger:2003zx,Will:2005va,Adelberger:2009zz}; see also recent experimental proposals \citenum{Burrage:2014oza,Sabulsky:2018jma}]. A rather unexpected connection was recently discovered \cite{Dimopoulos:2018eam} between the observed dark energy density $\rho_\Lambda$, cosmic inflation and electroweak symmetry breaking: \ee{\rho_\Lambda\approx\f{v^8}{{\mathcal P}_\zeta M_{\rm P}^4}\,,\label{eq:conspiracy}} where ${\mathcal P}_\zeta\approx2.2\cdot10^{-9}$ is the observed amplitude of the spectrum of primordial perturbations at microwave background scales \cite{Ade:2015xua}, $M_{\rm P}$ is the {reduced} Planck mass and $v = 246$\,GeV is the vacuum expectation value (VEV) of the Higgs field $h$. The relation (\ref{eq:conspiracy}) is more than just a curious coincidence between parameters; in many models it arises as the magnitude of the potential energy at electroweak symmetry breaking (EWSB) left over from the interplay of the Higgs boson and the inflaton \cite{Dimopoulos:2018eam}. Unfortunately, (\ref{eq:conspiracy}) is not a panacea for all fine-tuning problems of quintessence. Even if (\ref{eq:conspiracy}) can generate the scale of dark energy, successful quintessence still requires a small enough effective mass, the suppression of couplings to other fields, and fine-tuned initial conditions. The main result of this work is to show that all these issues can be naturally resolved in a two-field model, whilst simultaneously explaining the observed magnitude of dark energy via (\ref{eq:conspiracy}). We use positive metric, Riemann and Einstein sign conventions $(+,+,+)$ \cite{Misner:1974qy}. 
\section{Dark energy scale from inflation and EWSB} \label{sec:scale} The tree-level potential for the Higgs at $T=0$ can be conveniently parameterised as \ee{V(h)=\f{\lambda}{4}\left(h^2-v^2\right)^2\,,\label{eq:Hig}} where the total vacuum energy {-- including the zero-point quantum fluctuations --} is assumed to strictly vanish at $h=v$, in accordance with the discussion in the introduction, with the hope that it has a more fundamental motivation in a complete theory. Crucially, the potential energy from the Higgs changes at EWSB. Above the critical temperature $T_{\rm EW}$, at which the mass parameter of the Higgs potential satisfies $m^2_H(T_{\rm EW}) = 0$, the potential is minimised at $h = 0$. At $T < T_{\rm EW}$, we see that $m^2_H(T < T_{\rm EW}) < 0$, such that the potential recovers its familiar Mexican-hat shape and $h = 0$ is rendered a local maximum \cite{Arnold:1992rz} \ee{V(h_{\rm min})=\begin{cases}V(0)=\f{\lambda}{4}v^4\,, &T>T_{\rm EW}\\V(v)=0\,, &T<T_{\rm EW} \end{cases}\,.\label{eq:change}} Consider the following action $S=\int d^4x\sqrt{-g}\, \mathcal{L}$ for the inflaton $\phi$, where \ea{-\mathcal{L}& = \f{1}{2}\partial_\mu\phi\partial^\mu\phi+U(\phi,h)\label{eq:L}\\ & = \f{1}{2}\partial_\mu\phi\partial^\mu\phi+\f{1}{2}m^2\phi^2+{c}\f{\phi}{M_{\rm P}}V(h)\,.\nonumber} In addition to the usual quadratic piece defining the bare inflaton mass $m$, the potential of this theory possesses a linear Planck-suppressed coupling of the inflaton to the Higgs, with strength characterised by the constant $c$. Before EWSB ($T>T_{\rm EW}$), the Higgs has non-zero vacuum energy $V(0) =(\lambda/4)v^4$, slightly displacing the minimum of the inflaton potential from the origin, \ee{U'(\phi_0,0)=0\quad \Leftrightarrow\quad \phi_0=- \f{cV(0)}{M_{\rm P}m^2}=-\f{c\lambda v^4}{4M_{\rm P}m^2}\,.\label{eq:min}} After EWSB ($T<T_{\rm EW}$), once the Higgs reaches its (new) minimum $h=v$, a second rolling of the inflaton is triggered starting from $\phi = \phi_0$. {Choosing $|c| \approx 1.7 $ and using $M_{\rm P}=2.43 \cdot 10^{18} \, \mbox{GeV}$, $\lambda = 0.129$ and $v = 246 \, \mbox{GeV}$, we see that for the quadratic model of inflation, where $m\approx6\cdot10^{-6}M_{\rm P}\sim 10^{-1}\sqrt{{\mathcal P}_\zeta}M_{\rm P}$, the initial value of the potential energy at the beginning of this rolling phase is \ee{ {U(\phi_0,v)}=\f{c^2\lambda^2 v^8}{32M_{\rm P}^2m^2}\approx \frac{ (10 \, c \, \lambda \, v^4)^2}{32 M_{\rm P}^4 \mathcal{P_\zeta} } \approx 2.5 \cdot 10^{-47} \, \mbox{GeV}^4\,,\label{eq:vac_e}} which} agrees with the observed value of $\rho_\Lambda$ \cite{Aghanim:2018eyx}. Dark energy with the magnitude (\ref{eq:conspiracy}) is thus a direct prediction of inflation and electroweak physics in this setup, as long as the potential energy does not change significantly after EWSB. The relation (\ref{eq:vac_e}) is a consequence of the specific linear coupling to the Higgs $\f{\phi}{M_{\rm P}} V(h)$. Here we tacitly stay within the vicinity of the minimum $\phi_0$, where the linear term dominates and for now assume that problems such as boundedness of the potential are resolved by additional, unspecified, higher-order terms. In Sec.\ \ref{sec:mr} we present a realistic scenario that indeed has no such issues. The symmetries of the theory also permit a portal term $\phi^2 h^2$; however, such a term has only a very small impact on the end result, as the mass of the inflaton is significantly larger than the electroweak scale.
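As a quick numerical sanity check of (\ref{eq:conspiracy}) and (\ref{eq:vac_e}), the short script below (our illustration; the variable names are ours, and the inputs are the values quoted above) evaluates both expressions:

```python
# Order-of-magnitude check of Eqs. (1) and (6); all dimensionful
# quantities are in GeV. Inputs are the values quoted in the text.
import math

Mp  = 2.43e18                    # reduced Planck mass
v   = 246.0                      # Higgs VEV
lam = 0.129                      # Higgs self-coupling
Pz  = 2.2e-9                     # amplitude of primordial perturbations
c   = 1.7                        # linear Higgs-inflaton coupling
m   = 0.1 * math.sqrt(Pz) * Mp   # inflaton mass, m ~ 1e-1 sqrt(Pz) Mp

rho_conspiracy = v**8 / (Pz * Mp**4)                 # Eq. (1)
U0 = c**2 * lam**2 * v**8 / (32 * Mp**2 * m**2)      # Eq. (6)
print(f"Eq. (1): {rho_conspiracy:.1e} GeV^4")        # ~ 1.8e-46
print(f"Eq. (6): {U0:.1e} GeV^4")                    # ~ 2.6e-47
```

The $\mathcal{O}(0.1)$ prefactor $(10\,c\lambda)^2/32 \approx 0.15$ relating the two outputs is what brings (\ref{eq:conspiracy}) down to the observed $\rho_\Lambda \approx 2.5\cdot10^{-47}$\,GeV$^4$.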
On the other hand, a linear term of the form $\phi M^3$ with a scale $M^3\gtrsim \f{1}{M_{\rm P}} V(0)\sim \f{v^4}{M_{\rm P}}$ would spoil the relation (\ref{eq:vac_e}). Qualitatively the same features as in (\ref{eq:L}) are present in the Starobinsky model of inflation \cite{Starobinsky:1979ty}, with the added bonus that a coupling between the Higgs and the inflaton is generated automatically \cite{Dimopoulos:2018eam}. Unfortunately, even if the dark energy density $\rho_\Lambda$ somewhat miraculously arises as a function of known and observable scales, the inflaton remains much too heavy to act as a quintessence field. For it to roll slowly today, its mass must be much smaller than the current Hubble rate, an extremely small number $H_0\sim {\sqrt{\rho_\Lambda}}/{M_{\rm P}}\sim 10^{-33}$\,eV. Fortunately, there are ways around this problem. \section{Slow-roll from a non-canonical kinetic term} \label{sec:sr} The general idea of ameliorating the problems of quintessence with non-canonical kinetic terms has existed for some time \cite[see e.g.][]{Chiba:1999ka,Hebecker:2000au,Hebecker:2000zb,ArmendarizPicon:2000ah,ArmendarizPicon:2000dh}. In this vein, let us now modify the theory characterised by the action (\ref{eq:L}) by taking the kinetic term to have a non-canonical form \ee{(\partial_\mu \phi)^2\quad\longrightarrow\quad\bigg(\f{M_{\rm P}}{\phi}\bigg)^2(\partial_\mu \phi)^2\equiv (\partial_\mu \chi)^2\,,\label{eq:alpha}} with the same notation as introduced in the Appendix. {In the minimum defined by (\ref{eq:min}), the potential after EWSB now has precisely the behaviour required for quintessence of the thawing variety \cite{Caldwell:2005tm}, $\tilde{U}(\chi_0,v)\sim\rho_\Lambda$ and $\tilde{U}''(\chi_0,v)\sim H_0^2$.} By introducing an $O(1)$ coupling in front of the non-canonical kinetic term (\ref{eq:alpha}), one may evade current observational bounds on the equation-of-state parameter of dark energy \cite{Dimopoulos:2018eam}. We explore these bounds quantitatively for our two-field model in Sec.\ \ref{sec:ar}. The modification (\ref{eq:alpha}) not only makes the mass of $\chi$ effectively small (see Appendix), but also suppresses all interactions with other fields. Consider, for example, a Yukawa coupling of the form $g\phi\bar{\psi}\psi$. Close to the minimum (\ref{eq:min}) in the canonical variable $\chi$, the effective Yukawa coupling is $\tilde{g}\sim g\sqrt{\rho_\Lambda/(m^2 M_{\rm P}^2)}$, which leads to a negligible interaction strength. This is very similar to the suppression of interactions that occurs in $\alpha$-attractor models of inflation \cite{Kallosh:2016gqp,Dimopoulos:2017tud}. Unfortunately, the term (\ref{eq:alpha}) also makes it extremely difficult for the inflaton field to reach the minimum (\ref{eq:min}) in the first place. In the early Universe, a very light field will be stopped by Hubble friction long before ever reaching such a minimum. Decay via interactions (required for reheating) is also virtually impossible, due to the manifest suppression of all interactions, as in our earlier Yukawa example. This is not a problem unique to our scenario, but rather a common feature of quintessence models (albeit not all such models, as discussed in Sec.\ \ref{sec:intro}). Avoiding this usually requires very careful fine-tuning of the initial field conditions. To avoid this issue more naturally, we need a mechanism that first allows the field to reach the minimum unhindered, and only then triggers the non-canonical kinetic behaviour (\ref{eq:alpha}).
In Ref.\ \cite{Dimopoulos:2018eam}, this was dubbed the {\it bait-and-switch} mechanism, and several examples were also provided. The most natural one comes by coupling (\ref{eq:alpha}) with the Higgs, such that the kinetic term is canonical up until EWSB, providing ample time for the relaxation into the minimum. Although not problematic at the classical/mean-field level, coupling the kinetic term to the Higgs introduces interactions that are likely already excluded by collider bounds (on e.g.\ invisible Higgs decays). The other examples provided in Ref.\ \cite{Dimopoulos:2018eam} were more for illustrative purposes, and not expected to be easily realised in top-down approaches. In Ref.\ \cite{Wang:2018kly}, a mechanism designed to extend that of Ref.\ \cite{Dimopoulos:2018eam} was presented, requiring a very large non-minimal coupling between the Higgs and the scalar curvature of gravity, of the order $|\xi|\sim 10^{32}$. \section{A two-field model: assisted relaxation} \label{sec:ar} So far we have only discussed quintessential inflation \cite{Peebles:1998qn}, where inflation and dark energy are given by the same field $\phi$. The mechanism that we now present relies on the relation (\ref{eq:conspiracy}) and the stretching of the potential by a non-canonical kinetic term of the form (\ref{eq:alpha}), but with the crucial difference that it includes a second field in addition to the inflaton. It is this additional field that sources dark energy in the late Universe, in contrast to Ref.\ \cite{Dimopoulos:2018eam}. What we will show is that the dynamics of the inflaton can assist and effectively force the relaxation of a field with a flat potential into the pre-EWSB minimum. This provides a natural resolution of the initial-value problem, leading to a model that does not suffer from {\it any} of the usual fine-tuning issues of quintessence. Models involving multiple quintessence fields have been previously presented in the literature; see for example \cite{Barreiro:1999zs,Fujii:1999fc,Masiero:1999sq,Blais:2004vt,Kim:2005ne,Elizalde:2008yf,vandeBruck:2009gp,Akrami:2017cir}. Many-field models with a connection to Early Universe inflation, as in our set-up, are less common (although not absent \cite{Akrami:2017cir,Elizalde:2008yf}), as are embeddings in more fundamental theories. However, although our study also does not incorporate a fundamental theoretical framework, the novel connection between electroweak theory and inflation potentially provides a clue that can aid in finding top-down manifestations of our mechanism. Our mechanism likely has many manifestations. We will first focus on the simple Lagrangian $\mathcal{L} = \mathcal{L}_\phi + \mathcal{L}_\sigma$, where $\phi$ is the inflaton, $\sigma$ the quintessence field, $-\mathcal{L}_\phi \equiv \f12\partial_\mu\phi\partial^\mu\phi +\f12m^2\phi^2$ and \ea{-\mathcal{L}_\sigma& = b^2\f{1}{2}\f{M_{\rm P}^2}{\phi^2+\sigma^2}\partial_\mu\sigma\partial^\mu\sigma + \f{1}{2}m^2\sigma^2+{c}\f{\sigma}{M_{\rm P}}V(h) \nonumber \\ & \equiv \f12 \partial_\mu\chi\partial^\mu\chi+\tilde{U}_\sigma(\chi,h)\,.\label{2field}} The constants $b$ and $c$ are $\mathcal{O}(1)$ dimensionless couplings. Crucially, we assume that $\sigma$ couples to the Higgs similarly to (\ref{eq:L}), letting it acquire a non-zero VEV before EWSB. For simplicity, we choose the same mass $m$ for $\phi$ and $\sigma$, as we expect that both parameters are sourced from similar physics --- but their exact masses can differ without introducing the need for any fine tuning. 
Note also that our mechanism does not rely on the quadratic form for the inflaton potential (which we have also chosen for simplicity), and works equally well for any other theory of inflation. Non-canonical kinetic terms can be interpreted as a metric $G$ in field space, i.e. $\mathcal{L} \supset G^{IJ} \partial_\mu\Phi_I \partial^\mu\Phi_J$, with $\Phi$ the fields ($\Phi = [\phi,\sigma]$ in our case). As such, the signature of $G$ cannot be modified by a change of coordinates (in this case field redefinitions), provided the field-space metric is invertible. Throughout, we will consider models where only one of the fields possesses a non-canonical kinetic term. This means that $G$ is always diagonal. If the non-canonical kinetic term is always positive (as it is in all our examples), it then follows that there are no ghosts in the models that we present. If $\phi > M_{\rm P}$ during inflation, the non-canonical kinetic term in (\ref{2field}) effectively makes $\sigma$ heavy enough to roll, even when close to $\sigma=0$ (see Appendix). For a large range of initial conditions, if inflation lasts long enough $\sigma$ will therefore be driven towards its minimum by the inflaton. We call this {\it assisted relaxation}. When inflation has ended and the Universe has reheated, $\phi=0$, the same kinetic term makes $\sigma$ exponentially light. This allows $\sigma$ to source dark energy precisely as discussed in Sec.\ \ref{sec:sr}. As shown in Sec.\ \ref{sec:scale}, a linear coupling to the Higgs potential and a tree-level mass of the order required for successful inflation ($m\sim10^{-6}M_{\rm P}$) leads to an excellent match to observations. Indeed, this is the mass scale that should be chosen for $\sigma$ in order to avoid spoiling (\ref{eq:conspiracy}), regardless of the actual inflaton potential. Let us explicitly show the mechanism at work for the model (\ref{2field}). Deep in inflation, $\phi\gg M_{\rm P}$, and (neglecting the derivatives of $\phi$ in the change of variables, as they are subleading in the slow-roll expansion) \ea{-\mathcal{L}_\sigma & \approx \f{1}{2}\bigg(\f{b M_{\rm P}}{\phi}\bigg)^2\partial_\mu\sigma\partial^\mu\sigma + \f{1}{2}m^2 \sigma^2+c\f{\sigma}{M_{\rm P}}V(0) \label{2field2} \\\label{2field30} & \equiv \f{1}{2}\partial_\mu\chi\partial^\mu\chi + \frac12 \f{m^2}{(bM_{\rm P})^2}\phi^2\chi^2+c\f{\lambda v^4}{4bM_{\rm P}^2}\phi\chi\,,} where $\sigma \sim \phi\chi/(bM_{\rm P})$. An important assumption in (\ref{2field2}) is that the Higgs is at $h=0$. This is not particularly constraining, as it can be arranged via a portal coupling to the inflaton or a non-minimal coupling to curvature. Solving the equation of motion for $\sigma$ (or $\chi$) from (\ref{2field2}) whilst holding $H$ and $\phi$ constant, we see that the average energy density of the quintessence field, $\rho_\sigma\sim \f12 m^2\sigma^2$, dilutes approximately as either $\rho\propto e^{-3N}$ or $\rho\propto e^{-\f{4}{b^2}N}$, where $N=H t$ is the number of $e$-folds since the beginning of inflation. Here we have made the approximation that the energy density is completely dominated by the inflaton. The first case corresponds to a heavy field oscillating around its minimum, i.e.\ acting as normal matter \cite[see e.g.][]{Turner:1983he}. The second case corresponds to a light field slow-rolling down its potential.
The two cases are approximately distinguished by whether or not the slow roll parameter \ee{\eta_\sigma=\f{\tilde{U}_\sigma''(\chi,0)}{3H^2}=\f{\f{\phi^2}{(bM_{\rm P})^2}m^2}{{\f{1}{2}\f{m^2}{M_{\rm P}^2}\phi^2}}=\f{2}{b^2}\,} is larger (leading to $\rho\propto e^{-3N}$) or smaller than unity (leading to $\rho\propto e^{-\f{4}{b^2}N}$). As a representative initial condition, consider $\sigma=M_{\rm P}$. Setting $b=1$, leading to a coherently oscillating $\sigma$, we have the approximate behaviour $\rho \propto \sigma^2 \propto e^{-3N}$, so { $\sigma$ approaches the minimum $\sigma_0$ as \ea{\sigma - \sigma_0 &= \sigma + \f{c\lambda v^4}{4M_{\rm P}m^2} =\left(M_{\rm P}+\f{c\lambda v^4}{4M_{\rm P}m^2}\right) e^{-\f{3N}{2}}\nonumber \\&\approx M_{\rm P}e^{-\f{3N}{2}}\,.}} This translates into a minimal required duration of inflation for our mechanism to work, as inflation must continue long enough for the quintessence field to relax into the minimum. In terms of $e$-folds, this is \ee{e^{-\f{3N}{2}}\lesssim \f{|c|\lambda v^4}{4M_{\rm P}^2m^2}\sim \f{\sqrt{\rho_\Lambda}}{M_{\rm P}m}\Rightarrow {\rm for}\ |c| = 1,\ N \gtrsim {84}\,. \label{eq:efolds}} Importantly, this number is not very sensitive to the initial condition for $\sigma$ in units of $M_{\rm P}$. Observations require only that inflation extends over at least 50--60 e-folds, and in many theories it continues for much longer. Our model is therefore naturally consistent with both theoretical expectations and observational bounds on $N$. Inflation is therefore generically expected to drive the $\sigma$ field to its minimum, for a wide range of initial conditions. Note also that although we have assumed an initial hierarchy $\phi\gg\sigma$, this can be expected to arise more or less automatically: as visible in (\ref{2field2}), the effective mass for $\chi$ is generally larger than $m$ deep into inflation, making $\sigma$ roll faster than $\phi$ and hence quite generically leading to $\phi\gg\sigma$. We chose not to include a portal term $\sim \phi^2\sigma^2$ in (\ref{2field}), however such a term can be added: when $\phi^2\sigma^2\gtrsim m^2\sigma^2$, it pushes the minimum for $\sigma$ closer to the origin, from which $\sigma$ rolls towards $\sigma_0$ when $\sigma_0\lesssim\phi\lesssim m$. Suppose $b^2=10$ instead, corresponding to the case of a light, slowly-rolling field. The left-hand side of (\ref{eq:efolds}) then becomes $e^{-\f15 N}$, leading to the requirement that $N\gtrsim {633}$. Again, this is not an unrealistic number. Once the inflaton has decayed, $\sigma$ becomes exponentially light due to the pole $\sim\sigma^{-2}$ in the kinetic term and is frozen at $\sigma_0$, unaffected by the thermalization of the Higgs. After EWSB, when the Higgs has rolled to its minimum, \ea{-\mathcal{L}_\sigma & \approx \f12\f{b^2 M_{\rm P}^2}{\sigma^2}\partial_\mu\sigma\partial^\mu\sigma+\f12m^2 \sigma^2\\ & \equiv \f12\partial_\mu\chi\partial^\mu\chi + \f12{m^2}{M_{\rm P}^2}e^{\f{2\chi}{b M_{\rm P}}}\,,\label{2field3}} with the initial condition at EWSB \ee{\sigma_0=\pm M_{\rm P}e^{\f{\chi_0}{b M_{\rm P}}}=-\f{c\lambda v^4}{4M_{\rm P}m^2}\,,\label{eq:chi0} } where ``$\pm$'' refers separately to the cases where $c>0$ ($-$) or $c<0$ ($+$). 
Choosing $|c|\approx 2.1$ and $m\approx 6 \cdot10^{-6}M_{\rm P}$, and again taking $M_{\rm P}=2.43 \cdot 10^{18} \, \mbox{GeV}$, $\lambda = 0.129$ and $v = 246 \,$GeV, this gives \ee{\rho_\Lambda=\tilde{U}_\sigma(\chi_0,v)=\f12m^2\sigma_0^2=\f{c^2\lambda^2 v^8}{32M_{\rm P}^2m^2}\approx2.5\cdot10^{-47}{\rm GeV}^4\,.} At early times, to a very good approximation $w=-1$ and $\rho_\Lambda$ is constant. The present-day deviation from $w=-1$ can be parameterised in standard fashion as \ee{w \equiv -1 + \delta w = \f{X^2 - 6}{X^2 + 6}\,;\quad X \equiv M_{\rm P}\f{\tilde{U}'_\sigma(\chi,v)}{\tilde{U}_\sigma(\chi,v)}=\f{2}{b}\,.} Taking e.g.\ $b = 1$ leads to $w = -0.2$, and $b^2 = 10$ gives $w = -0.875$. The current observational bound of $\delta w \lesssim 0.12$ at 90\% confidence \cite{Abbott:2018wog} means that our scenario is consistent with existing constraints for $b\gtrsim 3.2$. Light fields on an inflating background are known to generate large quantum fluctuations $\delta \chi$, which for a field with a mass $M$ can be shown to saturate with the root-mean-square fluctuation $\sqrt{\langle\delta \chi^2\rangle} \sim H^2/M$. On the other hand, massless fields have fluctuations that grow without bound \cite{Starobinsky:1994bd} as $\sqrt{\langle\delta \chi^2\rangle} \sim \sqrt{N} H$. From (\ref{eq:chi0}) we see that the minimum in the canonical variable occurs at $\chi_0\lesssim -100 M_{\rm P}$, meaning that quantum fluctuations are generally not large enough to significantly displace the field: the effective mass when $\phi \gtrsim M_{\rm P}$ can be read from (\ref{2field30}), giving $\sqrt{\langle\delta \chi^2\rangle}\sim m\phi/M_{\rm P}$. Furthermore, inflation is not expected to last for more than a few $e$-folds in the field regime where the effective mass is small ($\phi \lesssim M_{\rm P}$). \section{A more realistic model} \label{sec:mr} When writing (\ref{2field}), we simply assumed that $\sigma$ is coupled to the SM only through a very particular Planck-suppressed linear coupling to the Higgs potential. From a model-building point of view, however, this interaction is also somewhat non-trivial to achieve. Specifically, similar couplings to the inflaton linear in $\sigma$ will likely spoil the mechanism. In this section we will discuss a model where the coupling structure is better justified. The model that we will present here also possesses a symmetry in the UV limit between the inflaton $\phi$ and quintessence field $\sigma$, giving further motivation for the idea that $\phi$ and $\sigma$ may have similar masses. { It also possesses a more realistic non-canonical kinetic term, as we will elaborate in Sec.\ \ref{sec:conc}.} Let us postulate the following Lagrangian \ea{&-{\mathcal L} = -\f12 M_{\rm P}^2 e^{c\f{\sigma}{M_{\rm P}}}R+\f{3}{4}c^2e^{c\f{\sigma}{M_{\rm P}}}\partial_\mu\phi\partial^\mu\phi \label{eq:promo} \\ & + \f12 b^2 e^{-\phi/M_{\rm P}}\left( e^{-\f{\sigma}{M_{\rm P}}}-1\right)^{-2}\partial_\mu\sigma\partial^\mu\sigma +V(h)+\cdots \nonumber \\ & +\alpha^2M_{\rm P}^4\left[\left( e^{-\f{\phi}{M_{\rm P}}}-1\right)^2+\left( e^{-\f{\sigma}{M_{\rm P}}}-1\right)^2\right]\left(e^{c\f{\sigma}{M_{\rm P}}}\right)^2\,,\nonumber} where the dots signify all SM contributions not written explicitly, { which we drop from now on}. Here we choose $\alpha$ such that $\phi$ leads to successful inflation, and $b$ and $c$ are dimensionless couplings similar to those in (\ref{2field}).
{ Terms similar to the first one are often present in those scalar-tensor theories where the Planck mass is promoted to a function of a field, e.g.\ \cite{Clifton:2011jh}, and will introduce a linear coupling to the Higgs potential, which is manifest in the Einstein frame.} Denoting Einstein-frame quantities with an overline, {we use the standard relations \footnote{Here we drop a $\Box$ term as the space is unbounded.}, \begin{equation} g_{\mu\nu}=\Omega^{-2}\overline{g}_{\mu\nu}~~\Rightarrow~~ R/\Omega^2=\overline{R}-\f{3}{2}\overline{g}^{\mu\nu}\partial_\mu(\ln \Omega^2)\partial_\nu(\ln \Omega^2),\nonumber \end{equation} with $\Omega^2=e^{c\f{\sigma}{M_{\rm P}}}$, to write (dropping the overlines) \ea{&-{\mathcal L} = -\f12 M_{\rm P}^2 R+\f34 c^2 \Big(\partial_\mu\phi\partial^\mu\phi + \partial_\mu\sigma\partial^\mu\sigma \Big) \label{eq:promo0}\\ & + \f12 b^2 e^{-\f{\phi}{M_{\rm P}}}\left(e^{-\f{\sigma}{M_{\rm P}}}-1\right)^{-2}e^{-c\f{\sigma}{M_{\rm P}}}\partial_\mu\sigma\partial^\mu\sigma \nonumber \\ & + \alpha^2M_{\rm P}^4\left[\left( e^{-\f{\phi}{M_{\rm P}}}-1\right)^2+\left( e^{-\f{\sigma}{M_{\rm P}}}-1\right)^2\right] + e^{-\f{2c\sigma}{M_{\rm P}}}V(h). \nonumber } When} $\sigma\lesssim M_{\rm P}$ the last term introduces a linear coupling to the Higgs potential as required. The term $(e^{c\frac{\sigma}{M_{\rm P}}})^2$ multiplying the last line of \eqref{eq:promo} automatically avoids couplings between $\phi$ and $\sigma$ in the Einstein frame. We do this for simplicity, as in that case the mechanism follows exactly as in our first example in Sec.\ \ref{sec:ar}. Such couplings do not necessarily spoil the mechanism, but a detailed analysis would be necessary for specific realisations. \textit{The limit $\phi\gg M_{\rm P},\sigma\gg M_{\rm P}$.} In this limit the theory (\ref{eq:promo0}) simplifies approximately to \ea{-{\mathcal L} & \approx-\f12 M_{\rm P}^2 R+\f34 c^2 \left(\partial_\mu\phi\partial^\mu\phi+\partial_\mu\sigma\partial^\mu\sigma\right)\label{eq:rmod2}\\ & \phantom{=} + \alpha^2M_{\rm P}^4\left[\left( e^{-\f{\phi}{M_{\rm P}}}-1\right)^2+\left( e^{-\f{\sigma}{M_{\rm P}}}-1\right)^2\right]\,,\nonumber} so in this region we have inflation from both fields. We take the $\phi\leftrightarrow\sigma$ symmetry in this limit as motivation for choosing the mass scale $\alpha M_{\rm P}$ to be the same for both fields. \textit{The regime $\phi\gg M_{\rm P}, \sigma\lesssim M_{\rm P}$.} In this case, \ea{-{\mathcal L}&\approx-\f12 M_{\rm P}^2 R+\f34 c^2 \left(\partial_\mu\phi\partial^\mu\phi+\partial_\mu\sigma\partial^\mu\sigma\right) +\alpha^2M_{\rm P}^2\sigma^2\nonumber \\ & \phantom{=} + \alpha^2M_{\rm P}^4\left( e^{-\f{\phi}{M_{\rm P}}}-1\right)^2 +\left({1}-c\f{2\sigma}{M_{\rm P}}\right)\f{\lambda v^4}{4} \label{eq:rmod3}\,.} In this regime, we have Starobinsky-type single-field inflation from $\phi$, which successfully matches observations for $\alpha\sim 10^{-5}$. Furthermore, the $\sigma$ field will be driven towards the minimum at $\sigma_0= c\lambda v^4/(4\alpha^2 M_{\rm P}^3)$. The condition for slow-roll of $\sigma$ is $\eta_\sigma \ll 1$. Taking the canonically-normalised field $\chi = \f{\sqrt{3}}{2}c\sigma$, we see that $\eta_\sigma =\f{8}{3 c^2}$ or that the field rolls slowly only for $c\gtrsim 1$. For (\ref{eq:rmod3}) to be valid, the pre-factor in front of the quintessence field's kinetic term $(\partial\sigma)^2$ in the {second} line of (\ref{eq:promo0}) must not grow large and flatten the potential when $\sigma\rightarrow\sigma_0$.
We can use this to place a crude lower bound on the initial value of $\phi$: { \ea{ \f12 b^2 e^{-\f{\phi}{M_{\rm P}}}\left(e^{-\f{\sigma_0}{M_{\rm P}}}-1\right)^{-2}e^{-c\f{\sigma_0}{M_{\rm P}}}&\approx e^{-\f{\phi}{M_{\rm P}}}\f{8\alpha^4 M_{\rm P}^8}{c^2\lambda^2 v^8}\lesssim1 \nonumber\\ \implies \phi \gtrsim \ln\left(\f{M_{\rm P}^4}{4\rho_\Lambda}\right) M_{\rm P} &\sim 276\, M_{\rm P}\,.\label{eq:phibo}}} We see that, unlike in the scenario in Sec.\ \ref{sec:ar}, we require a mild hierarchy amongst the initial field values for the model to work, with $\phi\sim{\mathcal O}(100)M_{\rm P}$ for $\sigma\sim M_{\rm P}$. \textit{The regime $\phi= 0, \sigma\ll M_{\rm P}$.} After the inflaton decays, the $\sigma$-dependent terms are{ \ee{-{\mathcal L}_\sigma \approx \f12 \left(\f{b M_{\rm P}}{\sigma}\right)^2\partial_\mu\sigma\partial^\mu\sigma+ \alpha^2M_{\rm P}^2\sigma^2-\f{2c\sigma}{M_{\rm P}}V(h) \label{eq:rmod4}\,,} which} leads to qualitatively identical behaviour to (\ref{2field3}) in the late Universe: the quintessence field will start slow-rolling from $\sigma_0$ following EWSB, sourcing dark energy in the late Universe in a manner consistent with observations if $b$ and $c$ are set to appropriate $\mathcal{O}(1)$ numbers. Equation (\ref{eq:phibo}) indicates a very long period of inflation with a practically massless field. Repeating the steps laid out at the end of Sec.\ \ref{sec:ar} shows that quantum fluctuations larger than the classical displacement are in fact generated in this model. However, they can be avoided by taking $A$ to be an $\mathcal{O}(1)$ number and making the simple modification $\exp(-\phi/M_{\rm P}) \to \exp(-A\phi/M_{\rm P})$ in the second line of (\ref{eq:promo}). All the arguments given in this section remain unchanged under this replacement. Taking e.g.\ $c=1$ and $H=10^{11}$\,GeV, this modification lowers the hierarchy in (\ref{eq:phibo}) enough to avoid large quantum corrections when $A\gtrsim5$. Finally, it is worth noting that even \textit{were} large corrections generated during inflation, their only impact would be to shift $\sigma_0$, the effective value that the field is driven to during inflation. In that case, after inflation as the Universe cools and the ratio of the fluctuation correlation length to horizon scale decreases, by the time of EWSB the field will simply have relaxed gradually to the same minimum $\sigma_0$ as we have assumed in this section, changing none of the features of the model. \section{Conclusions} \label{sec:conc} The mechanism that we present successfully explains the strength of dark energy, via the inflation-assisted relaxation of a quintessence field and electroweak symmetry breaking. It does this without the fine tuning that traditionally dogs quintessence: the need for a small effective mass, the initial value problem, and the need to forbid interactions of the quintessence field with SM fields. Our mechanism therefore poses significant interest for model building. We also showed that the critical aspects of the theory are expected to be achievable in well-motivated top-down constructions. In particular, the form of the kinetic term that we propose in (\ref{eq:promo}) is likely more realistic than the one postulated in (\ref{2field}).
Indeed, theories beyond Einstein gravity can be generically parameterised in the form \cite{Maeda:1988ab,DiMarco:2002eb} \ee{\mathcal{L}_{\rm GE}=\f{M_{\rm P}^2}{2}R-\f{1}{2}\partial_\mu\phi\partial^\mu\phi-e^{-2F(\phi)}\f{1}{2}\partial_\mu\sigma\partial^\mu\sigma-U(\phi,\sigma)\,.} For the appropriate choice of $F(\phi)$ and potential, this clearly leads to the same qualitative features as (\ref{eq:promo0}). Finding such a UV-complete theory that realises our mechanism would be of substantial interest, and therefore should constitute a high priority for future work. \begin{acknowledgments} { We are grateful to Andrew Tolley and Matthew Roberts for helpful discussions, and to STFC (ST/K00414X/1, ST/N000838/1, ST/P000762/1) and the Estonian Research Council (Mobilitas Plus MOBJD323) for funding support. } \end{acknowledgments}
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{intro.pdf} \caption{Comparison between our approach and existing AU graph-based approaches: (a) \textbf{pre-defined AU graphs} that use a single topology to define AU association for all facial displays; (b) \textbf{Facial display-specific AU graphs} that assign a unique topology to define AU association for each facial display. Both (a) and (b) use a single value as an edge feature; (c) \textbf{Our approach} encodes a unique AU association pattern for each facial display in node features, and additionally describes the relationship between each pair of AUs using a pair of multi-dimensional edge features.} \label{fig:intro} \end{figure} \noindent Facial Action Coding System~\cite{friesen1978facial} represents the human face by a set of facial muscle movements called Action Units (AUs). Compared with the emotion-based categorical facial expression model, AUs describe human facial expressions in a more comprehensive and objective manner~\cite{martinez2017automatic}. Facial AU recognition is a multi-label classification problem as multiple AUs can be activated simultaneously. While previous studies found that underlying relationships among AUs' activation \cite{corneanu2018deep,Song_2021_CVPR,shao2021jaa} are crucial for their recognition, how to properly model such relationships is still an open research question in the field. A popular strategy applies various traditional machine learning models (\emph{e.g.}, conditional models \cite{eleftheriadis2015multi}) or neural network-based operations (\emph{e.g.}, convolution \cite{zhao2016deep}, Long-Short-Term-Memory networks \cite{niu2019local} or attention \cite{shao2021jaa}), to encode all AU descriptors as a single representation which reflects the underlying relationship among all AUs. A key drawback of such solutions is that they fail to individually model the relationship between each pair of AUs, which may contain crucial cues for their recognition (\textbf{Problem 1}). Some studies represent all AUs of the target face as a graph, where each AU is represented as a node, and the relationship of each pair of AUs is specifically described by an edge that contains a binary value or a single weight describing their connectivity or association \cite{song2021uncertain,Song_2021_CVPR}. However, a single value may not be enough to represent the complex underlying relationship between a pair of AUs (\textbf{Problem 2}). In particular, some studies \cite{li2019semantic,liu2020relation} even manually define a single graph topology for all face images based on prior knowledge (\emph{e.g.}, AUs co-occurrence pattern), which fails to consider the influences of the unique facial display on AU relationships (\textbf{Problem 3}). \begin{figure*}[t] \centering \includegraphics[width=2.0\columnwidth]{fig_overview.pdf} \caption{The pipeline of the proposed AU relationship modelling approach. It takes the full face representation $X$ as the input. The AFG block, which is jointly trained with the FGG block, first provides a vector as a node feature to describe each AU's activation as well as its association with other AUs (Sec. \ref{subsec: node_feature}). Then, the MEFL module learns a pair of vectors as multi-dimensional edge features to describe task-specific relationship cues between each pair of AUs (Sec. \ref{subsec: multi-dimensional_edge_features}). The AU relation graph produced by our approach is then fed to a GatedGCN for AU recognition.
Only the modules and blocks contained within the blue dashed lines are used at the inference stage.} \label{fig:method_overview} \end{figure*} In this paper, we propose a novel AU relationship modelling approach to address the problems described above, which can be easily incorporated with various deep learning backbones. It takes a full face representation produced by the backbone as the input, and outputs an AU relation graph that explicitly describes the relationship between each pair of AUs (\textbf{addressing problem 1}). Specifically, our approach consists of two modules: (i) the \textbf{AUs relationship-aware node feature learning (ANFL) module} first individually learns a representation for each AU from the input full face representation (Sec. \ref{subsec: node_feature}), which encodes not only the AU's activation status but also its association with other AUs; and then (ii) the \textbf{multi-dimensional edge feature learning (MEFL) module} learns multiple task-specific relationship cues as the edge representation for each pair of AUs (Sec. \ref{subsec: multi-dimensional_edge_features}) (\textbf{addressing problem 2}). Since both node and edge feature learning take the full face representation as the input, the influence of the unique facial display on AU relationships is considered when generating its AU relation graph (\textbf{addressing problem 3}). In summary, the main contributions of our AU relationship modelling approach are that it represents AU relationships as a unique graph for each facial display, which (i) encodes both the activation status of the AU and its association with other AUs into each node feature; and (ii) learns a multi-dimensional edge feature to explicitly capture the task-specific relationship cues between each pair of AUs. Our multi-dimensional edge encodes unique and multiple relationships between each pair of AUs, rather than the single relationship (\emph{e.g.}, spatial adjacency, co-occurrence patterns, \emph{etc}.) that a single-valued edge encodes, which theoretically generalizes better in modeling complex relationships between vertices \cite{gong2019exploiting,song2021learning,shao2021personality}. The main novelty of the proposed approach in comparison to pre-defined AU graphs \cite{li2019semantic,liu2020relation} and deep learned facial display-specific graphs \cite{song2021uncertain,Song_2021_CVPR} is illustrated in Figure \ref{fig:intro}. To the best of our knowledge, this is the first CNN-GCN approach that conducts end-to-end multi-dimensional edge feature learning for face image processing tasks. The pipeline of the proposed approach is illustrated in Figure \ref{fig:method_overview}. \section{The proposed approach} \noindent Our AU relationship modelling approach deep learns a unique AU relation graph from the representation of the target face, which explicitly captures recognition-related relationship cues among AUs based on the end-to-end learned relationship modelling modules. The learned AU relation graph represents the $i_{th}$ AU as the node $\bm{v}_i \in \bm{V}$ in the graph, which contains a vector describing the activation status of the $i_{th}$ AU as well as its association with other AUs in the target facial display. Besides, the task-specific relationship cues between nodes (AUs) $\bm{v}_i$ and $\bm{v}_j$ are also explicitly described by two directed edges $\bm{e}_{i,j}, \bm{e}_{j,i} \in \bm{E}$ that are represented by two deep learned vectors.
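Before detailing the two modules, the minimal sketch below fixes the shapes involved (an illustration only: $N$, $C$ and the variable names are our assumptions, not the released implementation):

```python
# Shape sketch of the learned AU relation graph (illustrative values).
import torch

N, C = 12, 512              # number of AUs, feature dimension (assumed)
V = torch.randn(N, C)       # node features: one C-dim vector per AU
E = torch.randn(N, N, C)    # multi-dimensional edge features

# Two directed edges per AU pair: e_ij = E[i, j] and e_ji = E[j, i],
# so the learned relationship between AUs i and j need not be symmetric.
print(V.shape, E.shape)     # torch.Size([12, 512]) torch.Size([12, 12, 512])
```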
\subsection{AUs relationship-aware node feature learning} \label{subsec: node_feature} \noindent As illustrated in Figure~\ref{fig:method_overview}, the ANFL module consists of two blocks: an AU-specific Feature Generator (AFG) and a Facial Graph Generator (FGG). The AFG individually generates a representation for each AU, based on which the FGG automatically designs an optimal graph for each facial display, aiming to accurately recognize all target AUs. To achieve this, the FGG enforces the AFG to encode task-specific associations among AUs into their AU-specific representations. \subsubsection{AU-specific Feature Generator} The AFG is made up of $N$ AU-specific feature extractors, each of which contains a fully connected layer (FC) and a global average pooling (GAP) layer. It takes the full face representation $\bm{X} \in \mathbb{R}^{D \times C}$ ($C$ channels with $D$ dimensions) as the input, which can be produced by any standard machine learning backbone. The FC layer of the $i_{th}$ AU-specific feature extractor first projects $\bm{X}$ to an AU-specific feature map $\bm{U}_i \in \mathbb{R}^{D \times C}$, which is then fed to a GAP layer, yielding a vector containing $C$ values as the $i_{th}$ AU's representation $\bm{v}_i$. Consequently, $N$ AU representations are learned from the full face representation $\bm{X}$. \subsubsection{Facial Graph Generator} \label{subsec: FGG} \noindent Our hypothesis is that the relationship cues among AUs are unique for each facial display. As a result, directly utilizing relationship cues defined in the training set (\emph{e.g.}, co-occurrence patterns) may not generalise well at the inference stage. We therefore propose to represent AU relationships in each facial display as a unique graph, which considers the influence of the target facial display on AU relationships. For a face image, the FGG block treats the $N$ target AUs' feature vectors $\bm{\mathcal{V}} = \{\bm{v}_1, \bm{v}_2, \cdots, \bm{v}_N\}$ as $N$ node features and defines the connectivity (edge presence) between a pair of nodes $\bm{v}_i$ and $\bm{v}_j$ by their features' similarity ($\text{Sim}(i,j) = \bm{v}_i^{T}\bm{v}_j$). Specifically, we connect each node to its $K$ nearest neighbours, and thus the graph topology is defined by the learned node features. Then, a GCN layer is employed to jointly update all AUs' activation statuses from the produced graph, where the $i_{th}$ AU's activation representation $\bm{v}_i^{\text{FGG}}$ is generated from $\bm{v}_i$ and its connected nodes as: \begin{equation} \label{eq:oper_k} \bm{v}_i^{\text{FGG}} = \sigma[\bm{v}_i + g(\bm{v}_i, \sum_{j=1}^N r(\bm{v}_j, a_{i,j}))], \end{equation} where $\sigma[]$ is the non-linear activation; $g$ and $r$ denote differentiable functions of the GCN layer, and $a_{i,j} \in \{0, 1\}$ represents the connectivity between $\bm{v}_i$ and $\bm{v}_j$. To provide a prediction for the $i_{th}$ AU, we propose a similarity calculating (SC) strategy which learns a trainable vector $\bm{s}_i$ that has the same dimension as $\bm{v}_i^{\text{FGG}}$, and then generates the $i_{th}$ AU's occurrence probability by computing the cosine similarity between $\bm{v}_i^{\text{FGG}}$ and $\bm{s}_i$ as: \begin{equation} \label{eq:SC} p_i^{\text{FGG}} = \frac{\text{ReLU}(\bm{v}_i^{\text{FGG}})^T\text{ReLU}(\bm{s}_i)}{\| \text{ReLU}(\bm{v}_i^{\text{FGG}}) \|_2 \|\text{ReLU}(\bm{s}_i)\|_2}, \end{equation} where ReLU denotes the rectified linear activation.
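The following sketch summarizes the ANFL computation (AFG projections, similarity-based $K$-nearest-neighbour connectivity, one GCN-style update, and the SC head). It is a simplified PyTorch reading of the text, not the official implementation: the single linear layer \texttt{mix} stands in for the functions $g$ and $r$ of Eq.~(\ref{eq:oper_k}), and all layer sizes are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ANFL(nn.Module):
    """Sketch of AFG + FGG; x has shape (B, D, C)."""
    def __init__(self, num_aus, c, k=3):
        super().__init__()
        self.k = k
        # AFG: one FC layer per AU, applied along the channel dimension.
        self.au_fc = nn.ModuleList([nn.Linear(c, c) for _ in range(num_aus)])
        # SC head: one trainable anchor vector s_i per AU.
        self.sc = nn.Parameter(torch.randn(num_aus, c))
        # Stand-in for the GCN functions g and r of Eq. (1).
        self.mix = nn.Linear(c, c)

    def forward(self, x):
        # AFG: FC projection, then GAP over the D spatial locations.
        v = torch.stack([fc(x).mean(dim=1) for fc in self.au_fc], dim=1)
        # FGG: connect each node to its K most similar nodes (dot product);
        # for simplicity this sketch does not exclude the node itself.
        sim = v @ v.transpose(1, 2)                       # (B, N, N)
        idx = sim.topk(self.k, dim=-1).indices
        adj = torch.zeros_like(sim).scatter_(-1, idx, 1.0)
        # One GCN-style update: v_i <- sigma(v_i + mix(sum_j a_ij v_j)).
        v_fgg = F.relu(v + self.mix(adj @ v))
        # SC: cosine similarity between ReLU'd node features and anchors.
        return F.cosine_similarity(F.relu(v_fgg), F.relu(self.sc), dim=-1)

probs = ANFL(num_aus=12, c=512)(torch.randn(2, 49, 512))  # (B, N) probabilities
\end{verbatim}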
By construction, a pair of AUs that have a strong association (high similarity) would have connected nodes. In other words, the FGG block enforces the AFG block to encode node (AU) features that contain task-specific relationship cues among the AUs of the target facial display, in order to produce an optimal graph for their recognition. \subsection{Multi-dimensional edge feature learning} \label{subsec: multi-dimensional_edge_features} \noindent In addition to the relationship cues encoded in node features, we also propose a Multi-dimensional Edge Feature Learning (MEFL) module to deep learn a pair of multi-dimensional edge features, aiming to explicitly describe task-specific relationship cues between each pair of AUs. Importantly, we learn edge features for both connected and unconnected node pairs defined in Sec. \ref{subsec: node_feature}. Even when a pair of nodes have low similarity, their relationship may still contain crucial cues for AU recognition, which are ignored during the node feature learning. Since an AU's activation may also influence other AUs' statuses, the relationship between a pair of AUs can be reflected not only by their own features but also by AUs defined in other facial regions. Thus, the MEFL module consists of two blocks: a \textbf{Facial display-specific AU representation modelling (FAM)} block that first locates activation cues of each AU in the full face representation, and an \textbf{AU relationship modelling (ARM)} block that further extracts, from these located cues, features relating to both AUs' activations. This is also illustrated in Figure \ref{fig:method_GEM}. \begin{figure}[htb] \centering \includegraphics[height=0.8\columnwidth]{graph_edge_modeling.pdf} \caption{Illustration of the MEFL module. The \textbf{FAM} first independently locates activation cues related to the $i_{th}$ and $j_{th}$ AU-specific feature maps $\bm{U}_i$ and $\bm{U}_j$ in the full face representation $\bm{X}$ (activated face areas are depicted in red and yellow). Then, the \textbf{ARM} further extracts cues related to both $\bm{U}_i$ and $\bm{U}_j$ (depicted in white), based on which multi-dimensional edge features $\bm{e}_{i,j}$ and $\bm{e}_{j,i}$ are produced.} \label{fig:method_GEM} \end{figure} \paragraph{FAM.} As illustrated in Figure \ref{fig:method_GEM}, for a pair of AUs, the FAM takes their AU-specific feature maps $\bm{U}_i$, $\bm{U}_j$, and the full face representation $\bm{X}$ as the input. It first conducts cross attention between $\bm{U}_i$ and $\bm{X}$ as well as $\bm{U}_j$ and $\bm{X}$, respectively, where the AU-specific feature maps $\bm{U}_i$ and $\bm{U}_j$ are individually used as queries, while the full face representation $\bm{X}$ is treated as the key and value. This process can be formulated as: \begin{equation} \bm{\mathcal{F}}^{AS}_{i,x}, \bm{\mathcal{F}}^{AS}_{j,x} = \text{FAM}(\bm{U}_i,\bm{X}), \text{FAM}(\bm{U}_j,\bm{X}), \end{equation} with the cross attention operation in FAM defined as \begin{equation} \text{FAM}(\bm{A}, \bm{B}) = \text{softmax}(\frac{\bm{A} \bm{W}_q (\bm{B} \bm{W}_k)^T}{\sqrt{d_k} })\bm{B} \bm{W}_v, \label{eq:fam} \end{equation} where $\bm{W}_q$, $\bm{W}_k$ and $\bm{W}_v$ are learnable weights, and $d_k$ is a scaling factor equal to the number of key channels.
As a result, the produced $\bm{\mathcal{F}}^{AS}_{i,x}$ and $\bm{\mathcal{F}}^{AS}_{j,x}$ extract and highlight the most important facial cues from all facial regions of the target facial display for AU $i$'s and AU $j$'s recognition, respectively, thereby taking the influence of the unique facial display on AU relationships into account. \paragraph{ARM.} After encoding task-specific facial cues for each AU's recognition independently, the ARM block further extracts the facial cues related to both AUs' recognition. It also conducts cross-attention (of the same form as Eq.~\ref{eq:fam} but with independent weights) between $\bm{\mathcal{F}}^{AS}_{i,x}$ and $\bm{\mathcal{F}}^{AS}_{j,x}$, producing features $\bm{\mathcal{F}}^{AR}_{i,j,x}$ and $\bm{\mathcal{F}}^{AR}_{j,i,x}$, where $\bm{\mathcal{F}}^{AR}_{i,j,x}$ is generated by using $\bm{\mathcal{F}}^{AS}_{j,x}$ as the query and $\bm{\mathcal{F}}^{AS}_{i,x}$ as the key and value, while $\bm{\mathcal{F}}^{AR}_{j,i,x}$ is generated by using $\bm{\mathcal{F}}^{AS}_{i,x}$ as the query and $\bm{\mathcal{F}}^{AS}_{j,x}$ as the key and value. As a result, $\bm{\mathcal{F}}^{AR}_{i,j,x}$ summarizes the $\bm{\mathcal{F}}^{AS}_{j,x}$-related cues in $\bm{\mathcal{F}}^{AS}_{i,x}$, and $\bm{\mathcal{F}}^{AR}_{j,i,x}$ summarizes the $\bm{\mathcal{F}}^{AS}_{i,x}$-related cues in $\bm{\mathcal{F}}^{AS}_{j,x}$. Finally, we feed $\bm{\mathcal{F}}^{AR}_{i,j,x}$ and $\bm{\mathcal{F}}^{AR}_{j,i,x}$ to a GAP layer to obtain the multi-dimensional edge feature vectors $\bm{e}_{i,j}$ and $\bm{e}_{j,i}$, respectively. Mathematically, this process can be represented as \begin{equation} \bm{e}_{i,j}, \bm{e}_{j,i} = \text{GAP}( \text{ARM}(\bm{\mathcal{F}}^{AS}_{j,x},\bm{\mathcal{F}}^{AS}_{i,x}), \text{ARM}(\bm{\mathcal{F}}^{AS}_{i,x},\bm{\mathcal{F}}^{AS}_{j,x}) ). \end{equation} In short, the edge features $\bm{e}_{i,j}$ and $\bm{e}_{j,i}$ summarize multiple facial cues relating to both the $i_{th}$ and $j_{th}$ AUs' recognition, drawn from all facial regions of the target face. Once the AU relation graph $\bm{G}^0=(\bm{V}^0,\bm{E}^0)$, consisting of $N$ node features and $N \times N$ multi-dimensional directed edge features, is learned, we feed it to a GCN model to jointly recognize all target AUs. In this paper, we use a model that consists only of $L$ gated graph convolution layers (GatedGCN) \cite{bresson2017residual}, and thus the output $\bm{G}^L=(\bm{V}^L,\bm{E}^L)$ is also a graph that has the same topology as $\bm{G}^0$, where the $i_{th}$ node $\bm{v}_i^L$ represents the $i_{th}$ AU's activation status ($L = 2$ in this paper). We finally re-employ the SC module proposed in the FGG block to predict the $N$ AUs' activation from the node features of $\bm{G}^L$. During the inference stage, only the well-trained AFG and MEFL are used to process the input full face representation and generate the AU relation graph.
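A condensed sketch of the MEFL computation for a single AU pair is given below; the single-head cross attention follows Eq.~(\ref{eq:fam}), while the module and variable names are our own shorthand rather than the authors' implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Single-head cross attention, Eq. (4): softmax(Q K^T / sqrt(d_k)) V."""
    def __init__(self, c):
        super().__init__()
        self.wq, self.wk, self.wv = (nn.Linear(c, c, bias=False)
                                     for _ in range(3))
        self.scale = c ** -0.5

    def forward(self, query, context):
        attn = torch.softmax(self.wq(query)
                             @ self.wk(context).transpose(-2, -1)
                             * self.scale, dim=-1)
        return attn @ self.wv(context)

class MEFL(nn.Module):
    """Sketch of FAM + ARM for one pair of AUs (i, j)."""
    def __init__(self, c):
        super().__init__()
        self.fam = CrossAttention(c)  # AU map attends over the full face X
        self.arm = CrossAttention(c)  # AU summaries attend to each other

    def forward(self, u_i, u_j, x):
        f_i = self.fam(u_i, x)                  # FAM: cues for AU i in X
        f_j = self.fam(u_j, x)                  # FAM: cues for AU j in X
        e_ij = self.arm(f_j, f_i).mean(dim=-2)  # ARM + GAP -> e_{i,j}
        e_ji = self.arm(f_i, f_j).mean(dim=-2)  # ARM + GAP -> e_{j,i}
        return e_ij, e_ji                       # two C-dim edge vectors

C = 512
e_ij, e_ji = MEFL(C)(torch.randn(1, 49, C), torch.randn(1, 49, C),
                     torch.randn(1, 49, C))
\end{verbatim}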
\begin{table*}[thb] \centering \small \setlength{\tabcolsep}{1.25mm}{ \begin{tabular}{lccccccccccccc} \Xhline{3\arrayrulewidth} \multicolumn{1}{l}{\multirow{2}{*}{Method}} & \multicolumn{12}{c}{AU} & \multirow{2}{*}{\textbf{Avg.}} \\ \cmidrule(lr){2-13} \multicolumn{1}{l}{} & 1 & 2 & 4 & 6 & 7 & 10 & 12 & 14 & 15 & 17 & 23 & 24 & \\ \midrule DRML \cite{zhao2016deep} &36.4 &41.8 &43.0 &55.0 &67.0 &66.3 &65.8 &54.1 &33.2 &48.0 &31.7 &30.0 &48.3 \\ EAC-Net \cite{li2018eac} &39.0 &35.2 &48.6 &76.1 &72.9 &81.9 &86.2 &58.8 &37.5 &59.1 &35.9 &35.8 &55.9 \\ JAA-Net \cite{shao2018deep} &47.2 &44.0 &54.9 &77.5 &74.6 &84.0 &86.9 &61.9 &43.6 &60.3 &42.7 &41.9 &60.0 \\ LP-Net \cite{niu2019local} &43.4 &38.0 &54.2 &77.1 &76.7 &83.8 &87.2 &63.3 &45.3 &60.5 &48.1 &54.2 &61.0 \\ ARL \cite{shao2019facial} &45.8 &39.8 &55.1 &75.7 &77.2 &82.3 &86.6 &58.8 &47.6 &62.1 &47.4 &[55.4] &61.1 \\ SEV-Net \cite{yang2021exploiting} &[\textbf{58.2}] &[\textbf{50.4}] &58.3 &[\textbf{81.9}] &73.9 &[\textbf{87.8}] &87.5 &61.6 &[52.6] &62.2 &44.6 &47.6 &63.9 \\ FAUDT \cite{jacob2021facial} &51.7 &[49.3] &[\textbf{61.0}] &77.8 &\underline{79.5} &82.9 &86.3 &[67.6] &51.9 &63.0 &43.7 &[\textbf{56.3}] &\underline{64.2} \\\midrule SRERL \cite{li2019semantic} &46.9 &45.3 &55.6 &77.1 &78.4 &83.5 &\underline{87.6} &63.9 &52.2 &[63.9] &47.1 &53.3 &62.9 \\ UGN-B \cite{song2021uncertain} &[54.2] &46.4 &56.8 &76.2 &76.7 &82.4 &86.1 &64.7 &51.2 &63.1 &48.5 &53.6 &63.3 \\ HMP-PS \cite{Song_2021_CVPR} &53.1 &46.1 &56.0 &76.5 &76.9 &82.1 &86.4 &64.8 &51.5 &63.0 &[49.9] &54.5 &63.4 \\ \midrule \rowcolor{gray!30} Ours (ResNet-50) &\underline{53.7} &\underline{46.9} &\underline{59.0} &\underline{78.5} &[80.0] &\underline{84.4} &[87.8] &\underline{67.3} &\underline{52.5} &\underline{63.2} &\textbf{50.6} &52.4 &[64.7] \\\rowcolor{gray!30} Ours (Swin Transformer-Base) &52.7 &44.3 &[60.9] & [79.9] &[\textbf{80.1}] &[85.3] &[\textbf{89.2}] &[\textbf{69.4}] &[\textbf{55.4}] &[\textbf{64.4}] &\underline{49.8} &\underline{55.1} &[\textbf{65.5}] \\ \Xhline{3\arrayrulewidth} \end{tabular}} \caption{F1 scores (in \%) achieved for 12 AUs on BP4D dataset, where the three methods (SRERL, UGN-B and HMP-PS) listed in the middle of the table are also built with graphs. The best, second best, and third best results of each column are indicated with brackets and bold font, brackets alone, and underline, respectively. 
} \label{ex:tab_BP4D_sota} \end{table*} \begin{table*}[thb] \centering \small \setlength{\tabcolsep}{1.3mm}{ \begin{tabular}{lccccccccc} \Xhline{3\arrayrulewidth} \multicolumn{1}{l}{\multirow{2}{*}{Method}} & \multicolumn{8}{c}{AU} & \multirow{2}{*}{\textbf{Avg.}} \\ \cmidrule(lr){2-9} \multicolumn{1}{l}{} & 1 & 2 & 4 & 6 & 9 & 12 & 25 & 26 & \\\midrule DRML \cite{zhao2016deep} &17.3 &17.7 &37.4 &29.0 &10.7 &37.7 &38.5 &20.1 &26.7 \\ EAC-Net \cite{li2018eac} &41.5 &26.4 &66.4 &50.7 &[\textbf{80.5}] &[\textbf{89.3}] &88.9 &15.6 &48.5 \\ JAA-Net \cite{shao2018deep} &43.7 &46.2 &56.0 &41.4 &44.7 &69.6 &88.3 &58.4 &56.0 \\ LP-Net \cite{niu2019local} &29.9 &24.7 &72.7 &46.8 &49.6 &72.9 &\underline{93.8} &\underline{65.0} &56.9 \\ ARL \cite{shao2019facial} &43.9 &42.1 &63.6 &41.8 &40.0 &\underline{76.2} &[95.2] &[66.8] &58.7 \\ SEV-Net \cite{yang2021exploiting} &\textbf{[55.3]} &[\textbf{53.1}] &61.5 &\underline{53.6} &38.2 &71.6 &[\textbf{95.7}] &41.5 &58.8 \\ FAUDT \cite{jacob2021facial} &46.1 &[48.6] &\underline{72.8} &[\textbf{56.7}] &50.0 &72.1 &90.8 &55.4 &\underline{61.5} \\\midrule SRERL \cite{li2019semantic} &45.7 &47.8 &59.6 &47.1 &45.6 &73.5 &84.3 &43.6 &55.9 \\ UGN-B \cite{song2021uncertain} &43.3 &\underline{48.1} &63.4 &49.5 &48.2 &72.9 &90.8 &59.0 &60.0 \\ HMP-PS \cite{Song_2021_CVPR} &38.0 &45.9 &65.2 &50.9 &\underline{50.8} &76.0 &93.3 &[\textbf{67.6}] &61.0 \\ \midrule \rowcolor{gray!30} Ours (ResNet-50) &[54.6] &47.1 &[72.9] &[54.0] &[55.7] &[76.7] &91.1 &53.0 &[\textbf{63.1}] \\\rowcolor{gray!30} Ours (Swin Transformer-Base) &\underline{52.5} &45.7 &[\textbf{76.1}] &51.8 &46.5 &76.1 & 92.9 &57.6 &[62.4] \\ \Xhline{3\arrayrulewidth} \end{tabular} } \caption{F1 scores (in \%) achieved for 8 AUs on the DISFA dataset. The best, second best, and third best results in each column are indicated with brackets and bold font, brackets alone, and underline, respectively.} \label{ex:tab_DISFA_sota} \end{table*} \subsection{Training strategy} \noindent In this paper, we propose a two-stage training method to jointly optimize the proposed ANFL and MEFL modules with the backbone and classifier in an end-to-end manner. In the first stage, we train the backbone with the ANFL module, aiming to learn an AFG block that produces node features containing both AU activation statuses and their associations for each facial display. We note that existing AU datasets usually have imbalanced labels: some AUs occur less frequently than others, and most AUs are inactivated for the majority of face images. To alleviate this issue, we propose a weighted asymmetric loss to compute the loss value between the ground truth and the predictions generated by the FGG block. It is inspired by the asymmetric loss \cite{ridnik2021asymmetric}, but has a unique weight for each sub-task (each AU's recognition) as well as fewer hyperparameters. The proposed weighted asymmetric loss is formulated as: \begin{equation} \label{eq:AUR_loss} \mathcal{L}_{\text{WA}} = -\frac{1}{N}\sum_{i = 1}^{N} w_i [y_i \text{log}(p_i)+(1-y_i) p_i \text{log}(1-p_i)], \end{equation} where $p_i$, $y_i$ and $w_i$ are the prediction (occurrence probability), ground truth and weight of the $i_{th}$ AU, respectively. Here, the weight $w_i = N(1/r_i)/\Sigma_{j=1}^N(1/r_j)$ is defined by the $i_{th}$ AU's occurrence rate $r_i$ computed from the training set.
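A direct PyTorch transcription of Eq.~(\ref{eq:AUR_loss}) and of this weighting scheme might look as follows (our own sketch; the clamping constant is an assumption added for numerical safety):
\begin{verbatim}
import torch

def weighted_asymmetric_loss(p, y, w, eps=1e-8):
    """Eq. (7): L_WA = -1/N sum_i w_i [y_i log p_i
                                       + (1 - y_i) p_i log(1 - p_i)].
    p: (B, N) predicted occurrence probabilities
    y: (B, N) binary ground-truth labels
    w: (N,)   per-AU weights
    """
    pos = y * torch.log(p.clamp(min=eps))
    # the extra factor p down-weights easy negatives (p << 0.5)
    neg = (1 - y) * p * torch.log((1 - p).clamp(min=eps))
    return -(w * (pos + neg)).mean()

def au_weights(occurrence_rates):
    """w_i = N * (1/r_i) / sum_j (1/r_j), from training-set rates r_i."""
    inv = 1.0 / occurrence_rates
    return len(occurrence_rates) * inv / inv.sum()

# Example with 12 AUs and hypothetical occurrence rates.
w = au_weights(torch.rand(12) * 0.5 + 0.1)
loss = weighted_asymmetric_loss(torch.rand(4, 12),
                                (torch.rand(4, 12) > 0.7).float(), w)
\end{verbatim}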
This weighting makes the loss account less for AUs with higher occurrence rates in the training set, so that loss terms caused by less frequently occurring AUs receive higher weights during training. Additionally, the factor $p_i$ in the term $(1-y_i)\, p_i \text{log}(1-p_i)$ down-weights loss values caused by inactivated AUs that are easy to recognize, whose predicted occurrence probabilities are close to zero ($p_i \ll 0.5$), enforcing the training process to focus on activated AUs as well as inactivated AUs that are hard to recognize correctly. The second stage trains the MEFL module and the classifier (GatedGCN) with the pre-trained backbone and AFG block. Here, we again employ the proposed weighted asymmetric loss (Eq.~\ref{eq:AUR_loss}) to compute the loss value $\mathcal{L}_{\text{WA}}$ between the outputs of the classifier and the ground truth. Additionally, we also leverage the AU co-occurrence patterns to supervise the training process. We feed the multi-dimensional edge features $\bm{e}_{i,j}^L$ and $\bm{e}_{j,i}^L$ generated from the last GatedGCN layer to a shared FC layer, in order to predict the co-occurrence pattern of the $i_{th}$ and $j_{th}$ AUs of the target face. We define this task as a four-class classification problem, \emph{i.e.,} for a pair of nodes $\bm{v}_i$ and $\bm{v}_j$: (1) both $\bm{v}_i$ and $\bm{v}_j$ are inactivated; (2) $\bm{v}_i$ is inactivated and $\bm{v}_j$ is activated; (3) $\bm{v}_i$ is activated and $\bm{v}_j$ is inactivated; or (4) both $\bm{v}_i$ and $\bm{v}_j$ are activated. As a result, the categorical cross-entropy loss is introduced as: \begin{equation} \mathcal{L}_{\text{E}} = -\frac{1}{| \bm{E} |}\sum_{i = 1}^{| \bm{E} |} \sum_{j = 1}^{N_E} y_{i,j}^{e} \text{log}(\frac{e^{p_{i,j}^{e}}}{\sum_{k}e^{p_{i,k}^{e}}}), \end{equation} where $| \bm{E} |$ denotes the number of edges in the facial graph; $N_E$ is the number of co-occurrence patterns; and $y_{i,j}^{e}$ and $p_{i,j}^{e}$ are the ground-truth co-occurrence label and the co-occurrence prediction output from the shared FC layer, respectively. Consequently, the overall training loss of the second stage is formulated as the weighted combination of the two losses: \begin{equation} \mathcal{L} = \mathcal{L}_{\text{WA}} + \lambda \mathcal{L}_{\text{E}}, \label{method:loss_function} \end{equation} where $\lambda$ controls the relative importance of the two losses. \section{Experiments} \subsection{Experimental Setup} \paragraph{Datasets.} We evaluate the performance of our approach on two widely-used benchmark datasets: BP4D \cite{zhang2014bp4d} and DISFA \cite{mavadati2013disfa}. BP4D recorded 328 videos (about 140,000 facial frames) from 41 young adults (23 females and 18 males) who were asked to respond to $8$ emotion elicitation tasks. DISFA recorded $130,815$ frames from 27 subjects (12 females and 15 males) who were watching YouTube videos. Each frame in BP4D and DISFA is annotated with occurrence labels of multiple AUs. \paragraph{Implementation Details.} For both datasets, we use MTCNN \cite{yin2017multi} to perform face detection and alignment for each frame and crop it to $224 \times 224$ as the input for the backbones. We then follow the same protocol as previous studies \cite{zhao2016deep,li2018eac,Song_2021_CVPR} to conduct subject-independent three-fold cross-validation for each dataset, and report the average results over the 3 folds. During training, we employ an AdamW optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.999$ and a weight decay of $5e^{-4}$. The number of nearest neighbours $K$ in the FGG is set to 3 and 4 for BP4D and DISFA, respectively.
The hyperparameter $\lambda$ in Eq.~\ref{method:loss_function} is set to 0.05 and 0.01 for models based on ResNet and Swin Transformer, respectively. We train the proposed model for 40 epochs in total, including 20 epochs for the first stage (with an initial learning rate of $1e^{-4}$) and 20 epochs for the second stage (with an initial learning rate of $1e^{-6}$), with a batch size of 64. A cosine decay learning rate scheduler is also used. Both backbones are pre-trained on ImageNet \cite{deng2009imagenet}. All our experiments are conducted using NVIDIA A100 GPUs based on the open-source PyTorch platform. \paragraph{Evaluation Metric.} Following previous AU occurrence recognition studies \cite{shao2021jaa,churamani2021aula,li2019self,Song_2021_CVPR}, we use a common metric, the frame-based F1 score, to evaluate the performance of our approach, defined as $F1 = 2 \frac{P \cdot R}{P+R}$, which combines the recognition precision $P$ and recall rate $R$. \subsection{Results and Discussion} \label{subsec:results} \paragraph{Comparison to State-of-the-art Methods.} This section compares our best systems of the two backbones with several state-of-the-art methods on both datasets. Table~\ref{ex:tab_BP4D_sota} reports the occurrence recognition results of 12 AUs on BP4D. We additionally provide the AUC results in Appendix \ref{sec:ex_ar}. It can be observed that the proposed AU relationship modelling approach allows both backbones (ResNet-50 and Swin Transformer-Base (Swin-B)) to achieve higher overall F1 scores than all other listed approaches, with $0.5\%$ and $1.3\%$ average improvements over the state-of-the-art \cite{jacob2021facial}. Specifically, our approach allows both backbones to achieve top-three performance for $9$ out of $12$ AUs (\emph{i.e.}, AU 4, AU 6, AU 7, AU 10, AU 12, AU 14, AU 15, AU 17, and AU 23) among all listed approaches. Similar results were also achieved on DISFA. According to Table~\ref{ex:tab_DISFA_sota}, our approach helps both backbones to achieve state-of-the-art average F1 scores over $8$ AUs, outperforming the current state-of-the-art by 1.6\% and 0.9\%, respectively. For fair comparisons, we only compare our approach with static face-based methods that did not remove any frame from the datasets. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{visualization.png} \caption{Visualization of association cues encoded in node features (only the systems of the last two columns encode such cues). We connect each node to its K nearest neighbours, where nodes of activated AUs usually have more connections than nodes of inactivated AUs. Systems using such relationship cues achieve enhanced AU recognition results (the predictions in column 3 are better than those in column 2).} \label{fig:ex_visualization} \end{figure} According to both tables, our ResNet-50-based system also clearly outperforms other graph-based AU recognition approaches which also use CNNs (ResNet (UGN-B, HMP-PS) or VGG (SRERL)) as backbones. Since SRERL only uses a pre-computed adjacency matrix to describe the relationship between AUs for all faces, our system shows a large advantage over it, with $1.8\%$ and $7.2\%$ F1 score improvements for the average results on BP4D and DISFA, respectively.
Although UGN-B and HMP-PS assigned each facial display a unique adjacency matrix and achieved better performance than SRERL, they still use a single value to describe the relationship between each pair of AUs, without considering multiple relationship cues. Thus, our deep-learned task-specific multi-dimensional edge features lead our system to more than $1.3\%$ and $2.1\%$ average F1 score improvements over UGN-B and HMP-PS on both datasets. \begin{table}[t] \centering \small \setlength{\tabcolsep}{1.8mm}{ \begin{tabular}{cccc|cc|rr} \Xhline{3\arrayrulewidth} Backbone &AFG &FGG &MEFL & $\mathcal{L}_{\text{WA}}$ & $\mathcal{L}_{\text{E}}$ & \textbf{Res} &\textbf{Swin}\\ \midrule \ding{51} & & & & & &59.1 &62.6 \\ \ding{51} & \ding{51} & & & & &60.4 &63.6 \\ \ding{51} & & & & \ding{51} & &61.8 &63.9 \\ \ding{51} & \ding{51} &\ding{51} & & & &63.1 &63.6 \\ \ding{51} & \ding{51} & &\ding{51} & & &63.2 &63.8 \\ \ding{51} & \ding{51} & & & \ding{51} & &63.0 &64.6 \\ \ding{51} & \ding{51} & \ding{51} & & \ding{51} & &63.7 &65.1 \\ \ding{51} & \ding{51} & & \ding{51} & \ding{51} & &63.9 &64.6 \\ \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & &64.5 &65.4 \\ \ding{51} & \ding{51} &\ding{51} & \ding{51} & \ding{51} & \ding{51} &64.7 &65.5 \\ \Xhline{3\arrayrulewidth} \end{tabular} } \caption{Average AU recognition results (F1 scores, in \%) achieved by various settings using two backbones on the BP4D dataset. The systems of the first two rows are trained with the widely-used weighted binary cross-entropy loss.} \label{ex:tab_AblationStudy} \end{table} \paragraph{Ablation Studies.} Table~\ref{ex:tab_AblationStudy} evaluates the influence of each component of our pipeline on the average AU recognition results. It can be observed that simply using the AFG block to specifically learn a representation for each AU enhanced the performance of both backbones, indicating that the relationship between each AU's activation and the full face representation is unique. In particular, when a facial AU is activated, its movement usually affects other facial regions (\emph{i.e.}, the activation of other AUs), while inactivated AUs would not have such an effect. As visualized in Figure \ref{fig:ex_visualization}, our FGG simulates this phenomenon by connecting activated AUs to all other AUs (both activated and inactivated). Building on the backbone-AFG system, we also found that individually adding the FGG block or the MEFL module further increased the recognition performance of both backbones. These results suggest that (i) the FGG block allows the AFG block to encode additional AU recognition-related cues into node features, \emph{i.e.,} we hypothesize that the FGG helps the AFG to learn AUs' relationship cues for their recognition; and (ii) the multi-dimensional edge features learned by the MEFL module provide more task-specific AU relationship cues to improve the recognition performance, which further validates our hypothesis that a single value is not enough to carry all useful relationship cues between a pair of AUs. In short, the proposed approach provides valuable relationship cues for AU recognition during both node and edge feature learning. More importantly, jointly using FGG and MEFL with our weighted asymmetric loss largely boosted both backbones' recognition capabilities, \emph{i.e.}, $5.6\%$ and $2.9\%$ F1 score improvements over the original backbones, as well as $1.7\%$ and $0.9\%$ improvements over the backbone-AFG systems.
Besides the proposed relationship modelling approaches, we show that the two loss functions also improved the recognition performance. The weighted asymmetric loss clearly enhanced the performance over the widely-used weighted binary cross-entropy loss, illustrating its superiority in alleviating the data imbalance issue. Meanwhile, the proposed AU co-occurrence supervision also slightly enhanced the recognition results for both backbones. \section{Conclusion} \noindent This paper proposes to deep learn a graph that explicitly represents relationship cues between each pair of AUs for each facial display. These relationship cues are encoded in both the node features and the multi-dimensional edge features of the graph. The results demonstrate that the proposed node and edge feature learning methods extract reliable task-specific relationship cues for AU recognition, \emph{i.e.}, both CNN and transformer-based backbones have been largely enhanced, achieving state-of-the-art results on two widely used datasets. Since our graph-based relationship modelling approach can be easily incorporated with standard CNN/transformer backbones, it can be directly applied to enhance the performance of multi-label tasks, or tasks whose data contains multiple objects, by explicitly exploring the task-specific relationship cues among labels or objects. \section{Acknowledgement} Cheng Luo, Weicheng Xie and Linlin Shen are supported by the National Natural Science Foundation of China under grants no. 61602315, 91959108, the Science and Technology Project of Guangdong Province under grant no. 2020A1515010707, and the Science and Technology Innovation Commission of Shenzhen under grant no. JCYJ20190808165203670. Siyang Song is supported by the European Union's Horizon~$2020$ Research and Innovation programme under grant agreement No.~$826232$. Hatice Gunes is supported by the EPSRC under grant ref. EP/R$030782$/$1$.
\section{Introduction} The idea of inflation (a period of rapid quasi-exponential expansion of the Universe) neatly solves several long-standing issues in cosmology \cite{Linde:2005ht, Linde:2005dd}, and has been spectacularly confirmed by observations of the Cosmic Microwave Background (CMB) anisotropies \cite{Komatsu:2010fb, Bennett:2010jb}. While the Universe is inflating, its contents are cold. But eventually inflation has to end, and the field driving it must decay, depositing its energy into high-energy particles. This process, known as reheating, ``boils'' the vacuum and starts the thermal history of the universe with the hot big bang. As the universe continues to cool, it could undergo further phase transitions at the symmetry breaking points of the theory. Very little is known about the fundamental physics at these energy scales, and cosmological observations could be our only source of information for the foreseeable future. No photons reach us directly from this epoch, as the universe is filled with hot plasma and is opaque until recombination. Nevertheless, the expansion history of the early universe is imprinted on the sky in the form of primordial curvature fluctuations. With the success of WMAP and the data from Planck soon to come, CMB observations are reaching the precision required to disentangle other subdominant effects from the Gaussian fluctuations due to simple inflation \cite{Komatsu:2009kd}. The most basic models of reheating involve the inflaton decaying into one or more other scalar fields. Among the most interesting are the ones where the decay is non-perturbative, for example proceeding through parametric resonance naturally occurring in chaotic inflation models \cite{Dolgov:1989us, Traschen:1990sw, Kofman:1994rk, Shtanov:1994ce, Kofman:1995fi, Kofman:1997yn, Greene:1997fu}, or tachyonic instability in hybrid inflation models \cite{Linde:1993cn, GarciaBellido:1997wm, Felder:1998vq}. For all their simplicity, these models have surprisingly rich physics involving non-equilibrium phase transitions. While the initial stages of preheating are linear and the development of the instability can be understood analytically \cite{Kofman:1997yn, Greene:1997fu}, the dynamics can be chaotic \cite{Podolsky:2002qv}, and the field evolution quickly becomes inhomogeneous and non-linear, so the non-perturbative decay of the inflaton has to be studied numerically \cite{Khlebnikov:1996mc, Prokopec:1996rr, Kasuya:1998td, Tkachev:1998dc, Felder:2000hj, Felder:2001kt, Copeland:2002ku, GarciaBellido:2002aj, Podolsky:2005bw, Dufaux:2006ee, Felder:2006cc}. In this paper I briefly review the theory of parametric resonance, go over general results of numerical simulations of non-linear field evolution during preheating, and discuss signatures of preheating that could potentially be observed.
\section{Analytical Theory of Preheating} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.8] \shade[top color=white,bottom color=LightSkyBlue] plot[id=R,domain=-2.34:2.34,samples=40] function{x**4/12}; \draw[very thick] plot[id=V,domain=-3:3,samples=50] function{x**4/12}; \draw[->] (-3,0) -- (3,0) node[right] {$\phi$}; \draw[->] (0,-0.25) -- (0,7) node[above] {$V(\phi)$}; \shade[ball color=red] (+2.535543701000,+5.034457448000) circle (.23) node[left] {\rotatebox{82}{slow roll}~~}; \draw[->] (2.492616906,4.733082574) -- (2.353564078,3.850196815); \shade[ball color=red] (+0.000000000000,+0.250000000000) circle (.23); \node[above] at (0,0.5) {~oscillations}; \node[above] at (0,2.5) {~~critical~~damping}; \draw[->] (+0.2977500911,0.2506648756) -- (+0.7579414588,0.2805700982); \draw[->] (-0.2977500911,0.2506648756) -- (-0.7579414588,0.2805700982); \node[above] at (9,-1.1) {\epsfig{file=expansion, height=6.5cm}}; \node[above] at (12,0) {$\ln a$}; \node[above] at (5.5,7) {$\ln\ell$}; \node[above] at (0,-1.5) {(a)}; \node[above] at (9.5,-1.5) {(b)}; \end{tikzpicture} \end{center} \caption{Dynamics of a single field chaotic inflation model with $\lambda\phi^4/4$ potential.} \label{fig:model} \end{figure} The basic model of reheating involves the inflaton $\phi$ decaying into another scalar field $\chi$. The action describing two interacting scalar fields minimally coupled to gravity is \begin{equation}\label{eq:action} S = \int \left\{ \frac{R}{16\pi G} - \frac{1}{2}\, g^{\mu\nu}(\phi_{,\mu}\phi_{,\nu}+\chi_{,\mu}\chi_{,\nu}) - V(\phi,\chi) \right\} \sqrt{-g}\, d^4 x, \end{equation} with the potential $V(\phi,\chi)$ containing the terms responsible for the field masses and self-couplings, as well as their interaction. Polynomial field operators up to fourth order are renormalizable, so one usually takes a $\frac{1}{2}\,m^2\phi^2$ or $\frac{1}{4}\,\lambda\phi^4$ inflaton potential for chaotic inflation, and a $\frac{1}{2}\, g^2\phi^2\chi^2$ coupling term \cite{Kofman:1997yn, Greene:1997fu}. For simplicity, one can keep the decay field $\chi$ massless. Couplings like $\frac{1}{2} \sigma\phi \chi^2$ are also allowed and could be present \cite{Dufaux:2006ee}, although one would need a $\chi^4$ self-interaction to keep the potential bounded from below. Models with various combinations of these potential terms have been studied in the literature; in this review, I will mainly focus on the one with the quartic potential \cite{Greene:1997fu} \begin{equation}\label{eq:V:L4G22} V(\phi,\chi) = \frac{1}{4}\, \lambda\phi^4 + \frac{1}{2}\, g^2 \phi^2 \chi^2. \end{equation} In a flat homogeneous isotropic universe with the Friedmann-Robertson-Walker metric \begin{equation}\label{eq:frw} ds^2 = -dt^2 + a(t)^2 d\vec{x}^2, \end{equation} the field equations of motion are readily obtained from the action (\ref{eq:action}); they are \begin{equation}\label{eq:phi} \ddot\phi + 3H\dot\phi + \left(-\frac{\Delta}{a^2} + \lambda\phi^2 + g^2\chi^2\right)\phi = 0, \end{equation} \begin{equation}\label{eq:chi} \ddot\chi + 3H\dot\chi + \left(-\frac{\Delta}{a^2} ~ \phantom{+ \lambda\phi^2} + g^2\phi^2\right)\chi = 0. \end{equation} The Hubble parameter $H\equiv \dot{a}/a$ plays the role of a friction term in the field dynamics.
Its value is determined by the (average) total energy density according to the Friedmann equation \begin{equation}\label{eq:H} H^2 = \frac{8\pi G}{3} \langle \rho \rangle, \end{equation} where the combined energy density of the two fields is \begin{equation}\label{eq:rho} \rho = \frac{1}{2} \dot\phi^2 + \frac{1}{2} \dot\chi^2 + \frac{1}{2} \frac{(\nabla\phi)^2}{a^2} + \frac{1}{2} \frac{(\nabla\chi)^2}{a^2} + V(\phi,\chi). \end{equation} During chaotic inflation, the potential energy of the inflaton causes the Hubble friction to be large, and the motion of the fields is over-damped. The inflaton $\phi$ slowly rolls down the potential until the damping becomes sub-critical, at which point it starts oscillating near the minimum of the potential with decreasing amplitude, as illustrated in Figure~\ref{fig:model}a. Evolution of the Hubble horizon size $L\equiv1/H$ and the physical wavelength of comoving modes $\lambda \equiv 2\pi\, a/k$ is shown in Figure~\ref{fig:model}b. During inflation $L$ changes slowly, so that the slow roll parameter $\epsilon \equiv \frac{\partial\ln L}{\partial\ln a} \ll 1$. When the field is oscillating, the Hubble horizon size grows as $L\propto a^2$ in accordance with the average equation of state, which is $1/3$ for the $\lambda\phi^4$ oscillator. Comoving modes stop exiting and begin re-entering the horizon when $\epsilon=1$; this moment can be taken as the end of inflation. \begin{figure} \begin{center} \begin{tabular}{cc} \epsfig{file=growth-299, width=6cm} & \epsfig{file=growth-301, width=6cm} \\ (a) $g^2/\lambda = 2.99$, $\kappa^2=0$ & (b) $g^2/\lambda = 3.01$, $\kappa^2=0$ \\ \end{tabular} \end{center} \caption{Background oscillations of the inflaton $a\phi$ (black) and the zero mode of the decay field $a\chi$ (red) for values of the coupling slightly inside (a) and outside (b) the first resonance band in Figure~\ref{fig:stab}b. The amplitude of $a\chi$ was scaled up by $3\cdot10^8$.} \label{fig:soln} \end{figure} Other fields coupled to the inflaton feel its oscillations through the modulation of parameters in their equations of motion, such as the effective mass term $g^2\phi^2$ in equation~(\ref{eq:chi}). Periodic modulation can lead to parametric resonance and exponential growth of inhomogeneous excitations in the fields coupled to the inflaton. This is a fairly generic feature of chaotic inflation models; let's see how this happens in our model~(\ref{eq:V:L4G22}). It is very useful to scale variables so that the inflaton oscillations are periodic and of constant amplitude. This is particularly easy in model~(\ref{eq:V:L4G22}), which is conformally invariant apart from its coupling to gravity. Switching to conformal time $d\eta \equiv dt/a$ and scaling the field values according to their conformal weight, the equation of motion for the homogeneous inflaton (\ref{eq:phi}) becomes simply $(a\phi)'' + \lambda(a\phi)^3 - a''\phi = 0$.
If one neglects the last term (which is small as $a\simeq\eta$), the oscillating inflaton solution is \begin{equation}\label{eq:bg} \phi(\eta) = \frac{\Phi_0}{a(\eta)}\, f(\tau),\hspace{1em} \tau \equiv \lambda^{\frac{1}{2}} \Phi_0 (\eta-\eta_0) \end{equation} where $\Phi_0$ is the amplitude of inflaton oscillations, and $f(\tau)$ is a unit amplitude solution of the canonical anharmonic oscillator equation $d^2f/d\tau^2 + f^3 = 0$, which can be written exactly in terms of the Jacobi elliptic cosine function, or its harmonic expansion \cite{Kiper:1984} \begin{equation}\label{eq:cn} f(\tau) \equiv \text{cn}(\tau,2^{-\frac{1}{2}}) = 2^{\frac{1}{2}}\, \frac{4\pi}{T} \sum\limits_{n=1}^{\infty} \frac{\cos(n-\frac{1}{2})\frac{4\pi}{T}\tau}{\cosh(n-\frac{1}{2})\pi}. \end{equation} The function $f(\tau)$ is periodic with period $T=\pi^{-\frac{1}{2}}\Gamma^2(\frac{1}{4})$, and its harmonic expansion is exponentially convergent, so only a few terms are needed to accurately represent its shape. Substituting the background inflaton solution (\ref{eq:bg}) into the equation of motion (\ref{eq:chi}) for the coupled field $\chi$, and rescaling variables the same way as for the inflaton, one obtains the evolution equation for the Fourier mode of the decay field $\chi_k$ with comoving wavenumber $k$. In terms of the rescaled parameters $\kappa \equiv k/(\lambda^{\frac{1}{2}}\Phi_0)$ and $q \equiv g^2/\lambda$, it is \begin{equation}\label{eq:lame} \frac{d^2 (a\chi_k)}{d\tau^2} + \left[\kappa^2 + q\, \text{cn}^2\left(\tau,2^{-1/2}\right)\right](a\chi_k) = 0. \end{equation} The exact solution of the oscillating inflaton $a\phi$ is shown in Figure~\ref{fig:soln}, along with the homogeneous solution of the field $a\chi$ coupled to it, for two slightly different values of the coupling $g^2/\lambda$. As you can see, depending on the value of the coupling, the evolution of the field $\chi$ can be either exponentially unstable (\ref{fig:soln}a), or merely oscillatory (\ref{fig:soln}b). \begin{figure} \begin{center} \begin{tabular}{cc} (a) \hfill $\chi_k'' + \left[\kappa^2 + q \cos^2(\tau)\right]\chi_k = 0$~ & (b) \hfill $\chi_k'' + \left[\kappa^2 + q\, \text{cn}^2\left(\tau,2^{-1/2}\right)\right]\chi_k = 0$~~ \\ \rotatebox{90}{\hspace{10em}$\kappa^2=k^2/(m^2 a^2)$}%
\epsfig{file=mathieu, width=6cm} & \rotatebox{90}{\hspace{10em}$\kappa^2=k^2/(\lambda\Phi_0^2)$}%
\epsfig{file=lame, width=6cm} \vspace{-6pt}\\ \hspace{2.2em} $q = g^2\Phi_0^2/(m^2 a^3)$ & \hspace{2.2em} $q = g^2/\lambda$ \\ \end{tabular} \end{center} \caption{Stability diagram of the Mathieu (a) and Lame (b) equations, using the same parametrization. White regions correspond to stable solutions, shaded regions are unstable, with brighter color corresponding to larger values of the critical exponent $\mu$ (isolevels are spaced every $\Delta\mu = 0.01185$).} \label{fig:stab} \end{figure} Differential equations with periodic coefficients are studied in Floquet theory, and are often encountered in other branches of physics as well (for example, Bloch waves in condensed matter). Equation (\ref{eq:lame}), in particular, is known as the Lame equation, while its counterpart for harmonic inflaton oscillations in the $m^2\phi^2$ potential is the Mathieu equation \cite{Bateman:1955}. According to Floquet's theorem, equation (\ref{eq:lame}) admits a solution of the form $e^{\mu \tau} P(\tau)$, where $\mu$ is a complex number, and the function $P(\tau)$ is periodic.
The Floquet exponent $\mu$ depends on the parameters $\kappa^2$ and $q$, and can be calculated by explicitly constructing such a periodic function from the principal fundamental matrix solution \begin{equation}\label{eq:W} \mathbb{W}(\tau) = \left[\begin{array}{cc} \chi_1(\tau) & \chi_2(\tau) \\ \chi_1'(\tau) & \chi_2'(\tau) \\ \end{array}\right], \end{equation} made up of two independent solutions $\chi_1$ and $\chi_2$ with initial conditions $\mathbb{W}(0) = \mathbb{I}$. Integrating the principal fundamental solution over a single period (which can be done efficiently and precisely using numerical integration), and fixing the coefficients in a linear combination $c_1\chi_1(\tau) + c_2\chi_2(\tau) = e^{\mu \tau} P(\tau)$ to satisfy $P(T)=P(0)$ and $P'(T)=P'(0)$, one finds that the value of the Floquet exponent $\mu$ is given by \begin{equation}\label{eq:mu:sqrt} e^{\mu T} = Q(T) + \sqrt{Q^2(T) - W(T)}, \end{equation} where $Q$ and $W$ are two invariants of the matrix $\mathbb{W}$ under a similarity transformation \begin{equation}\label{eq:Q} Q = \frac{1}{2} \tr \mathbb{W}, \hspace{1em} W = \det \mathbb{W} \equiv 1. \end{equation} The second invariant (the Wronskian $W$) is conserved, so expression (\ref{eq:mu:sqrt}) simplifies to \begin{equation}\label{eq:mu:cosh} \cosh \mu T = Q(T). \end{equation} Stability diagrams of the Mathieu and Lame equations showing contour plots of $\text{Re}\,\mu$ calculated in this fashion are presented in Figure~\ref{fig:stab}. Wide bands in parameter space are unstable, with values of the Floquet exponent $\mu$ reaching as high as $0.237$ for the Lame equation. If the value of the coupling lands you in one of these bands, the inflaton will decay very efficiently into $\chi_k$ particles for a wide range of wavenumbers $k$. This regime is known as broad parametric resonance. Although not usually emphasized, the instability band structure of the Lame and Mathieu equations is quite similar, as the elliptic cosine (\ref{eq:cn}) differs from $\cos \tau$ by 4.3\% total harmonic distortion and an 18\% longer period. The real difference between the $m^2\phi^2$ and $\lambda\phi^4$ models is how the parameters scale with the expansion of the universe. While their values stay constant for the $\lambda\phi^4$ model (which is nearly conformally invariant), for the $m^2\phi^2$ model the expansion drags the values of the parameters toward the stable point $\kappa^2=q=0$, eventually shutting off the resonance. Thus for preheating to be efficient in the $m^2\phi^2$ model, one needs a much larger initial value of $q$. \section{Tackling Non-linear Evolution: Numerical Methods and Issues} \begin{figure} \epsfig{file=pdf-phi, width=6.5cm} \hfill \epsfig{file=pdf-chi, width=6.5cm} \caption{Evolution of the distributions of the inflaton (left) and the decay product (right) in a full non-linear simulation with the coupling at the maximal resonance $g^2/\lambda=1.875$.} \label{fig:decay} \end{figure} Broad parametric resonance amplifies quantum fluctuations of the fields, creating real particles in a state far from thermal equilibrium. The instability is exponentially rapid, and develops within a few dozen inflaton oscillations (as illustrated in Figure~\ref{fig:decay}), which is very fast on cosmological time scales. Once the energy density of the created particles becomes comparable to that of the homogeneous inflaton, one can no longer treat the evolution perturbatively, and the non-linearity of the coupling and the back reaction of the created particles on the inflaton evolution have to be taken into account.
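The Floquet computation just described is easy to reproduce numerically. The following sketch (our own illustration written with NumPy/SciPy, not taken from any of the published codes) integrates the principal fundamental matrix of Eq.~(\ref{eq:lame}) over one period and extracts $\text{Re}\,\mu$ via Eq.~(\ref{eq:mu:cosh}); scanning it over the $(\kappa^2, q)$ plane reproduces the band structure of Figure~\ref{fig:stab}b.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj, gamma

# Period of the elliptic cosine cn(tau, 1/sqrt(2)).
T = gamma(0.25) ** 2 / np.sqrt(np.pi)

def floquet_exponent(kappa2, q):
    """Re(mu) for chi'' + [kappa^2 + q cn^2(tau, 1/sqrt(2))] chi = 0."""
    def rhs(tau, y):
        # ellipj takes the parameter m = k^2 = 1/2; index 1 is cn.
        cn = ellipj(tau, 0.5)[1]
        chi1, dchi1, chi2, dchi2 = y
        f = -(kappa2 + q * cn ** 2)
        return [dchi1, f * chi1, dchi2, f * chi2]

    # Columns of the principal fundamental matrix: W(0) = identity.
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, 0.0, 1.0],
                    rtol=1e-10, atol=1e-12)
    chi1, dchi1, chi2, dchi2 = sol.y[:, -1]
    Q = 0.5 * (chi1 + dchi2)  # half-trace of W(T); det W(T) = 1
    # cosh(mu T) = Q; |Q| > 1 signals an unstable (resonant) band.
    return np.arccosh(abs(Q)) / T if abs(Q) > 1.0 else 0.0

print(floquet_exponent(0.0, 1.875))  # deep inside the first band
\end{verbatim}
Beyond the linear regime, however, this mode-by-mode analysis breaks down.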
The most straightforward way to do it is to solve field evolution equations numerically \cite{Khlebnikov:1996mc, Prokopec:1996rr}. \begin{figure} \begin{center} \footnotesize \begin{tikzpicture}[x=6em,y=6em,z={(2em,1.5em)}] \tikzstyle{rank0}=[circle,draw=black,fill=blue!70,thick] \tikzstyle{rank1}=[circle,draw=black,fill=blue!50,thick] \tikzstyle{rank2}=[circle,draw=black,fill=blue!30,thick] \tikzstyle{rank3}=[circle,draw=black,fill=blue!10,thick] \node (000) at ( 0, 0, 0) [rank0] {$c_0$}; \node (100) at ( 1, 0, 0) [rank1] {$c_1$}; \node (010) at ( 0, 1, 0) [rank1] {$c_1$}; \node (001) at ( 0, 0, 1) [rank1] {$c_1$}; \node (I00) at (-1, 0, 0) [rank1] {$c_1$}; \node (0I0) at ( 0,-1, 0) [rank1] {$c_1$}; \node (00I) at ( 0, 0,-1) [rank1] {$c_1$}; \node (110) at ( 1, 1, 0) [rank2] {$c_2$}; \node (101) at ( 1, 0, 1) [rank2] {$c_2$}; \node (011) at ( 0, 1, 1) [rank2] {$c_2$}; \node (I10) at (-1, 1, 0) [rank2] {$c_2$}; \node (I01) at (-1, 0, 1) [rank2] {$c_2$}; \node (0I1) at ( 0,-1, 1) [rank2] {$c_2$}; \node (1I0) at ( 1,-1, 0) [rank2] {$c_2$}; \node (10I) at ( 1, 0,-1) [rank2] {$c_2$}; \node (01I) at ( 0, 1,-1) [rank2] {$c_2$}; \node (II0) at (-1,-1, 0) [rank2] {$c_2$}; \node (I0I) at (-1, 0,-1) [rank2] {$c_2$}; \node (0II) at ( 0,-1,-1) [rank2] {$c_2$}; \node (111) at ( 1, 1, 1) [rank3] {$c_3$}; \node (11I) at ( 1, 1,-1) [rank3] {$c_3$}; \node (1I1) at ( 1,-1, 1) [rank3] {$c_3$}; \node (1II) at ( 1,-1,-1) [rank3] {$c_3$}; \node (I11) at (-1, 1, 1) [rank3] {$c_3$}; \node (I1I) at (-1, 1,-1) [rank3] {$c_3$}; \node (II1) at (-1,-1, 1) [rank3] {$c_3$}; \node (III) at (-1,-1,-1) [rank3] {$c_3$}; \draw (III) to (II0) to (II1); \draw (I0I) to (I00) to (I01); \draw (I1I) to (I10) to (I11); \draw (0II) to (0I0) to (0I1); \draw (00I) to (000) to (001); \draw (01I) to (010) to (011); \draw (1II) to (1I0) to (1I1); \draw (10I) to (100) to (101); \draw (11I) to (110) to (111); \draw (III) to (I0I) to (I1I); \draw (II0) to (I00) to (I10); \draw (II1) to (I01) to (I11); \draw (0II) to (00I) to (01I); \draw (0I0) to (000) to (010); \draw (0I1) to (001) to (011); \draw (1II) to (10I) to (11I); \draw (1I0) to (100) to (110); \draw (1I1) to (101) to (111); \draw (III) to (0II) to (1II); \draw (I0I) to (00I) to (10I); \draw (I1I) to (01I) to (11I); \draw (II0) to (0I0) to (1I0); \draw (I00) to (000) to (100); \draw (I10) to (010) to (110); \draw (II1) to (0I1) to (1I1); \draw (I01) to (001) to (101); \draw (I11) to (011) to (111); \end{tikzpicture} \hfill \footnotesize \raisebox{8em}{ \begin{tabular}{|@{~~}c@{~~}|@{~~}c@{~~}|@{~~}c@{~~}|@{~~}c@{~~}||c|c|} \hline $c_3$ & $c_2$ & $c_1$ & \hspace{-0.8em} $-c_0$ & cost & stability \\ \hline $(8)$ & $(12)$ & $(6)$ & $(1)$ & $(\times,+)$ & \rule[-0.6em]{0pt}{1.7em} $\frac{\Delta t}{\Delta x} < \ldots$ \\ \hline\hline \rule[-1em]{0pt}{2.5em} $0$ & $0$ & $1$ & $6$ & $1,6$ & $1/\sqrt{3}$ \\ \hline \rule[-1em]{0pt}{2.5em} $0$ & $\frac{1}{6}$ & $\frac{1}{3}$ & $4$ & $3,18$ & $1/\sqrt{2}$ \\ \hline \rule[-1em]{0pt}{2.5em} $\frac{1}{12}$ & $0$ & $\frac{2}{3}$ & $\frac{14}{3}$ & $3,14$ & $3/\sqrt{21}$ \\ \hline \rule[-1em]{0pt}{2.5em} $\frac{1}{30}$ & $\frac{1}{10}$ & $\frac{7}{15}$ & $\frac{64}{15}$ & $4,26$ & $\sqrt{30}/8$ \\ \hline \end{tabular}} \end{center} \vspace{-1em} \caption{Three-dimensional spatial discretization stencil (left) and summary of coefficients (right) for minimal (top) and three isotropic discretization schemes.} \label{fig:disc} \end{figure} Several codes are available for this purpose, most notably LATTICEEASY by Gary Felder and Igor Tkachev 
\cite{Felder:2000hq, Felder:2007nz}, which is widely used and modified; my own DEFROST~\cite{Frolov:2008hy}, which offers improved performance and visualization capabilities; and a new GPU-accelerated CUDAEASY by Jani Sainio \cite{Sainio:2009hm}. A pseudo-spectral code PSpectRE has just been released as well~\cite{Easther:2010qz}. Most of the implementations (with the exception of PSpectRE) opt for a finite difference method to solve the non-linear partial differential equations (\ref{eq:phi},\ref{eq:chi}). The fields are discretized on a cubic spatial grid of spacing $dx$, and the spatial differential operators are approximated by finite differences \begin{equation}\label{eq:disc} \Delta X = \frac{D[X]}{(dx)^2}, \hspace{1em} (\vec{\nabla} X)^2 = \frac{G[X]}{(dx)^2}. \end{equation} Discretizations of the Laplacian operator involving only the 26 nearest neighbours of a point \begin{equation}\label{eq:D} D[X] \equiv \underbrace{\sum\limits_{x-1}^{x+1}\sum\limits_{y-1}^{y+1}\sum\limits_{z-1}^{z+1}}\limits_\alpha c_{\text{d}(\alpha)} X_\alpha \end{equation} are second order accurate, but the truncation error can be made isotropic to fourth order \cite{Patra:2005} by taking the coefficients $c_\alpha$ as summarized in Figure~\ref{fig:disc}. A critical issue for the accuracy of long-term cosmological simulations is the discretization of the gradient terms in the energy density (\ref{eq:rho}). The discretized energy is not necessarily conserved by the discretized equations of motion, and gradient energy leaking off the grid affects the equation of state and leads to large cumulative errors in the expansion history of the universe \cite{Chambers:2007se,Bond:2009xx}. The best way to avoid this pitfall is to discretize the Lagrangian (\ref{eq:action}) directly. One can show that the proper discretization of the gradient squared, whose variation leads to the Laplacian discretization (\ref{eq:D}) in the equations of motion, is \begin{equation}\label{eq:G} G[X] \equiv \frac{1}{2} \underbrace{\sum\limits_{x-1}^{x+1}\sum\limits_{y-1}^{y+1}\sum\limits_{z-1}^{z+1}}\limits_\alpha c_{\text{d}(\alpha)} (X_\alpha-X_0)^2. \end{equation} Once the spatial operators are discretized, the time evolution problem becomes a system of coupled ordinary differential equations, which can be integrated using any of the usual methods. DEFROST uses a leapfrog scheme, which is simple, fast, and second order accurate in time. Higher order schemes could be used if needed. The symplectic integrator developed in \cite{Bond:2009xx} is capable of reaching machine precision levels, and the hybrid integrator of Huang \textit{et al.}\ \cite{Barnaby:2009wr} could be used if time operator splitting is not possible. \section{Non-Linear Dynamics, Thermalization and Universality within Horizon} The non-linear dynamics that soon takes over the evolution of the scalar fields can be rather non-trivial. Preheating is essentially a non-equilibrium phase transition, and a lot of interesting things can happen. Over the years, detailed numerical studies have been carried out for many parametric resonance \cite{Khlebnikov:1996mc, Prokopec:1996rr, Tkachev:1998dc, Podolsky:2005bw, Dufaux:2006ee, Felder:2006cc} and tachyonic \cite{Felder:2000hj, Felder:2001kt, Copeland:2002ku, GarciaBellido:2002aj} preheating models. In this section, I highlight some features of the non-linear dynamics of preheating on horizon scales. This is local physics as far as cosmology is concerned, as the Hubble horizon size at the end of chaotic inflation is tiny (roughly $1$m redshifted to the present day).
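To make the lattice prescription of the previous section concrete, here is a deliberately minimal toy for model (\ref{eq:V:L4G22}): a kick-drift-kick leapfrog step with the simplest 7-point Laplacian stencil (the top row of the table in Figure~\ref{fig:disc}). It is written in flat, non-expanding space, so none of the Hubble terms of Eqs.~(\ref{eq:phi},\ref{eq:chi}) are included, and the grid size, couplings, and noise amplitude are illustrative choices only.
\begin{verbatim}
import numpy as np

def laplacian(f, dx):
    """Minimal 7-point stencil: c_1 = 1, c_0 = 6."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) +
            np.roll(f, 1, 2) + np.roll(f, -1, 2) - 6.0 * f) / dx**2

def leapfrog_step(phi, chi, pp, pc, dt, dx, lam, g2):
    """One kick-drift-kick step for V = lam/4 phi^4 + g2/2 phi^2 chi^2,
    in Minkowski space (illustration only, no expansion)."""
    pp += 0.5 * dt * (laplacian(phi, dx) - (lam * phi**2 + g2 * chi**2) * phi)
    pc += 0.5 * dt * (laplacian(chi, dx) - g2 * phi**2 * chi)
    phi += dt * pp
    chi += dt * pc
    pp += 0.5 * dt * (laplacian(phi, dx) - (lam * phi**2 + g2 * chi**2) * phi)
    pc += 0.5 * dt * (laplacian(chi, dx) - g2 * phi**2 * chi)
    return phi, chi, pp, pc

# Toy run: 32^3 grid, homogeneous inflaton plus small noise in chi.
n, dx, dt = 32, 0.5, 0.1   # dt/dx < 1/sqrt(3) for stability
phi = np.ones((n, n, n)); chi = 1e-6 * np.random.randn(n, n, n)
pp = np.zeros_like(phi); pc = np.zeros_like(chi)
for _ in range(100):
    phi, chi, pp, pc = leapfrog_step(phi, chi, pp, pc, dt, dx,
                                     lam=1.0, g2=1.875)
\end{verbatim}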
\begin{figure} \begin{center} \begin{tabular}{cc} \epsfig{file=rho-L4G22-147, width=6cm} & \epsfig{file=rho-L4G22-512, width=6cm} \\ (a) $t=36.75$ & (b) $t=128.00$ \\ \end{tabular} \end{center} \caption{Energy density $\rho$ inside the simulation box soon after the onset of instability (a) and during subsequent evolution (b) in preheating model (\ref{eq:V:L4G22}).} \label{fig:rho} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \epsfig{file=strings, width=6cm} & \epsfig{file=rho-L4U1V2-512, width=6cm} \\ (a) $t=103.00$ & (b) $t=512.00$ \\ \end{tabular} \end{center} \caption{Transient formation of topological defects (a) and late-time energy density configuration (b) in preheating model (\ref{eq:V:L4U1V2}) with global $O(2)$ symmetry.} \label{fig:strings} \end{figure} Quantum fluctuations that fall into the unstable bands of Figure~\ref{fig:stab} are amplified and become classical with large occupation numbers. Once the linear instability develops, the field configuration becomes very inhomogeneous, and the particle description is complicated by the non-linear coupling. A useful tracer of the field dynamics is the evolution of the total energy density (\ref{eq:rho}), which is an adiabatic invariant for rapidly oscillating fields. Evolution of the energy density $\rho$ in preheating model (\ref{eq:V:L4G22}) with $g^2/\lambda=1.875$ is shown in Figure~\ref{fig:rho}. For that value of the coupling, long wavelength fluctuations grow fastest, leading to the formation of large blobs, as shown in Figure~\ref{fig:rho}a. Once the density contrast increases to about one, the non-linear interaction kicks in and the density fragments to much smaller scales, as shown in Figure~\ref{fig:rho}b. In the particle description, this can be viewed as upscattering of modes to higher momenta by the non-linear interaction term. Just as happens in phase transitions, one can also produce topological defects during preheating, if the theory allows them. For example, in the preheating model with an $O(2)$-symmetric potential with a small field vacuum expectation value $v$ \begin{equation}\label{eq:V:L4U1V2} V(\phi,\chi) = \frac{1}{4}\, \lambda (\phi^2 + \chi^2 - v^2)^2, \end{equation} global cosmic strings can form \cite{Kasuya:1998td, Tkachev:1998dc}. The initial instability in this model develops similarly to (\ref{eq:V:L4G22}) with $g^2/\lambda=2$, but once the energy density dilutes enough to fill the ring at the bottom of the potential, cosmic strings are produced. String cores ($\phi^2+\chi^2 < v^2/40$) soon after formation are shown in Figure~\ref{fig:strings}a. String loops within the horizon are transient, and will eventually collapse and annihilate. After a while, the energy density configuration once again reaches the highly fragmented state shown in Figure~\ref{fig:strings}b. An important open question is exactly how and when thermalization after preheating happens. The characteristic momentum of particles produced during the linear stage of preheating is determined by the instability band structure. Typically, it is significantly smaller than the thermal equilibrium value, so the particles need to upscatter through non-linear interactions, and thermalization can be delayed for a long time \cite{Kofman:1997yn}. This is supported by numerical simulations, which show a slow scaling regime in the evolution of the field occupation numbers in the $\lambda\phi^4$ model \cite{Micha:2002ey,Micha:2004bv}.
\begin{figure} \begin{center} \begin{tabular}{rr} ~~(a)\hfill $V = \frac{1}{4}\, \lambda \phi^4 + \frac{1}{2}\, g^2 \phi^2 \chi^2$ & ~~(b)\hfill $V = \frac{1}{2}\, m^2 \phi^2 + \frac{1}{2}\, g^2 \phi^2 \chi^2$ \\ \epsfig{file=PDF-L4G22-128.eps, width=6cm} & \epsfig{file=PDF-M2G22-256.eps, width=6cm} \\ ~~(c)\hfill $V = \frac{1}{4}\, \lambda (\phi^2 + \chi^2)^2$ & ~~(d)\hfill $V = \frac{1}{2}\, m^2 \phi^2 + \frac{1}{2}\, \sigma \phi \chi^2 + \frac{1}{4}\, \lambda \chi^4$ \\ \epsfig{file=PDF-L4-U1-2048.eps, width=6cm} & \epsfig{file=PDF-M2S12L4-256.eps, width=6cm} \\ \end{tabular} \end{center} \caption{Universality of the lognormal density distribution in various two-field preheating models with the inflaton decaying via broad parametric resonance.} \label{fig:universality} \end{figure} While the dynamics of the non-equilibrium phase transition can be rather varied, the late stages of preheating appear to have a certain universality to them. After the initial transient, the field evolution leads to a highly inhomogeneous state similar to that shown in Figures~\ref{fig:rho}b and \ref{fig:strings}b, which persists on long time scales, with slow fragmentation going on. A striking feature of this regime is that the one-point probability distribution function of the energy density contrast $\delta\equiv\rho/\bar{\rho}$ appears to be statistically stationary, and universal across a class of preheating models \cite{Frolov:2008hy}. Figure~\ref{fig:universality} shows late-time energy density PDFs (with dilution due to expansion scaled out) for four different preheating models ending via broad parametric resonance. All of them fit the lognormal distribution \begin{equation}\label{eq:lognormal} P(\rho)\, d\rho = \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\left[ - \frac{(\ln\rho - \mu)^2}{2 \sigma^2}\right] \frac{d\rho}{\rho}. \end{equation} It is very tempting to attribute the scaling and universality of the late stages of preheating to scalar field turbulence \cite{Micha:2002ey,Micha:2004bv}, especially since lognormal density distributions are known to arise in relativistic fluid turbulence \cite{Nordlund:1998wj}, but the subject needs further investigation. \section{Large-Scale Primordial Fluctuations from Preheating} \begin{figure} \centerline{\epsfig{file=chik, width=8cm}} \caption{Primordial fluctuation spectrum of the field $\chi$ produced by inflation \cite{Bond:preview}.} \label{fig:spectrum} \end{figure} As interesting as the non-equilibrium field evolution may be, thermalization wipes out most of its details in the final state after the phase transition. However, the dynamics of the transition can affect the expansion history of the universe, and leave an imprint in the observable large-scale curvature fluctuations produced during preheating \cite{Chambers:2007se, Bond:2009xx}. Inflation transforms sub-horizon quantum vacuum fluctuations in all the light fields into super-horizon classical fluctuations. These fluctuations are statistically homogeneous and isotropic Gaussian random fields, and are completely described by their spectra, with amplitude $P(k) \sim H^2/4\pi^2$ evaluated at horizon crossing $H=k/a$. Causally disconnected patches on super-horizon scales evolve essentially independently, and the large-scale curvature fluctuations $\Phi$ are the difference $\delta N$ in the amount of expansion $N\equiv\ln a$ that different patches experience from the constant curvature hypersurface at the end of inflation to the constant density (and temperature) hypersurface once thermalization has occurred \cite{Starobinsky:1982ee, Salopek:1990jq, Sasaki:1995aw}.
Fluctuations of the inflaton $\delta\phi$ are usually the main source of metric curvature fluctuations $\Phi$, with their amplitude enhanced by the slow-roll parameter, $P_\Phi(k) = P_\phi(k)/(2m_{\text{pl}}^2\epsilon)$. It is also entirely possible to convert isocurvature modes from subdominant light fields into observable curvature perturbations, as happens for example in curvaton-type scenarios \cite{Linde:1996gt, Lyth:2001nq} and modulated reheating \cite{Dvali:2003em, Kofman:2003nx}. Resonant preheating dynamics can create and significantly amplify curvature fluctuations from isocurvature modes of light fields, as suggested by \cite{Chambers:2007se, Chambers:2008gu} and calculated in \cite{Bond:2009xx, Bond:preview}. For the simple preheating model (\ref{eq:V:L4G22}) with small values of the coupling $g^2/\lambda$, the second field $\chi$ is light during inflation, and acquires a fluctuation spectrum with power on super-horizon scales comparable to that of the inflaton, as shown in Figure~\ref{fig:spectrum}. Super-horizon fluctuations of $\chi$ are converted to curvature fluctuations through the preheating dynamics. The basic mechanism is that the flat $\chi$-direction of the potential (\ref{eq:V:L4G22}) is suddenly lifted by the expectation value of inhomogeneous terms like $\langle\delta\phi^2\rangle$ when the parametric resonance instability develops \cite{Bond:2009xx}. This modulates the equation of state according to the value of the homogeneous mode of the field $\chi$ at the time the inhomogeneity develops, and creates curvature fluctuations dependent on the initial value of $\chi$ on super-horizon scales. \begin{figure} \centerline{\epsfig{file=brd-lna-2, width=\textwidth}} \caption{Non-linear transfer function $F_{\text{NL}}(\chi)$ connecting the initial value of the super-horizon mode $\chi_{\text{ini}}$ with the curvature fluctuation $\delta N$ it produces \cite{Bond:2009xx}. The thick red line shows the result of averaging over substructure not resolved in CMB observations.} \label{fig:fnl} \end{figure} Calculating the curvature fluctuations generated by preheating involves tracing minute differences in the expansion history, from the end of inflation to thermalization, in non-linear three-dimensional simulations of regions of the universe corresponding to different initial values of $\chi$ on super-horizon scales. This is a very demanding numerical problem, not only in terms of computing power, but also in terms of the precision required. The first attempt encountered numerical difficulties \cite{Chambers:2007se, Chambers:2008gu}, and it required the development of new numerical integration techniques to obtain the answer \cite{Bond:2009xx}. Skipping a lot of technical details which will be discussed elsewhere \cite{Bond:preview}, the total curvature fluctuation $\Phi$ produced in the inflation model (\ref{eq:V:L4G22}) is \begin{equation}\label{eq:Fnl} \Phi(\vec{x}) = \Phi_{\text{G}}(\vec{x}) + F_{\text{NL}}\big(\chi_{\text{G}}(\vec{x})\big), \end{equation} where $\Phi_{\text{G}}$ is the usual nearly Gaussian contribution from the inflaton fluctuations $\delta\phi$, and the second \textit{uncorrelated} term is generated by preheating from the super-horizon mode of the field $\chi$. The exact distribution of the field $\chi$ sampled on the observable part of the sky depends on the inflation history, in an extreme version of cosmic variance.
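The structure of equation (\ref{eq:Fnl}) is easy to emulate numerically. The sketch below draws two independent Gaussian random fields and applies a localized, entirely made-up stand-in for the spiky transfer function of Figure~\ref{fig:fnl} (the function \texttt{F\_NL}, its width and amplitude are hypothetical); the excess skewness of the resulting map illustrates the intermittent non-Gaussianity discussed next.
\begin{verbatim}
# Illustrative construction of the non-Gaussian map Phi = Phi_G + F_NL(chi_G)
# of Eq. (eq:Fnl). The transfer function here is a hypothetical stand-in
# (a smooth bump); all amplitudes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 256
phi_G = 1e-5 * rng.standard_normal((n, n))   # Gaussian inflaton contribution
chi_G = 1e-7 * rng.standard_normal((n, n))   # independent light-field mode

def F_NL(chi, chi0=1e-7, width=3e-8, amp=2e-5):
    # hypothetical localized feature: delta N spikes near special chi values
    return amp * np.exp(-0.5 * ((chi - chi0) / width) ** 2)

Phi = phi_G + F_NL(chi_G)
# Intermittency shows up as excess skewness relative to phi_G alone:
print("skewness:", ((Phi - Phi.mean())**3).mean() / Phi.std()**3)
\end{verbatim}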
The transfer function $F_{\text{NL}}$ shown in Figure~\ref{fig:fnl} is quite non-linear and could lead to non-Gaussian fluctuations of a form very different from the usual weak non-Gaussianity parametrization \begin{equation}\label{eq:fnl} \Phi(\vec{x}) = \Phi_{\text{G}}(\vec{x}) + f_{\text{NL}}\Phi_{\text{G}}^2(\vec{x}). \end{equation} The amplitude of curvature fluctuations produced by preheating in model (\ref{eq:V:L4G22}) is $10^{-5}$, which is comparable to the curvature fluctuations from the inflaton, so the two could potentially be disentangled by searching for a non-Gaussian component in the observed CMB temperature anisotropy. \section{Discussion: Going after Observable Signatures of Preheating} Very little is known about how inflation actually ended, and what the high energy physics was like at those energy scales (or even what the inflaton itself is). Traces of reheating are hidden from us by opaque plasma in a nearly thermal state, and are unobservable directly. One must seek signatures of preheating that survive thermalization and could be detected. These could include stable relics like topological defects \cite{Kasuya:1998td, Tkachev:1998dc, Battye:1998xe} or primordial black holes \cite{Rubin:2000dq, Suyama:2004mz, Suyama:2006sr}, a stochastic gravitational wave background produced by inhomogeneities during reheating \cite{Easther:2006gt, Easther:2006vd, GarciaBellido:2007dg, GarciaBellido:2007af, Dufaux:2007pt, Caprini:2007xq}, or anomalies in the expansion history of the universe imprinted in the primordial curvature fluctuations \cite{Dvali:2003em, Kofman:2003nx, Chambers:2007se, Chambers:2008gu, Bond:2009xx, Bond:preview, Kohri:2009ac, Chambers:2009ki}. Of these, the last effect appears to be the most promising observationally, as stable relic formation is difficult without spoiling cosmology, and stochastic gravitational waves are very hard to detect. Curiously enough, the simple model of preheating (\ref{eq:V:L4G22}) could generate non-Gaussian curvature fluctuations of observable amplitude which are intermittent, producing primordial ``cold spots'' \cite{Bond:2009xx}. Although debated \cite{Bennett:2010jb}, the CMB temperature map appears to have slight statistical anomalies in the form of a cold spot in the southern hemisphere \cite{Cruz:2009nd}, and a slight discrepancy between the north and south temperature anisotropy spectra \cite{Eriksen:2007pc}. An exciting possibility is that a primordial ``cold spot'', whether from preheating or some other early universe source, could potentially explain both. A tell-tale signature of a \textit{primordial} non-Gaussian cold spot is the associated $E$-mode polarization pattern around it, which might be possible to test with Planck data \cite{Vielva:2010vn}. Primordial potential ``dips'' would also manifest themselves in the formation of large-scale structure. \section*{References}
\section{Introduction}\label{sec:int} The Randall-Sundrum II braneworld model (RSII, for short) describes the Universe as a single brane embedded in a five-dimensional (5D) nonfactorizable background geometry \cite{randall/1999}. It properly reproduces the Newtonian and General Relativity theories of gravity in the corresponding regimes \cite{maartens/2004,garriga/2000,figueras/2011,kim/2004}. RSII has been applied to different areas, yielding remarkable results. Several aspects of RSII cosmology were investigated, for example, in References \cite{ramirez/2004,holanda/2013,hebecker/2001,meehan/2014}. Constraints on the braneworld quantities have been obtained from different approaches \cite{tsujikawa/2004,liddle/2003,yagi/2011,mm/2014}. The astrophysics of stellar objects was also deeply analysed in RSII. C. Germani and R. Maartens first showed that in RSII the vacuum exterior of a spherical star is not in general a Schwarzschild space-time, but presents radiative-type stresses generated by 5D graviton effects \cite{germani/2001}. In \cite{visser/2003}, the 4D Gauss and Codazzi equations were solved for an arbitrary static spherically symmetric star. It was shown how the 4D boundary data should be propagated into the 5D bulk in order to get the full space-time geometry. Further properties of compact stars in RSII were studied in \cite{la/2017}. A new branch of stellar configurations was found, which can violate the general relativistic causal limit and may have an arbitrarily large mass. Moreover, the properties of quark and hadronic stars were analysed in \cite{la/2015}. RSII has also been widely applied in the astrophysics of black holes (BHs), likewise yielding remarkable results. For instance, the properties of gravitational lensing by BHs in RSII were explored in \cite{bin-nun/2010}. In Reference \cite{wang/2016}, the authors studied the process of gravitational collapse driven by a massless scalar field which is confined to the brane. Further BH analyses in RSII may be checked in References \cite{abdolrahimi/2013,abdolrahimi/2013b,tanahashi/2008}. In 2016, the first detection of gravitational waves was reported \cite{abbott/2016} by the Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory) Team. It was claimed that the detected gravitational wave signal was generated at a redshift $z\sim0.09$ by a BH binary system. Later in the same year, it was argued that the signal-to-noise ratio and quality of the referred data were such that there was some room to alternatively interpret the event as a gravastar (gravitational vacuum star) binary system \cite{chirenti/2016}. Gravastars were proposed in \cite{mazur/2004} by Mazur and Mottola as end states of gravitational collapse alternative to BHs. The external region of a gravastar is described by a Schwarzschild space-time, such that $p=\rho=0$, with $p$ and $\rho$ being the pressure and matter-energy density, respectively, whereas its surface is a thin shell of ultrarelativistic matter, with $p=\rho$. Its internal region is filled by dark energy, with $p=-\rho$. In \cite{chirenti/2007}, it was shown that it is possible to discern a gravastar from a BH of the same mass due to their different quasi-normal modes. The problem of the ergoregion instability for the viability of gravastars was also investigated in \cite{chirenti/2008,cardoso/2008}. In \cite{harko/2009}, the possibility of distinguishing BHs and gravastars using the properties of their accretion disks was considered.
Observational constraints on gravastars were derived from well-known BH candidates \cite{broderick/2007}. Further observational distinctions between BHs and gravastars were discussed in \cite{sakai/2014}, in which it was argued that high-resolution very-long-baseline-interferometry observations can contribute in this regard in the near future. Moreover, it should be remarked that, by means of the usual Tolman-Oppenheimer-Volkoff equation, it was shown that the material content of gravastars cannot be described by perfect fluids \cite{cattoen/2005}. Instead, they should have anisotropic pressures. In this regard, it is well known that the 5D setup of RSII and other braneworld models induces anisotropy in brane objects, so it might be interesting and valuable to investigate gravastars in the braneworld. In the present paper we will obtain and investigate gravastar solutions in RSII. \section{Braneworld field equations}\label{sec:bw} \subsection{Basic equations} The field equations of RSII in terms of an effective energy-momentum tensor read \cite{germani/2001,shiromizu/2000,maartens/2000} \begin{equation}\label{bw1} G_{\mu\nu}=k^{2}T_{\mu\nu}^{\rm eff}, \end{equation} with $k^{2}=8\pi G$, where $G$ is the Newtonian gravitational constant and the speed of light is $c=1$, and such that the effective total energy density and pressure, anisotropic stress and energy flux read, respectively: \begin{eqnarray} &&\rho^{\rm eff}=\rho+\frac{1}{2\lambda}\left(\rho^{2}+\frac{12}{k^{4}}\mathcal{U}\right),\label{bw2} \\ &&p^{\rm eff}=p+\frac{1}{2\lambda}\left[\rho(\rho+2p)+\frac{4}{k^{4}}\mathcal{U}\right],\label{bw3} \\ &&\pi_{\mu\nu}^{\rm eff}=\frac{6}{k^{4}\lambda}\mathcal{P}_{\mu\nu},\label{bw4} \\ &&q_\mu^{\rm eff}=\frac{6}{k^{4}\lambda}\mathcal{Q}_\mu,\label{bw5} \end{eqnarray} with $\lambda$ being the brane tension; the bulk cosmological constant was taken such that the brane cosmological constant is null. Moreover, $\mathcal{U}$, $\mathcal{Q}_\mu$ and $\mathcal{P}_{\mu\nu}$ represent, respectively, the nonlocal energy density, the nonlocal energy flux and the nonlocal anisotropic stress. For a static spherically symmetric space-time, the nonlocal energy flux and nonlocal anisotropic stress become: \begin{eqnarray} &&\mathcal{Q}_\mu=0,\\ &&\mathcal{P}_{\mu\nu}=\mathcal{P}\left(r_\mu r_\nu-\frac{1}{3}h_{\mu\nu}\right),\label{bw6} \end{eqnarray} with $r^{\mu}$ being a unit radial vector and $h_{\mu\nu}=g_{\mu\nu}+u_\mu u_\nu$, where $g_{\mu\nu}$ is the metric and $u_\mu$ is the four-velocity. \subsection{Static structure equations} With the aim of describing the properties of a spherically symmetric static fluid distribution, we consider the line element in Schwarzschild coordinates: \begin{equation}\label{bwg1} ds^{2}=-A^{2}(r)dt^{2}+B^{2}(r)dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \end{equation} with $A(r)$ and $B(r)$ being metric potentials. The nonzero components of Einstein's field equations on the brane for the metric above are: \begin{eqnarray} &&\frac{1}{r^2}-\frac{1}{r^2B^2}\left[1-2r\frac{B'}{B}\right]=k^{2}\rho^{\rm eff},\label{bwg2}\\ &&\frac{1}{r^2}-\frac{1}{r^2B^2}\left[1+2r\frac{A'}{A}\right]=-k^{2}p^{\rm eff}_r,\label{bwg3}\\ &&\frac{1}{B^{2}}\left[\frac{A''}{A}+\frac{A'}{rA}-\frac{A'B'}{AB}-\frac{B'}{rB}\right]=k^2\,p^{\rm eff}_t, \end{eqnarray} with primes denoting radial derivatives.
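The geometric side of these equations can be verified symbolically. The following sympy sketch (not part of the original derivation) builds the Christoffel symbols and the Ricci tensor for the metric (\ref{bwg1}) by hand and checks that $-G^t{}_t$ reproduces the left-hand side of Eq.~(\ref{bwg2}):
\begin{verbatim}
# Symbolic check that the metric (bwg1) reproduces the tt equation (bwg2).
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
A, B = sp.Function('A')(r), sp.Function('B')(r)
x = [t, r, th, ph]
g = sp.diag(-A**2, B**2, r**2, r**2 * sp.sin(th)**2)
gi = g.inv()
n = 4

# Christoffel symbols Gam[l][i][j] = (1/2) g^{ls}(d_j g_{si}+d_i g_{sj}-d_s g_{ij})
Gam = [[[sum(gi[l, s] * (sp.diff(g[s, i], x[j]) + sp.diff(g[s, j], x[i])
             - sp.diff(g[i, j], x[s])) for s in range(n)) / 2
         for j in range(n)] for i in range(n)] for l in range(n)]

# Ricci: R_{ij} = d_l Gam^l_{ij} - d_j Gam^l_{il}
#               + Gam^l_{ls} Gam^s_{ij} - Gam^l_{js} Gam^s_{il}
Ric = sp.Matrix(n, n, lambda i, j: sum(
    sp.diff(Gam[l][i][j], x[l]) - sp.diff(Gam[l][i][l], x[j])
    + sum(Gam[l][l][s] * Gam[s][i][j] - Gam[l][j][s] * Gam[s][i][l]
          for s in range(n)) for l in range(n)))

Rs = sum(gi[i, j] * Ric[i, j] for i in range(n) for j in range(n))
G_tt = Ric[0, 0] - Rs * g[0, 0] / 2
lhs = -sp.simplify(G_tt * gi[0, 0])   # this is -G^t_t = k^2 rho_eff
rhs = 1/r**2 - (1 - 2*r*sp.diff(B, r)/B) / (r**2 * B**2)
print(sp.simplify(lhs - rhs))         # expected output: 0
\end{verbatim}
The $rr$ and angular components, Eq.~(\ref{bwg3}) and the following one, can be checked in exactly the same way.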
The functions $\rho^{\rm eff}$, $p^{\rm eff}_r$ and $p^{\rm eff}_t$ are given by the equalities: \begin{eqnarray} &&\rho^{\rm eff}=\rho\left(1+\frac{\rho}{2\lambda}\right)+\frac{6\,\mathcal{U}}{k^4\,\lambda},\\ &&p^{\rm eff}_r=p+\frac{\rho}{2\lambda}\left(\rho+2p\right)+\frac{2\,\mathcal{U}}{k^4\,\lambda}+\frac{4\,\mathcal{P}}{k^4\,\lambda},\\ &&p^{\rm eff}_t=p+\frac{\rho}{2\lambda}\left(\rho+2p\right)+\frac{2\,\mathcal{U}}{k^4\,\lambda}-\frac{2\,\mathcal{P}}{k^4\,\lambda}. \end{eqnarray} From $p^{\rm eff}_r\neq p^{\rm eff}_t$, one sees that the effects of the extra dimension produce anisotropy in the fluid contained in the star. For our purposes, it will also be of great importance to know the covariant derivative of both the energy-momentum tensor and the effective energy-momentum tensor on the brane. The conservation equations $\nabla^{\nu}T_{\mu\nu}=0$ and $\nabla^{\nu}T_{\mu\nu}^{\rm eff}=0$ for the metric (\ref{bwg1}) read, respectively, \begin{eqnarray} &&\hspace{-0.5cm}p'+\frac{A'}{A}(\rho+p)=0,\label{bwg4} \\ &&\hspace{-0.5cm}\mathcal{U}'+\frac{2\,A'}{A}(2\mathcal{U}+\mathcal{P})+2\mathcal{P}'+\frac{6\mathcal{P}}{r}=-\frac{k^{4}}{2}\rho'(\rho+p). \label{bwg5} \end{eqnarray} We remark here, for the sake of completeness, that $\rho(r)$, $p(r)$, $\mathcal{P}(r)$ and $\mathcal{U}(r)$, as well as $A(r)$ and $B(r)$, depend on the radial coordinate only. \section{Gravastar in the Braneworld}\label{sec:bwg} \subsection{General remarks} The static structure of the gravastar under study is envisaged in the following form: the interior of the object is surrounded by a thin shell of ultra-relativistic fluid, while the outer space-time is described by a vacuum exterior solution. The three aforementioned regions are structured considering the following equations of state: \begin{itemize} \item Interior: \hspace{0.3cm}$0\leq r<r_1$; \hspace{0.3cm}$p=-\rho$, \item Shell: \hspace{0.55cm}$r_1<r<r_2$; \hspace{0.3cm}$p=\rho$, \item Exterior: \hspace{0.0cm}$r_2<r$; \hspace{1.1cm}$p=\rho=0$, \end{itemize} with $r_1$ and $r_2$ being the interior and exterior radii of the gravastar, respectively. In addition, with the aim of comparing our results to those obtained by Mazur and Mottola \cite{mazur/2004}, we consider $\mathcal{U}=0$ in both the interior and shell of the gravastar. Moreover, we assume that the outer space-time is described by a Schwarzschild vacuum solution. \subsection{Interior of the gravastar} First of all, it is important to say that in the interior region of the gravastar, from Eq.~(\ref{bwg4}), $p=-\rho=-\rho_0={\rm const}$. Now, following \cite{mazur/2004}, we define the metric potential $B^2$ such that: \begin{equation}\label{def_b} B^{-2}=\left(1-I^{2}_{0}r^2\right), \end{equation} with $I_0$ being a constant. Considering Eq.~(\ref{def_b}) in Eq.~(\ref{bwg2}), one obtains that $I_0$ and $\rho_0$ are connected through \begin{equation}\label{eq_ho} I_0=\sqrt{\frac{k^2}{3}\left(\rho_0+\frac{\rho_0^2}{2\lambda}\right)}. \end{equation} On the other hand, by replacing the nonlocal pressure \begin{equation}\label{P_1} \mathcal{P}=-\left(\frac{k^2\lambda}{2r}\right)\frac{k_1}{1-k_1}\frac{B'}{B^3}, \end{equation} where $k_1$ is a constant, into the sum of Eqs.~(\ref{bwg2}) and (\ref{bwg3}), we can obtain an equation that relates the two metric potentials $A$ and $B$ as \begin{equation}\label{eq_a_b_k} A=k_2\,B^{-\frac{1}{1-k_1}}, \end{equation} where $k_2$ represents an integration constant.
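As a quick numerical illustration of the interior solution, the sketch below evaluates Eqs.~(\ref{def_b}), (\ref{eq_ho}) and (\ref{eq_a_b_k}) for illustrative parameter values (the numbers for $\rho_0$, $\lambda$, $k_1$ and $k_2$ are assumptions, in units $G=c=1$):
\begin{verbatim}
# Numerical sketch of the interior (de Sitter-like) solution.
# All parameter values are illustrative, in units G = c = 1.
import numpy as np

ksq = 8.0 * np.pi           # k^2 = 8 pi G with G = 1
rho0, lam = 1e-3, 1.0       # assumed energy density and brane tension
k1, k2_const = 0.1, 1.0     # assumed constants k_1 and k_2

I0 = np.sqrt(ksq / 3.0 * (rho0 + rho0**2 / (2.0 * lam)))   # Eq. (eq_ho)
r = np.linspace(0.0, 0.9 / I0, 5)
Bm2 = 1.0 - (I0 * r) ** 2                                  # Eq. (def_b): B^{-2}
A = k2_const * Bm2 ** (0.5 / (1.0 - k1))                   # Eq. (eq_a_b_k)
for ri, b, a in zip(r, Bm2, A):
    print(f"r = {ri:8.3f}   B^-2 = {b:6.4f}   A = {a:6.4f}")
\end{verbatim}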
From Eq.~(\ref{eq_a_b_k}), for $k_1=0$ we determine that the interior region is described by the de Sitter metric. We note that another form for the function $\mathcal{P}$ can be found by integrating Eq.~(\ref{bwg5}), resulting in \begin{equation}\label{eq_p_k1} \mathcal{P}=\frac{k^{3}_{3}}{A\,r^3}, \end{equation} where $k_3$ is a constant. Evidently, the functions presented in (\ref{P_1}) and (\ref{eq_p_k1}) must be equal; thus we find that the constants $k_1$ and $k_3$ are related through: \begin{equation}\label{k_1_and_k_3} k_1=\frac{k_{3}^{3}}{k_{3}^{3}-k^2\lambda\,r^3I_0^2\left(1-r^2I_0^2\right)^{\frac{1}{2(1-k_1)}}}. \end{equation} In order to obtain regular solutions we need to have: \begin{equation} k^2\lambda\,r^3I_0^2\left(1-r^2I_0^2\right)^{\frac{1}{2(1-k_1)}}\neq k_{3}^{3}. \end{equation} From Eq.~(\ref{k_1_and_k_3}), note that if $k_3=0$ then $k_1=0$, and if $\lambda\to\infty$ then $k_1\to0$ for any $k_3$. In both cases we derive $\mathcal{P}=0$, indicating that the Mazur-Mottola case \cite{mazur/2004} is recovered. \subsection{Shell} As mentioned above, we consider that the pressure and energy density of the fluid contained in the shell are related through $p=\rho$. In order to determine $\mathcal{P}$, we substitute (\ref{bwg4}) into (\ref{bwg5}), yielding: \begin{equation}\label{p_rho2} \mathcal{P}=-\frac{k^4}{3}\rho^2. \end{equation} Now, as in Mazur and Mottola \cite{mazur/2004}, let us introduce a dimensionless variable $\xi$ as \begin{equation}\label{anzat} \xi=k^2r^2\rho. \end{equation} Replacing Eq.~(\ref{anzat}) together with Eq.~(\ref{p_rho2}) and Eq.~(\ref{bwg4}) into Eqs.~(\ref{bwg2}) and (\ref{bwg3}), respectively, these can be rewritten as: \begin{eqnarray} &&\frac{dr}{r}=\frac{d\left(B^{-2}\right)}{1-\frac{1}{B^2}-\xi-\frac{\xi^2}{2\lambda\,k^2r^2}},\label{eq_B_r}\\ &&\frac{d\left(B^{-2}\right)}{B^{-2}}=-\left[\frac{1-\frac{1}{B^2}-\xi-\frac{\xi^2}{2\lambda\,k^2r^2}}{1-\frac{3}{B^2}+\xi+\frac{\xi^2}{6\lambda\,k^2r^2}}\right]\frac{d\xi}{\xi}.\label{eq_B_2} \end{eqnarray} It is difficult to obtain analytical solutions from these field equations. Nevertheless, this can be achieved by taking into account that in the thin shell limit, $0<B^{-2}\ll1$. Under this limit, the integration of Eq.~(\ref{eq_B_2}) yields \begin{equation}\label{eq_B_2_aprox} B^{-2}\simeq\epsilon\frac{\left[\frac{\xi^2}{6\lambda\,k^2r^2}+\xi+1\right]^2}{\xi}, \end{equation} where $\epsilon$ is an integration constant. Moreover, since $B^{-2}\ll1$, we also need $\epsilon\ll1$. Finally, by making use of Eqs.~(\ref{eq_B_r}) and (\ref{eq_B_2_aprox}), we obtain: \begin{equation}\label{dr_dw_w2} dr\simeq-\epsilon\,r\left[\frac{\xi^2}{6\lambda\,k^2r^2}+\xi+1\right]\frac{d\xi}{\xi^2}. \end{equation} \subsection{Exterior of the gravastar} For this region, we consider that $\mathcal{U}=\mathcal{P}=0$. This ensures that the exterior space-time is described by the Schwarzschild vacuum metric. Thus, the gravastar outer space-time is depicted by the line element: \begin{equation} ds^2=-Fdt^2+F^{-1}dr^2+r^2\left(d\theta^2+\sin^2\theta d\phi^2\right), \end{equation} with \begin{equation}\label{Eq_F} F=1-\frac{2MG}{r}, \end{equation} where $M$ represents the total mass of the gravastar. \section{Junction conditions} As previously shown, the study of gravastars involves an inner region and an outer region separated by a shell of matter which we shall denote $\Sigma$.
In order to match the physical and geometric quantities of the inner and outer regions with those of the surface, we will use the Israel-Darmois junction conditions \cite{Israel/1966,Darmois/1927}. These state that the metric coefficients are continuous across $\Sigma$ ($r=R$), but their derivatives need not be. It is possible to determine the surface energy-momentum tensor with the help of the Lanczos equation \cite{Lanczos/1924}: \begin{equation}\label{surface_TEM} {\cal S}^{i}_{j}=\frac{1}{8\pi}\left(\kappa^{i}_{j}-\delta^{i}_{j}\kappa^{k}_{k}\right), \end{equation} with the Latin indexes running as $i, j=t, \theta, \phi$. The factor $\kappa_{ij}$ denotes the discontinuity in the extrinsic curvature $K_{ij}$, with $\kappa_{ij}=K^{+}_{ij}-K^{-}_{ij}$, where the signs $-$ and $+$ correspond respectively to the interior and exterior regions. The extrinsic curvature is defined by: \begin{equation} K_{ij}^{\pm}=-n_{\beta}^{\pm}\left(\partial_{j}e_{i}^{\beta}+\Gamma_{\mu\nu}^{\beta}e_{i}^{\mu}e_{j}^{\nu}\right), \end{equation} with $e_{i}^{\mu}=\frac{\partial x^{\mu}}{\partial\zeta^{i}}$, where $\zeta^{i}$ represents the coordinates on the shell, $n_{\beta}^{\pm}$ denotes the normal vector to the surface and $\Gamma^{\beta}_{\mu\nu}$ refers to the Christoffel symbols. Taking ${\cal S}^{i}_{j}={\rm diag}(\sigma,-v,-v)$, with $\sigma$ and $v$ being respectively the surface energy density and the surface pressure, the Lanczos equation can be cast in the form: \begin{eqnarray} &&\sigma=-\frac{1}{4\pi}\kappa^{\theta}_{\theta},\label{sigma}\\ &&v=\frac{1}{8\pi}\left(\kappa^{t}_{t}+\kappa^{\theta}_{\theta}\right). \end{eqnarray} Using Eqs.~(\ref{def_b}), (\ref{Eq_F}) and (\ref{sigma}), the thin shell mass can be found from the equality: \begin{equation}\label{def_ms} m_s=4\pi\,R^2\sigma=-R\sqrt{1-\frac{2M}{R}}+R\sqrt{1-I^{2}_{0}R^2}. \end{equation} Considering (\ref{eq_ho}), from Eq.~(\ref{def_ms}) we obtain that the total mass of the gravastar is given by: \begin{equation}\label{tota_mass} M=\frac{R}{2}-\frac{R}{2}\left[\sqrt{1-\frac{k^2R^2}{3}\left(\rho_0+\frac{\rho_0^2}{2\lambda}\right)}-\frac{m_s}{R}\right]^2. \end{equation} It is important to mention that if we consider $\lambda\to\infty$, Eq.~(\ref{tota_mass}) reduces to the same equation as found in \cite{das/2017} and \cite{Banerjee/2016}, in their particular cases. \section{Some physical features of the model} It is important to highlight that the shell of the gravastar is limited by the interfaces $R_1=R$ and $R_2=R+\epsilon$, thus connecting the inner space-time with the outer space-time. In order to analyze the principal physical characteristics of the matter in the shell, some definitions used by Mazur and Mottola \cite{mazur/2004} are considered throughout this section. \subsection{The proper thickness of the shell} The proper thickness of the shell is determined from \begin{equation} \ell=\int_{R_1}^{R_2} Bdr. \end{equation} Using (\ref{eq_B_2_aprox}) and (\ref{dr_dw_w2}) in the equation above, it becomes \begin{equation}\label{shell} \ell\simeq\epsilon^{1/2}r\int_{\xi_1}^{\xi_2}\xi^{-3/2}d\xi\simeq2\epsilon^{3/2}R, \end{equation} where we notice that $\ell$ is very small in relation to $R$. This last result is the same as found by Mazur and Mottola \cite{mazur/2004}. In this way we understand that the 5D bulk has no effect on the proper thickness of the shell.
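A short numerical check of these relations is straightforward: the sketch below verifies that Eq.~(\ref{tota_mass}) inverts Eq.~(\ref{def_ms}) and evaluates the proper thickness (\ref{shell}); all parameter values are illustrative only.
\begin{verbatim}
# Numerical sketch of the junction conditions and shell thickness.
# Units G = c = 1; the numbers (R, rho0, lam, M, eps) are illustrative.
import numpy as np

ksq = 8.0 * np.pi                       # k^2 = 8 pi G, with G = 1
R, rho0, lam, M, eps = 10.0, 5e-5, 1.0, 0.3, 1e-3

I0sq = (ksq / 3.0) * (rho0 + rho0**2 / (2.0 * lam))         # Eq. (eq_ho)
m_s = (-R * np.sqrt(1.0 - 2.0 * M / R)
       + R * np.sqrt(1.0 - I0sq * R**2))                    # Eq. (def_ms)
# Recover M from Eq. (tota_mass) as a consistency check:
M_back = R / 2.0 - (R / 2.0) * (np.sqrt(1.0 - I0sq * R**2) - m_s / R) ** 2
ell = 2.0 * eps**1.5 * R                                    # Eq. (shell)

print(f"m_s = {m_s:.4f},  M recovered = {M_back:.4f} (input {M}),"
      f"  ell = {ell:.2e} << R = {R}")
\end{verbatim}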
\subsection{Energy within the shell} The energy inside the thin shell is determined as \begin{equation}\label{energy_shell} {\cal E}=4\pi\int_{R_1}^{R_2}\rho^{\rm eff}r^2dr. \end{equation} Considering Eqs.~(\ref{anzat}) and (\ref{dr_dw_w2}) in Eq.~(\ref{energy_shell}), we obtain \begin{equation} {\cal E}=\frac{4\pi\epsilon R}{k^2}\int_{\xi_2}^{\xi_1}\left[1+\frac{\xi}{2\lambda k^2r^2}\right]\left[1+\frac{1}{\xi}+\frac{\xi}{6\lambda k^2r^2}\right]d\xi, \end{equation} which, upon integration, yields \begin{eqnarray} \fl&&{\cal E}=\frac{4\pi\epsilon R}{k^2}\left[\ln(\xi)+\frac{1}{9}\left[9\left(\frac{1}{2\lambda R^2k^2}+1\right)\xi+6\left(\frac{1}{2\lambda R^2k^2}\right)\xi^2+\left(\frac{1}{2\lambda R^2k^2}\right)^2\xi^3\right]\right]_{\xi_2}^{\xi_1}. \end{eqnarray} Since $\epsilon\ll1$, the energy ${\cal E}$ to first order in $\epsilon$ is given by \begin{eqnarray}\label{resulting_eq} {\cal E}\simeq\frac{8\pi\epsilon^2 R}{k^2}\left[1+\left(\frac{1}{2}+\frac{2\xi_2}{3}\right)\left(\frac{1}{2\lambda R^2k^2}\right)+\frac{\xi_2^2}{6}\left(\frac{1}{2\lambda R^2k^2}\right)^2\right]. \end{eqnarray} The resulting equation provides information about the energy within the shell: it indicates that the energy is directly proportional to $\epsilon^2$. Note that the energy in the shell, Eq.~(\ref{resulting_eq}), is larger than the one derived by Mazur and Mottola \cite{mazur/2004}. This is due to the effects of the 5D bulk on the brane, which help to increase the energy within the shell. In comparison with alternative-gravity gravastars, such as the one found in $f(R,T)$ gravity, one notes a stronger dependence of the energy on $\epsilon$ in the braneworld model than in $f(R,T)$ gravity \cite{das/2017}. \subsection{Entropy of the shell} We calculate the entropy in the shell as \begin{equation}\label{entropy} S=4\pi\int_{R_1}^{R_2} s\,r^2Bdr, \end{equation} where $s$ represents the local specific entropy density, given by: \begin{equation}\label{entropy2} s=\alpha\frac{k_B}{\hbar}\sqrt{\frac{p}{2\pi G}}, \end{equation} with $\alpha$ being a dimensionless constant, $k_B$ representing the Boltzmann constant and $\hbar=h/(2\pi)$, where $h$ is the Planck constant. Substituting Eqs.~(\ref{anzat}), (\ref{eq_B_2_aprox}) and (\ref{dr_dw_w2}) into (\ref{entropy}), it becomes: \begin{equation}\label{entropy3} S\simeq\frac{\alpha k_B}{\hbar G}\epsilon^{1/2}R^2\ln\left(\frac{\xi_1}{\xi_2}\right). \end{equation} As in Ref.~\cite{mazur/2004}, by taking into account that $\xi_1/\xi_2=1+{\cal O}(\epsilon)$ as well as Eq.~(\ref{shell}), we have that Eq.~(\ref{entropy3}) yields: \begin{equation}\label{entropy4} S\simeq\frac{\alpha k_B R\ell}{2\hbar G}. \end{equation} From Eq.~(\ref{entropy4}), we can see that the entropy depends directly on the proper thickness of the shell. This result is equal to the one derived by Mazur and Mottola in \cite{mazur/2004}, showing that the 5D bulk does not affect the entropy of the shell. Comparing with gravastars obtained in $f(R,T)$ gravity \cite{das/2017}, we note that here the entropy depends on $\epsilon$ as $\epsilon^{3/2}$, while in $f(R,T)$ gravity it goes approximately as $\epsilon^{1/2}$. \section{Discussion} With recent advances in gravitational wave observational astronomy, several studies of gravastars as possible sources of gravitational radiation have been made. Let us briefly review some of those important contributions.
In \cite{pani/2009}, it was discussed in depth how the presence or absence of an event horizon can produce qualitative differences in the gravitational waves emitted by ultracompact objects. In \cite{pani/2010}, it was shown that the gravitational signal emitted by a gravastar could provide a unique signature of the horizonless nature of such an object. In \cite{uchikata/2016} it was discussed how a measurement of the tidal deformability from the gravitational-wave detection of a compact-binary inspiral can be used to constrain gravastars. A further alternative to detect gravastars was proposed in \cite{kubo/2016}, through their gravitational lensing. Naturally, the recently reported results regarding the M87 BH shadow \cite{event_horizon_collaboration/2019} can also help us in the near future to distinguish BHs and gravastars. We have constructed in the present paper gravastar solutions in RSII. Besides \cite{das/2017,Banerjee/2016}, gravastars have also been constructed in alternative gravity theories in References \cite{bhar/2014,rahaman/2015}. Alternative gravity theories have mostly been used to try to account for the cosmological dark sector of the universe. In fact, it is possible, through modified gravity, to describe the galactic and cosmological scales of the universe without dark matter and dark energy \cite{capozziello/2007,zlosnik/2007,nojiri/2006,joyce/2016,kase/2019,nojiri/2005,woodard/2007,cognola/2006,kase/2018,amendola/2007}. Although RSII has been proposed as a solution to the hierarchy problem, it also performs well in cosmology. In this regard, besides \cite{ramirez/2004,holanda/2013,hebecker/2001,meehan/2014}, one can also check References \cite{nozari/2009,barros/2016}. The investigation of gravastars in braneworld scenarios can give us new insights into both the geometrical and physical features of these objects and the braneworld setup itself. Particularly, here we have derived different gravastar physical parameters in RSII, as Mazur and Mottola have done in the standard gravity scenario. We noted that both the mass and shell energy are altered with respect to the Mazur-Mottola original results due to the presence of the brane tension. On the other hand, the proper thickness of the shell and the shell entropy are not altered by the RSII configuration. The present model can be used to investigate possible 5D effects in some physical phenomena that arise in the study of gravastars. An extension of the present formalism can be used to analyze the ergoregion instability in rotating gravastars \cite{chirenti/2008,cardoso/2008}, since the projections on the brane could help the stability of gravastars against the ergoregion instability, just as they help the fluid pressure of compact stars to support more mass against gravitational collapse \cite{la/2017,la/2015}. Let us further analyse the physical features of our model. It should also be interesting to compare our results with the present literature on alternative gravity gravastars, such as the model presented in \cite{das/2017} within $f(R,T)$ gravity \cite{harko/2011}, for which $f(R,T)$ is a general function of the Ricci scalar $R$ and the trace of the energy-momentum tensor $T$. The proper thickness $\ell$ of the shell found in our model is directly proportional to $R$, the radius of the inner shell. It also scales with the shell parameter as $\ell\sim\epsilon^{3/2}$. Since $\epsilon\ll1$, the latter proportionality indicates that $\ell\ll R$.
The entropy we have obtained increases gradually with $\epsilon$, a result also obtained in $f(R,T)$ gravity \cite{das/2017}. On the other hand, the energy within the shell has a stronger dependence on $\epsilon$, namely $\sim\epsilon^2$, when compared to the $f(R,T)$ gravity result, which reads $\sim\epsilon$. To conclude, we remark that our results recover the Mazur-Mottola results in the regime $\lambda\rightarrow\infty$ (i.e., $\lambda^{-1}\rightarrow 0$), as expected in braneworld models. \section*{Acknowledgment} PHRSM would like to thank Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP), grant $2018/20689-7$. The authors thank FAPESP for financial support under the project $2013/26258-4$. \section*{References}
\section{Motivation} Let us consider the braid group~$B_n$ acting on the left on the curve complex of the $n$~times punctured disk, which we denote $\mathcal{CC}$. This complex is equipped with a base point~$c_0$, which we take to be a round curve in the disk. There is an obvious map $$ B_n \longrightarrow \mathcal{CC}, \quad x\mapsto x.c_0. $$ Now, consider the classical Garside structure of the braid group: permutation braids (or simple braids) are chosen as a preferred set of generators. For any element $x$~of~$B_n$, the Garside \emph{mixed normal form} (as defined in~\cite{Thurston}) gives rise to a path in the Cayley graph, which is actually a geodesic~\cite{Charney}. We shall look at the image of this geodesic in $\mathcal{CC}$ -- thus, if $x$~has Garside mixed normal form $x=x_1\cdot\ldots\cdot x_l$, we consider the path $c_0$, $x_1.c_0$, $x_1x_2.c_0, \ \ldots\ $, $x_1\cdot\ldots\cdot x_l.c_0$ in the curve complex. Not much is known about this family of paths. For instance, it is not known whether it forms a uniform family of unparametrized quasi-geodesics. We conjecture that this is true, but this is by no means obvious: it is definitely \emph{not} true that any quasi-geodesic in $B_n$ projects to an unparametrized quasi-geodesic in $\mathcal{CC}$~\cite{SchleimerWiest}. Even if we suppose that this first conjecture is true, i.e.\ normal form words in~$B_n$ project to quasi-geodesics in~$\mathcal{CC}$, another question remains. Indeed, let us look at a triangle in the Cayley graph of $B_n$ with vertices $1_{B_n}$ and positive braids $x, y\in B_n^+$, and with edges the mixed normal forms of $x$, of $y$, and of $x^{-1}y$. Projecting this triangle to the curve complex as above, and assuming the first conjecture to be true, we must obtain a $\delta$-thin triangle (since the curve complex is Gromov-hyperbolic~\cite{MM1,HPW,PrzSisto}). Now the obvious question is: how can we characterise, in terms of the three normal forms, the position of the quasi-center (the point which is close to all three edges)? There is an obvious conjectural answer to this question: the quasi-center should be at $(x\wedge y).c_0$, where $x\wedge y$ denotes the greatest common divisor of $x$ and $y$, in the sense of Garside theory~\cite{B-G-GM}. Moreover, the edges from $c_0$ to~$x.c_0$ and from $c_0$ to~$y.c_0$ should stay close to each other (and to the path from $c_0$ to $(x\wedge y).c_0$) up to length $\mathrm{length}(x\wedge y)$, and diverge afterwards. This is our second conjecture. The aim of the present paper is not to prove either of the above two conjectures, but rather to show what happens if we ``squash down'' the Cayley graph of $B_n$ in such a way that the second conjecture is forced to hold. It turns out that the resulting space, which we call the ``additional length complex'' $\mathcal C_{AL}$, is $\delta$-hyperbolic, and shares many properties with the curve complex -- we conjecture that the two are actually quasi-isometric. What is remarkable is that our construction of $\mathcal C_{AL}$ does not actually mention curves on a surface, and can be carried out analogously for any finite type Garside structure on a finite type Garside group. Garside groups are a family of groups with good combinatorial and algorithmic properties, containing e.g.\ Artin groups of spherical type~\cite{DehornoyParis,DehornoyGarside,GarsideFoundations}. For a particularly readable introduction which contains almost all prerequisites for this paper, see~\cite[Section 1.1]{B-G-GM}.
For the rest of the paper, whenever we talk about a Garside group, we mean a Garside group of finite type equipped with a specific Garside structure. Thus any Garside group $G$ acts on a metric space $\mathcal{C}_{AL}(G)$. The results of this paper can be summarized as follows. {\bf Theorem } {\sl (A) For any Garside group~$G$, the space $\mathcal C_{AL}(G)$ is $60$-hyperbolic. Moreover, normal form words in~$G$ give rise to paths in $\mathcal C_{AL}$ which are at distance at most 39 from geodesics connecting the endpoints.} {\sl (B) If $G$ is the braid group $B_n$, equipped with the classical Garside structure, then $\mathcal C_{AL}$ is of infinite diameter. Moreover, periodic and reducible braids act elliptically, and there exists a pseudo-Anosov braid which acts loxodromically.} \bigskip The plan of the paper is as follows: in Section~\ref{S:MainResult}, after recalling a few basic facts about Garside groups, we construct the additional length complex and prove that it is $\delta$-hyperbolic. In Section~\ref{S:B_n} we prove that periodic and reducible braids act elliptically on the additional length complex associated with the classical Garside structure of the braid group, and that this complex is of infinite diameter. \section{The main result}\label{S:MainResult} In this section we shall prove that every Garside group~$G$ acts on a $\delta$-hyperbolic complex which we call the \emph{additional length complex} of $G$ (\emph{complexe des longueurs suppl\'ementaires} in French). The key ingredient for proving hyperbolicity is a ``Guessing Geodesics Lemma'' of Bowditch~\cite{Bowditch}. The definition of the complex rests on the technical notion of \emph{absorbable element}; we start with the definition and first properties of those. \subsection{Absorbable elements}\label{SS:Absorbable} In what follows, $(G,P,\Delta)$ is a Garside group with positive monoid~$P$, Garside element~$\Delta$, and $\tau$~denotes the inner automorphism of $G$ given by $\tau(x)=\Delta^{-1}x\Delta$. In particular $P$ is \emph{atomic}, i.e. it is generated by the set of elements $a\in P$ such that the relation $a=uv$ with $u,v\in P$ implies $u=1$ or $v=1$; these elements are called \emph{atoms}. We assume the reader to be familiar with the prefix and suffix orders $\preccurlyeq$ and $\succcurlyeq$, the left/right-weightedness, the left/right gcd ($\wedge$/$\wedge^{\hspace{-0.7mm}\Lsh\hspace{0.7mm}}$) and lcm ($\vee$/$\vee^{\hspace{-0.7mm}\Lsh\hspace{0.7mm}}$) and the left/right normal form -- see e.g.\ \cite[Section 1.1]{B-G-GM}. We recall that to each element $x$ of $G$ are associated three integers: its infimum $\inf(x)=\max\{r \in \mathbb Z,\ \Delta^r\preccurlyeq x\}$, its supremum $\sup(x)=\min\{s\in \mathbb Z,\ x\preccurlyeq \Delta^s\}$ and its canonical length $\ell(x)=\sup(x)-\inf(x)$. These are related to the left normal form as follows: if $x$ has left normal form $x=\Delta^p x_1\ldots x_r$, then $p$, $p+r$ and $r$ are the infimum, the supremum and the canonical length of $x$, respectively. We also recall the notion of rigidity: an element $x$ of $G$ with left normal form $x=\Delta^p x_1\ldots x_r$ is said to be \emph{rigid} if the pair $\left(x_r,\tau^{-p}(x_1)\right)$ is left-weighted; roughly speaking, this means that the left normal form written cyclically is left-weighted everywhere.
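These notions are easy to experiment with in the simplest Garside structure, the free abelian group $\mathbb Z^n$ with $P=\mathbb N^n$ and $\Delta=(1,\ldots,1)$, where $x\preccurlyeq y$ is the componentwise order, $\inf$ and $\sup$ are the minimal and maximal coordinates, and the left normal form is obtained greedily. The following toy sketch in Python (of course not braid-group code) illustrates this:
\begin{verbatim}
# Toy model of Garside normal forms in Z^n with its classical structure
# (P = N^n, Delta = (1,...,1)): simple elements are 0/1 vectors, and the
# left normal form of a positive element stacks them greedily.
import numpy as np

def left_normal_form(a):
    """Factors of a in N^n: x_i = [a >= i] for i = 1..max(a)."""
    a = np.asarray(a)
    return [(a >= i).astype(int) for i in range(1, a.max() + 1)]

def inf_sup(a):
    a = np.asarray(a)
    return a.min(), a.max()  # inf = Delta-power prefix, sup = no. of factors

a = np.array([3, 1, 0, 2])
print("factors:", [f.tolist() for f in left_normal_form(a)])
p, s = inf_sup(a)
print(f"inf = {p}, sup = {s}, canonical length = {s - p}")
\end{verbatim}
Note that consecutive factors have decreasing support, which is exactly the left-weightedness condition in this abelian setting.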
Also, we recall that to each \emph{simple element} $s$ of~$G$ (that is, $s$~is a positive left and right divisor of~$\Delta$), is associated its \emph{right complement}: $\partial s=s^{-1}\Delta$, which is also a simple element. We extend this notion of right complement to each element $y$ of~$G$ with infimum~0: $\partial y=y^{-1}\Delta^{\sup(y)}$. In terms of the left normal form, if $y=y_1\ldots y_r$, then the normal form of $\partial y$ is $y'_r\ldots y'_1$ where $y'_i=\tau^{r-i}(\partial y_i)$, for $i=1,\ldots, r$. The following formulae will be helpful and are well-known, see~\cite{Elrifai-Morton}. For any $x,y\in G$, $p\in \mathbb Z$, $$\inf(\Delta^p x)=p+\inf(x),\ \ \sup(\Delta^px)=p+\sup(x).$$ $$\inf(xy)\geqslant \inf(x)+\inf(y),\ \ \ \sup(xy)\leqslant \sup(x)+\sup(y).$$ $$\inf(y^{-1})=-\sup(y),\ \ \ \sup(y^{-1})=-\inf(y).$$ \begin{definition}\label{D:Absorbable} We say that an element $y$ of $G$ is \emph{absorbable} if two conditions are satisfied: \begin{itemize} \item $\inf(y)=0$ or $\sup(y)=0$, \item there exists some $x\in G$ such that $$\begin{cases} \inf(xy)=\inf(x)\ \text{ \ and} \\ \sup(xy)=\sup(x). \end{cases}$$ \end{itemize} In this case we also say more precisely that $y$ is absorbable by $x$ or that $x$ \emph{absorbs} $y$. \end{definition} \begin{remark} Definition~\ref{D:Absorbable} is very practical for our purposes, but it might not be the most suitable one for generalizing our techniques to other frameworks. We suggest another possible definition: say an element $y$ of $G$ is \emph{absorbable}$'$ if there exists an $x\in G$ such that for every initial segment $y^{(i)}=y_1\ldots y_i$ of the mixed normal form $y=y_1\ldots y_l$ we have: $$\inf(xy^{(i)})=\inf(x)\ \text{ \ and \ } \sup(xy^{(i)})=\sup(x).$$ (Note that we dropped the requirement that $\inf(y)=0$ or $\sup(y)=0$.) This alternative definition is not quite equivalent to Definition~\ref{D:Absorbable}, but almost: every absorbable element is also absorbable$'$, and conversely, every absorbable$'$ element is the product of at most two absorbable elements, namely the positive and the negative parts of its mixed normal form. \end{remark} The following are immediate consequences of Definition~\ref{D:Absorbable}: \begin{lemma}\label{L:BasicAbsorb} Let $y$ be an element of $G$. \begin{itemize} \item[(i)] If $y$ is absorbable then there exist $k\in \mathbb N$ and simple elements $y_1,\ldots, y_k$ so that the left normal form of $y$ is $y_1\ldots y_k$ or $\Delta^{-k}y_1\ldots y_k$. \item[(ii)] $y$ is absorbable if and only if $y^{-1}$ is absorbable. This is also equivalent to $\tau(y)$ and $\tau(y^{-1})$ being absorbable. \end{itemize} \end{lemma} \begin{proof} (i) This is just a rewriting of the condition that $\inf(y)=0$ or $\sup(y)=0$ from Definition~\ref{D:Absorbable}. (ii) Because $\inf(y^{-1})=-\sup(y)$ and $\sup(y^{-1})=-\inf(y)$, the first condition for absorbability is satisfied by both $y$ and $y^{-1}$ or by neither. Moreover, if $y$ is absorbable by $x$, then $\inf(xy)=\inf(x)=\inf((xy)y^{-1})$ and $\sup(xy)=\sup(x)=\sup((xy)y^{-1})$. This shows that $y^{-1}$ is also absorbable, by $xy$. For later reference, we make the additional observation that $x$ and $xy$, which absorb $y$ and $y^{-1}$, respectively, have the same sup and the same inf. For the rest of statement (ii), just note that $y$ is absorbable by $x$ if and only if $\tau(y)$ is absorbable by $\tau(x)$. \end{proof} We shall see in Example~\ref{I:sInvDelta}(5) that the complement~$\partial y$ of an absorbable element~$y$ is not necessarily absorbable.
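In the abelian toy model introduced above, Definition~\ref{D:Absorbable} can be tested by brute force, using the fact (Lemma~\ref{L:absorblength} below) that a positive $y$ with $\inf(y)=0$ is absorbable if and only if some $x$ with $\inf(x)=0$ and $\sup(x)=\sup(y)$ absorbs it. The following sketch recovers, for instance, that multiples of a standard generator of $\mathbb Z^3$ are absorbable:
\begin{verbatim}
# Brute-force absorbability test in the toy Garside structure (Z^n, N^n, Delta).
# Here inf/sup of an element are the min/max of its coordinates, and
# multiplication is coordinatewise addition.
import itertools
import numpy as np

def absorbable(y):
    y = np.asarray(y)
    assert y.min() == 0, "test assumes inf(y) = 0"
    k, n = y.max(), len(y)
    for x in itertools.product(range(k + 1), repeat=n):
        x = np.asarray(x)
        if x.min() == 0 and x.max() == k:
            if (x + y).min() == 0 and (x + y).max() == k:
                return True, x          # x absorbs y
    return False, None

print(absorbable([2, 0, 0]))  # multiple of a generator in Z^3: absorbable
print(absorbable([1, 1, 0]))  # not absorbable in Z^3
\end{verbatim}
Running the same test in $\mathbb Z^2$ shows that $(2,0)$ is \emph{not} absorbable there, in agreement with the requirement $n\geqslant 3$ in Example (1) below.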
The following observation indicates that being absorbable may be a fairly rare property. \begin{lemma}\label{L:subword} Any positive subword of a positive absorbable element is absorbable. That is, suppose that a positive absorbable element~$y$ of~$G$ can be written as a product of three positive elements $y=uvw$ (with possibly $u=1$ or $w=1$). Then $v$ is absorbable. \end{lemma} \begin{proof} First notice that $\inf(v)=0$. Let $x$ be such that $\inf(xy)=\inf(x)$ and $\sup(xy)=\sup(x)$. Then we claim that $\inf((xu)v)=\inf(xu)$ and $\sup((xu)v)=\sup(xu)$, implying that $v$ is absorbable. In order to prove the claim, we recall the inequalities $\inf(a)\leqslant \inf(ab)$ and $\sup(a)\leqslant \sup(ab)$ for any $a,b\in G$ with $b$ positive. They imply $$\inf(x)\leqslant \inf(xu)\leqslant \inf(xuv)\leqslant \inf(xuvw)=\inf(x),$$ $$\sup(x)\leqslant \sup(xu)\leqslant \sup(xuv)\leqslant \sup(xuvw)=\sup(x).$$ \end{proof} \begin{lemma}\label{L:absorblength} Let $y$ be an absorbable element with canonical length $k$. Then there exists $x$ with infimum 0 and supremum $k$ which absorbs $y$. Moreover $k$ is the smallest possible number of factors in an element with infimum 0 absorbing $y$. \end{lemma} Before giving the proof, we mention that Lemma~\ref{L:absorblength} yields, in principle, an algorithm for testing whether any given element~$y$ of $G$ is absorbable. It suffices to test, for every~$x\in G$ with $\inf(x)=0$ and $\sup(x)=\ell(y)$, whether $x$ absorbs~$y$. We do not know if there exists a polynomial-time algorithm for testing absorbability. \begin{proof}[Proof of Lemma~\ref{L:absorblength}] If $k=0$ there is nothing to prove. Let $y$ be absorbable by $\hat x=\Delta^px$, with $\inf(x)=0$; then $\inf(xy)=\inf(\hat xy)-p=\inf(\hat x)-p=\inf(x)$ and similarly for the supremum, showing that $y$ is absorbable by $x$. Thus $y$ is absorbed by an element $x$ with $\inf(x)=0$. We have to show that we can take $x$ with the same length~$k$ as~$y$, and that this $k$ is minimal. We can restrict our attention to the case where $y$ is positive, i.e.\ $\inf(y)=0$; this is because $y$ and $y^{-1}$ can be absorbed by elements of the same length, as seen in the proof of Lemma~\ref{L:BasicAbsorb}. From now on we assume that $y$ is positive and $y=y_1\ldots y_k$ is its left normal form. The absorbing element $x$ with $\inf(x)=0$ has length at least~$k$, because $\sup(x)=\sup(xy)\geqslant \sup(y)=k$. We have to prove the existence of such an~$x$ with length exactly~$k$. More precisely, if $x=x_1\ldots x_l \, x_{l+1}\ldots x_{l+k}$ absorbs~$y$, we will show that so does $\tilde x=x_{l+1}\ldots x_{l+k}$. First, by hypothesis $\inf(xy)=\inf(x)=0$, so $$0= \inf(\tilde x)\leqslant\inf(\tilde x y)\leqslant \inf(x y)=\inf(x)=0$$ and the condition on the infima is satisfied. It remains to be shown that for all $i=1,\ldots, k$ the left normal form of $\tilde x y_1\ldots y_i$ has only $k$ letters. This is based on the following two observations. Firstly, if $z=z_1\ldots z_r$ is in left normal form with $z_1\neq \Delta$ and if $s$ is a simple element with $\inf (zs)=0$, then the left normal form of $zs$ also has $r$ letters if and only if $z_rs$ is simple. Otherwise this left normal form has $r+1$ letters. Moreover, in the former case, if $r\geqslant 2$, for $j=2,\ldots, r$, the $j$th letter of the left normal form of $zs$ is fully determined by $s$ and $z_{j-1},\ldots, z_r$ (all the preceding letters of~$z$ do not enter into consideration); this is our second observation.
These two facts follow by inspection of the procedure for calculating normal forms explained in~\cite{Gebhardt-GM}, Proposition~1. Since $\sup(xy_1)=\sup(x)$, the first observation tells us that $x_{l+k}y_1$ must be simple, which in turn implies that $\sup(\tilde xy_1)=\sup(\tilde x)$. This terminates the proof if $k=1$. Moreover, if $k\geqslant 2$, the second observation implies that for $j= 2,\ldots, k$, the $j$th letter of the left normal form of $\tilde x y_1$ coincides with the $l+j$th letter of the left normal form of $x y_1$. Applying again the first observation together with the absorbability of $y$ by $x$, hence of $y_2$ by $xy_1$, we see that $\sup(\tilde x y_1y_2)=k$ and we are done if $k=2$. Moreover (if $k\geqslant 3$), for $j=3,\ldots, k$, thanks to the second observation, the $j$th letter of the normal form of $\tilde x y_1 y_2$ coincides with the $l+j$th letter of the normal form of $x y_1 y_2$. Continuing inductively, we obtain the desired result that $\sup(\tilde x y_1\ldots y_i)=k$, for all $i=1,\ldots, k$. \end{proof} \begin{example} \begin{itemize} \item[(1)] Whenever $n\geqslant 3$, in the ``classical'' Garside structure on the free abelian group $(\mathbb Z^n,\mathbb N^n,(1,1,\ldots,1))$, any multiple of a standard generator is absorbable. \item[(2)] In the braid group $B_4$ with its classical Garside structure, the braid $y=\sigma_1^2\sigma_2^2\sigma_3^2\sigma_2^2\sigma_1$ is absorbable, e.g.\ by $x=\sigma_1\sigma_2^4\sigma_1^2\sigma_2\sigma_3$: we calculate $$\sigma_1\sigma_2\ .\ \sigma_2\ .\ \sigma_2\ .\ \sigma_2\sigma_1\ .\ \sigma_1\sigma_2\sigma_3 \ \cdot \ \ \sigma_1\ .\ \sigma_1\sigma_2\ .\ \sigma_2\sigma_3\ .\ \sigma_3\sigma_2\ .\ \sigma_2\sigma_1 =\phantom{OOOOOOO}$$ $$\phantom{OOOOOOOO}= \sigma_1\sigma_2\sigma_1\ .\ \sigma_1\sigma_2\sigma_1\sigma_3\ .\ \sigma_1\sigma_2\sigma_3\sigma_2\ .\ \sigma_2\sigma_3\sigma_2\ .\ \sigma_2\sigma_3\sigma_2\sigma_1$$ Notice that $y$ is pseudo-Anosov and rigid. This is the most surprising example of an absorbable braid we know, and the longest non-reducible one. \item[(3)] The length 2 braid $(\sigma_1\sigma_3)^2$ in~$B_4$ is not absorbable, as Lemma~\ref{L:absorblength} shows, together with an inspection of all braids with infimum 0 and supremum 2. By Lemma~\ref{L:subword}, neither is any 4-braid with infimum 0 and left normal form $x_1\ldots x_r$ such that for some $i=1,\ldots, r-1$, $x_i\succcurlyeq \sigma_1\sigma_3$ and $\sigma_1\sigma_3\preccurlyeq x_{i+1}$. \item[(4)] In any Garside group, if $s$ is an atom, then the simple element $y=s^{-1}\Delta$ is not absorbable. Indeed, if $y$~were absorbable then, by Lemma~\ref{L:absorblength}, it could be absorbed by a \emph{simple } element~$x\neq 1$. We would then have $xy \prec \Delta$. Since left divisors of $\Delta$ are also right divisors of~$\Delta$, this means that there exists a simple element $a\neq 1$ satisfying $axy=\Delta$. By combining this with the equality $sy=\Delta$, we obtain $ax=s$, contradicting the hypothesis that~$s$ is an atom. \item[(5)]\label{I:sInvDelta} As an application of the previous example, in the braid group $B_n$ with its classical Garside structure, the braid $\sigma_i^{-1}\Delta$, for any $i$ between 1 and $n-1$, is not absorbable (even though it is the complement of the absorbable braid $\sigma_i$). \end{itemize} \end{example} \subsection{The additional length complex} \begin{definition}\label{D:AddLengthCx} Suppose $G$ is a Garside group, the group of fractions of a Garside monoid $(P,\Delta)$.
We define the \emph{additional length complex} $\mathcal C_{AL}(G,P,\Delta)$ (generally abbreviated as $\mathcal C_{AL}(G)$, or even $\mathcal C_{AL}$) to be the following (usually locally infinite) connected graph. \begin{itemize} \item The vertices are in correspondence with $G/\langle \Delta\rangle$, that is, the cosets $g\Delta^{\mathbb Z}=\{g\Delta^z \ | \ z\in\mathbb Z\}$. For each vertex $v$ we have a unique distinguished representative with infimum 0, which we denote~$\underline v$. \item Two vertices $v=\underline{v}\Delta^{{\mathbb Z}}$ and $w=\underline{w}\Delta^{{\mathbb Z}}$ of $\mathcal C_{AL}$ are connected by an edge if one of the following happens: \begin{enumerate} \item There exists a non-trivial, non-$\Delta$ simple element $m$ so that the element $\underline{v} m$ represents the coset $w$. This is equivalent to saying that there is a simple element $m'\neq 1,\Delta$ such that $\underline{w}m'$ belongs to the coset $v$. (This first type of edges is as in Bestvina's normal form complex, see~\cite{CMW}.) \item There exists an absorbable element $y$ of~$G$ so that $\underline{v}y$ belongs to the coset~$w$. This is equivalent to saying that there is an absorbable element $y'$ of $G$ so that $\underline{w}y'$ belongs to the coset $v$. \end{enumerate} \end{itemize} As usual, a metric structure on the above complex is given simply by declaring that every edge is of length 1. We call this metric the \emph{additional length metric}. The distance between two vertices $v$ and $w$ in $\mathcal C_{AL}$ will be denoted $d_{AL}(v,w)$. The group~$G$ acts on the left by isometries on this complex. \end{definition} \begin{remark} (a) The idea of this definition is that in the additional length complex, a group element $y$ is close to the identity if ``multiplying by the element~$y$ does not necessarily add any length'' -- hence the name of the complex. (b) If, in Definition~\ref{D:AddLengthCx}, we leave out the second type of edges, then we obtain precisely the $1$-skeleton of the Bestvina normal form complex as described in~\cite{CMW}. Thus the additional length complex can be thought of as a squashing of the Bestvina normal form complex. \end{remark} Next we shall associate to each pair of vertices $v,w$ of $\mathcal C_{AL}$ a preferred path $A(v,w)$ between $v$ and $w$: \begin{definition} (See Definition 6.1 in~\cite{CMW}.) Let $v=\underline{v}\Delta^{{\mathbb Z}}$ and $w$ be two vertices of~$\mathcal C_{AL}$. \begin{itemize} \item The \emph{preferred path} $A(1,v)$ is the connected subgraph of $\mathcal C_{AL}$ given by the left normal form of $\underline v$. That is, if $v_1\ldots v_{\sup(\underline v)}$ is the left normal form of $\underline v$, $A(1,v)$ is the path starting at $1$ whose edges are successively labeled $v_1,\ldots,v_{\sup(\underline v)}$; for $i=0,\ldots, \sup(\underline v)$, the distinguished representative of the $i$th vertex along $A(1,v)$ is $\Delta^{i}\wedge \underline v$. \item The preferred path $A(v,w)$ from $v$ to $w$ is given by the translation on the left by $\underline{v}$ of the preferred path $A(1,(\underline{v}^{-1}\underline{w})\Delta^{\mathbb Z})$. That is, if $x=x_1\ldots x_r$ is the left normal form of the distinguished representative of $({\underline v}^{-1}\underline w)\Delta^{{\mathbb Z}}$, $A(v,w)$ is the path of length $r$ starting at $v$ whose edges are successively labeled $x_1,\ldots, x_r$.
\end{itemize} \end{definition} Note that the path $A(v,w)$ uses only the edges of $\mathcal C_{AL}$ coming from the Cayley graph of~$G$ (with respect to the divisors of $\Delta$), not those coming from absorbable elements, and that the length of the path may well be much larger than the distance between $v$ and $w$. As normal forms are unique, if two vertices $v$ and $w$ are connected by a path $\gamma$ whose edges are labeled by simple elements $s_1,\ldots, s_r$ satisfying that for $i=1,\ldots, r-1$, $(s_i,s_{i+1})$ is a left-weighted pair, then $\gamma=A(v,w)$. In order to get a more detailed picture of this family of paths, we claim the following: \begin{lemma}\label{L:pgcd} Let $v=\underline{v}\Delta^{\mathbb Z}$ and $w=\underline{w}\Delta^{{\mathbb Z}}$ be two vertices of $\mathcal C_{AL}$. Then $A(v,w)$ is the concatenation of the paths $A(v,(\underline{v}\wedge\underline{w})\Delta^{\mathbb Z})$ and $A((\underline{v}\wedge\underline{w})\Delta^{\mathbb Z},w)$. I.e., the preferred path between $v$ and $w$ passes through the vertex $(\underline{v}\wedge \underline{w})\Delta^{{\mathbb Z}}$. \end{lemma} \begin{proof} Set $d=\underline v \wedge \underline w$. We have positive elements $a$ and $b$ such that $\underline v=da$, $\underline w=db$ and $a\wedge b=1$. By definition $A(v,w)$ is the left translate by $\underline v$ of the path $A(1,(\underline v^{-1}\underline w)\Delta^{\mathbb Z})$, which connects the identity vertex with the vertex represented by $\underline v^{-1}\underline w$. We shall see that the distinguished representative of the latter vertex is the element $\partial a\cdot \tau^r(b)$, where $r$ is the supremum of $a$. Indeed, we have $$\partial a\cdot\tau^r(b)=a^{-1}\Delta^r\cdot\tau^r(b)=a^{-1}b\Delta^r=(a^{-1}d^{-1})(db)\Delta^r=\underline v^{-1}\underline w\Delta^r,$$ which shows that our element represents the correct vertex. Moreover, if we write the left normal forms as $a=a_1\ldots a_r$ and $b=b_1\ldots b_s$, we have $$\partial a\cdot\tau^r(b)= \partial a_r\ldots \tau^{r-1}(\partial a_1)\cdot\tau^r(b_1)\ldots \tau^r(b_s),$$ which is in left normal form as written because, as $a\wedge b=1$, $$(\tau^{r-1}(\partial a_1),\tau^r(b_1))$$ is a left-weighted pair. Thus $\inf(\partial a\cdot\tau^r(b))=0$ and this shows that $\partial a\cdot\tau^{r}(b)$ is the desired distinguished representative. This says moreover that the path $A(1,\underline v^{-1}\underline w\Delta^{\mathbb Z})$ is the concatenation of the paths $A(1,\partial a\Delta^{{\mathbb Z}})$ and $A(\partial a\Delta^{\mathbb Z},\partial a\tau^r(b)\Delta^{{\mathbb Z}})$, that is, of $A(1,a^{-1}\Delta^{{\mathbb Z}})$ and $A(a^{-1}\Delta^{{\mathbb Z}},{\underline{v}}^{-1}\underline w \Delta^{{\mathbb Z}})$. After translation by $\underline v$, using the equality $\underline va^{-1}=d$, we see that our path $A(v,w)$ is the concatenation of $A(v,d\Delta^{{\mathbb Z}})$ and $A(d\Delta^{{\mathbb Z}},w)$, as we wanted to show. \end{proof} \begin{lemma}\label{L:Symmetry} The preferred paths are symmetric: for any vertices $v,w$ of $\mathcal C_{AL}$, we have $A(v,w)=A(w,v)$. \end{lemma} First, note that the lemma has nothing to do with our strange metric; the analogous result is also true in Bestvina's normal form complex (see Lemma 6.4 in~\cite{CMW}). \begin{proof} As in the proof of Lemma~\ref{L:pgcd}, set $d=\underline v\wedge \underline w$. We have two elements $a,b$ of $G$, with $\inf(a)=\inf(b)=0$, $\underline v=da$, $\underline w=db$ and $a\wedge b=1$. Set moreover $r=\sup(a)$ and $s=\sup(b)$.
By definition, $A(v,w)$ is the left translate by $\underline v$ of the path $A(1,(\underline v^{-1}\underline w)\Delta^{{\mathbb Z}})$ and we have seen in the proof of Lemma~\ref{L:pgcd} that the latter is given by the left normal form of $\partial a\cdot \tau^r(b)$. Similarly, $A(w,v)$ is the left translate by $\underline w$ of the normal form of $\partial b\cdot \tau^s(a)$. First we note that both paths have the same length, namely $r+s$. For $0\leqslant i\leqslant r+s$, we will show that $\underline v(\Delta^i\wedge \partial a\cdot\tau^r(b))$ represents the same vertex as $\underline w(\Delta^{r+s-i}\wedge \partial b\cdot\tau^s(a))$, hence showing the lemma. In other words, when traveling along the path $A(v,w)$ or along the path $A(w,v)$, one meets exactly the same vertices of $\mathcal C_{AL}$, but in the reverse order. First, assume $0\leqslant i<r$. On the one hand, $$\underline v(\Delta^i\wedge \partial a\cdot\tau^r(b))=da(\Delta^i\wedge \partial a)=da_1\ldots a_{r-i}\Delta^i.$$ On the other, $$\underline w(\Delta^{r+s-i}\wedge \partial b\cdot\tau^s(a))=db\partial b(\Delta^{r-i}\wedge \tau^s(a))=da_1\ldots a_{r-i}\Delta^s.$$ Next, assume that $r< i\leqslant r+s$, that is $i=r+j$, for $0< j\leqslant s$. On the one hand, $$\underline v(\Delta^{r+j}\wedge \partial a\cdot\tau^r(b))=da\partial a(\Delta^j\wedge \tau^r(b))=d b_1\ldots b_j\Delta^r.$$ On the other, $$\underline w(\Delta^{r+s-(r+j)}\wedge \partial b\cdot\tau^s(a))=db(\Delta^{s-j}\wedge \partial b)=db_1\ldots b_j\Delta^{s-j}.$$ Finally, if $i=r$, we have $\underline v(\Delta^r\wedge \partial a\cdot \tau^r(b))=da\partial a=d\Delta^r$ and $\underline w(\Delta^{s}\wedge \partial b \cdot \tau^s(a))=db\partial b=d\Delta^s$. \end{proof} Here is our main result: \begin{theorem}\label{T:main} For any Garside group $(G,P,\Delta)$, the complex $\mathcal C_{AL}$ is 60-hyperbolic. Moreover, the family of paths $A(v,w)$ with $v,w\in G/\langle \Delta\rangle$ forms a family of uniform unparametrized quasi-geodesics in the complex: for any $v,w\in G/\langle \Delta\rangle$, the Hausdorff distance between $A(v,w)$ and a geodesic from $v$ to $w$ is bounded above by~39. \end{theorem} \begin{remark} Note that the hyperbolicity constant is bounded independently of~$(G,P,\Delta)$. \end{remark} \begin{proof}[Proof of Theorem \ref{T:main}] First recall Proposition 3.1 in~\cite{Bowditch} (the ``guessing geodesics lemma''): \begin{proposition}\label{P:GuessingGeodesics} Given $h\geqslant 0$, there is some $k\geqslant 0$ with the following property. Suppose that $X$ is a connected graph and that for each pair of vertices $x,y$ of $X$, we have associated a connected subgraph $A(x,y)\subseteq X$, with $x,y \in A(x,y)$. Suppose that \begin{itemize} \item For all vertices $x,y$ of $X$ connected by an edge, $A(x,y)$ has diameter in $X$ at most $h$. \item For all vertices $x,y,z$ of $X$, $A(x,y)$ is contained in an $h$-neighborhood of the union $A(x,z)\cup A(y,z)$. \end{itemize} Then $X$ is $k$-hyperbolic. Moreover, if $m$ is any positive real number so that $2h(6+\log_2(m+2))\leqslant m$, we can take any number $k\geqslant \frac{3}{2}m-5h$. In addition, for all vertices $x,y$ of $X$, the Hausdorff distance between $A(x,y)$ and any geodesic between $x$ and $y$ is bounded above by $m-4h$. \end{proposition} We will show that the hypotheses of Proposition~\ref{P:GuessingGeodesics} are satisfied with $X=\mathcal C_{AL}$ and $h=2$. Then the inequality $2\cdot 2\cdot (6+\log_2(m+2))\leqslant m$ holds for the positive number $m=46.5$.
This yields the estimate $k=60$ and the statement about the unparametrized quasi-geodesic paths. First we look at the first condition: preferred paths between adjacent vertices have uniformly bounded diameter in $\mathcal C_{AL}$. \begin{lemma}\label{L:PreferredPathsNoLoops} Let $v,w$ be two vertices of $\mathcal C_{AL}$ such that $d_{AL}(v,w)=1$. Then the diameter in $\mathcal C_{AL}$ of $A(v,w)$ is equal to~1. \end{lemma} \begin{proof} We may assume that $v=1$. If $\sup(\underline w)=1$, then there is nothing to prove: $A(1,w)$ just consists of an edge with two vertices. Otherwise, $\sup(\underline w)>1$. As there is an edge between 1 and $w$, there exists an absorbable element $y$ so that $y=\underline w\Delta^{k}$, for some $k\in {\mathbb Z}$. By definition of absorbable elements, this implies either $k=0$ (in which case $y=\underline w$ is absorbable and positive), or $k=-\sup(\underline w)$. In the first case, $A(1,w)$ is given by the left normal form of $y=\underline w$; by Lemma~\ref{L:subword}, this has diameter 1 in $\mathcal C_{AL}$. In the second case, we look at the path $A(w,1)$ which is the translate by $\underline w$ of the path $A(1,\underline w^{-1}\Delta^{{\mathbb Z}})$. The latter corresponds to the left normal form of the element $\partial \underline w$. But~$y$, and thus $y^{-1}$, are absorbable; and the equality $\partial \underline w=\tau^{-k}(y^{-1})$ shows that $\partial \underline w$ is also absorbable. Therefore, again by Lemma~\ref{L:subword}, the path $A(1,\underline w^{-1}\Delta^{{\mathbb Z}})$ has diameter~1 in $\mathcal C_{AL}$ as we needed to show. \end{proof} We now proceed to show the second condition: the 2-thinness of any triangle whose edges are our preferred paths. \begin{lemma}\label{L:2-thinness} Let $u,v,w$ be three vertices of $\mathcal C_{AL}$. The triangle in $\mathcal C_{AL}$ with vertices $u$, $v$ and $w$, and with edges $A(u,v)$, $A(v,w)$ and $A(u,w)$ is $2$-thin: each edge is at Hausdorff distance at most 2 from the union of the other two edges. \end{lemma} \begin{proof}[Proof of Lemma~\ref{L:2-thinness}] For the proof, first notice that without loss of generality we can assume that $u=1$. We then set, as in the above proofs, $d=\underline v\wedge \underline w$. We consider the elements $a,b$ of $G$ satisfying $\underline v=da$, $\underline w=db$ and $a\wedge b=1$. We also set $k=\sup(\underline v)$, $l=\sup(\underline w)$, $r=\sup(a)$, $s=\sup(b)$ and $p=\sup(d)$. \begin{figure}[htb] \begin{center}\includegraphics[width=13cm]{Thinness.pdf} \end{center} \caption{(a) A triangle with vertices $1$, $v$ and $w$ and normal form edges. \ (b) How the triangle is squashed in $\mathcal C_{AL}$.} \label{F:2-thinness} \end{figure} \begin{lemma}\label{L:InitialSegments} The initial segments of length $p$ of $A(1,v)$ and $A(1,w)$ are at Hausdorff distance at most 2 in $\mathcal C_{AL}$. \end{lemma} \begin{proof} First recall that for any integer $i=1,\ldots,k$, the $i$th step on the preferred path $A(1,v)$ is at the vertex $(\underline v\wedge \Delta^{i})\Delta^{{\mathbb Z}}$, whose distinguished representative is exactly $\underline v\wedge \Delta^i$. Notice that $d$ itself is the distinguished representative of $d\Delta^{\mathbb Z}$. It is sufficient to prove that the initial segment of $A(1,v)$ of length $p$ is at Hausdorff distance at most 1 from $A(1,d\Delta^{{\mathbb Z}})$ in $\mathcal C_{AL}$. 
Specifically, we claim that the respective $i$th steps of $A(1,v)$ and of $A(1,d\Delta^{{\mathbb Z}})$ are at distance at most 1 for any $i=1,\ldots, p$, that is $$d_{AL}((\underline v \wedge \Delta^i)\Delta^{\mathbb Z},(d\wedge \Delta^i)\Delta^{\mathbb Z})\leqslant 1.$$ But now observe that $d\wedge \Delta^i\preccurlyeq \underline v\wedge \Delta^i$, so that we can find a positive element~$y$ such that $(d\wedge \Delta^i)y=\underline v \wedge \Delta^i$. This element $y$ is absorbable by $d\wedge \Delta^i$ as $\sup(d\wedge \Delta^i)=\sup(\underline v \wedge \Delta^i)=i$ and $\inf(d\wedge \Delta^i)=\inf(\underline v \wedge \Delta^i)=0$. This shows the claim. \end{proof} Lemma~\ref{L:InitialSegments} says that in our triangle, the two edges emanating from any vertex have distinguished initial segments (possibly consisting of a single vertex) which stay at distance at most 2 from each other; moreover, the respective end points of these initial segments are at distance at most 1 from a common vertex on the third edge. We shall now see that for each edge of our triangle, the respective distinguished initial segments emanating from its two extremities actually overlap (or at least share a common vertex on the given edge). This is a consequence of the following lemma. \begin{lemma}\label{L:Overlap} We have $\sup(\partial \underline v\wedge \partial a\cdot \tau^r(b))\geqslant r$. \end{lemma} \begin{proof} It suffices to exhibit a common prefix of $\partial \underline v$ and $\partial a$ of length $r$. Our candidate is $U$, which we define to be the product of the $r$ first factors in the \emph{right} normal form of $\partial \underline v$. In other words, we have $U=\partial(\Delta^r\wedge^{\hspace{-0.7mm}\Lsh\hspace{0.7mm}} \underline v)$ (where $\wedge^{\hspace{-0.7mm}\Lsh\hspace{0.7mm}}$ denotes the right gcd in~$G$). It is by construction a prefix of $\partial \underline v$ of length $r$. It remains to be shown that it is also a prefix of $\partial a$. But notice that $a$, as a suffix of $\underline v$ of length $r$, is certainly a suffix of $\Delta^r\wedge^{\hspace{-0.7mm}\Lsh\hspace{0.7mm}} \underline v$, so that we can find a positive $R$ satisfying $\Delta^r\wedge^{\hspace{-0.7mm}\Lsh\hspace{0.7mm}} \underline v=Ra$. But now, $$\partial a=a^{-1}\Delta^r=(\Delta^r\wedge^{\hspace{-0.7mm}\Lsh\hspace{0.7mm}} \underline v)^{-1}R\Delta^r=U\tau^r(R).$$ This shows that $U$ is also a prefix of $\partial a$. \end{proof} Along the edge $A(1,v)$, we have on the one hand a distinguished initial segment emanating from 1 which has length $p$. On the other hand, Lemma \ref{L:Overlap} says that the distinguished initial segment emanating from $v$ has length at least $r$. Because $\underline v=da$, the length $k$ of the edge $A(1,v)$ is at most $p+r$. Hence the two distinguished initial segments at least meet in a point along $A(1,v)$. This shows that any point of $A(1,v)$, and hence by symmetry any point on any edge of our triangle, is at distance at most 2 from some point in the union of the other two edges. \end{proof} Lemmas~\ref{L:PreferredPathsNoLoops} and~\ref{L:2-thinness} guarantee that the hypotheses of the Guessing Geodesics Lemma~\ref{P:GuessingGeodesics} are satisfied. This completes the proof of Theorem~\ref{T:main}. \end{proof} \begin{openproblems} \begin{enumerate} \item What is the boundary at infinity of~$\mathcal C_{AL}(G)$? \item One of the most powerful tools for studying mapping class groups is the use of \emph{subsurface projections} in curve complexes~\cite{MM2}.
Is there a good analogous notion in $\mathcal C_{AL}$? \item\label{Q:InfDiam} Under which conditions on~$G$ does $\mathcal C_{AL}(G)$ have infinite diameter? In Theorem~\ref{T:InfDiam} we shall prove that this is the case if $G$ is the braid group, equipped with the classical Garside structure; however, the condition that $G/Z(G)$ is infinite may actually be sufficient. The special case of Artin-Tits groups of spherical type deserves particular attention. \item\label{Q:WPD} Does $G$ act acylindrically on~$\mathcal C_{AL}(G)$? Recall that mapping class groups act acylindrically on curve complexes~\cite{Bowditch08,PrzSisto}. \item Is it true that ``generic'' elements of~$G$ act loxodromically on~$\mathcal C_{AL}$, and thus are analogous to pseudo-Anosov elements in mapping class groups? If the word ``generic'' is used in the sense of ``a random element in a large ball in the Cayley graph'', then the answer is positive in the special case of braid groups with the classical Garside structure -- see~\cite{CarusoWiestGeneric2,WAutomLoxGeneric}. The question is closely related to question~(\ref{Q:InfDiam}) above. If, by contrast, the word ``generic'' is used in the sense of ``the result of a long random walk in the Cayley graph'', then a positive answer would essentially be implied by a positive answer to question~(\ref{Q:WPD}) above, using~\cite{SistoGeneric}. \item Consider the braid group $B_n$, equipped with its classical Garside structure. Is it true that $\mathcal C_{AL}(B_n)$ is quasi-isometric to $\mathcal{CC}(D_n)$, the curve complex of the $n$-times punctured disk (see Section~\ref{SS:QiWithCC})? (This conjecture is the reason why we think of $\mathcal C_{AL}$ as an analogue of the curve complex.) \item If $G$ is a Garside group with two different Garside structures $(G,P,\Delta)$ and $(G,Q,\delta)$, are the additional length complexes $\mathcal C_{AL}(G,P,\Delta)$ and $\mathcal C_{AL}(G,Q,\delta)$ quasi-isometric? In particular, are the additional length complexes associated with the classical (respectively, dual) Garside structure of the braid group $B_n$ quasi-isometric? We conjecture that they are, since both should be quasi-isometric to $\mathcal{CC}(D_n)$. \item Is the automorphism group of~$\mathcal C_{AL}(G)$ commensurable with~$G$? Recall that the automorphism group of the curve complex is commensurable with the mapping class group, by Ivanov's theorem~\cite{Ivanov}. \item Is there a fast algorithm for finding parametrized quasi-geodesics, or even geodesics, between any two given points in~$\mathcal C_{AL}$? (Note that the Garside normal form yields a fast algorithm for constructing \emph{unparametrized} quasi-geodesics.) To start with, is there a fast algorithm for deciding absorbability? \item Is the construction principle of $\mathcal{C}_{AL}$ useful in contexts other than Garside groups, for instance for general mapping class groups, or for $\mathrm{Out}(F_n)$? \end{enumerate} \end{openproblems} \section{The special case of the braid groups}\label{S:B_n} Throughout this section we consider the special case where $G=B_n$, the braid group on $n$~strands, equipped with the classical Garside structure. For an excellent introduction to this structure, see~\cite{Elrifai-Morton}. \begin{example}\label{E:InfDiam} \begin{enumerate} \item For $n=2$, $B_2$ is the infinite cyclic group generated by $\Delta_2=\sigma_1$, so both $B_2/\langle\Delta_2\rangle$ and the associated additional length complex are trivial.
\item For $n=3$, the only absorbable braids are $\sigma_1$, $\sigma_2$ and their respective inverses. Therefore the additional length complex in that special case is nothing but Bestvina's normal form complex, which has infinite diameter. \end{enumerate} \end{example} \subsection{Periodic and reducible braids act elliptically}\label{SS:PeriodReduc} \begin{example}\label{E:DecompositionDelta} In $B_n$ with $n\geqslant 4$, the braid $\Delta^{k}$ (with $k\in \mathbb Z-\{0\}$) is the product of three absorbable braids. Indeed, suppose $k\geqslant 1$ and let $A=\sigma_1^k$, $B=\sigma_3^k$, and $C=A^{-1}B^{-1}\Delta^k$. Then $\Delta^k=A\cdot B\cdot C$. Moreover, $A$ and~$B$ can absorb each other, and $C$ can be absorbed by~$A$. It follows from Lemma \ref{L:BasicAbsorb}(ii) that $A^{-1}$, $B^{-1}$ and $C^{-1}$ are absorbable. Thus $\Delta^{-k}=C^{-1}B^{-1}A^{-1}$ is the product of three absorbable braids, hence showing the claim for negative powers, too. \end{example} Recall that the braid group acts, on the left, on the set of isotopy classes of simple closed curves in the $n$-times punctured disk. In what follows, we shall take these punctures to be lined up horizontally. Also, by a \emph{round} curve we shall mean the isotopy class of a geometric essential circle (i.e. enclosing more than 1 and less than $n$ punctures). \begin{lemma}\label{L:ReducibleSmallDistance} Suppose that $n\geqslant 4$. Any $n$-braid which sends a round curve to a round curve is a product of at most nine absorbable braids. In particular, every reducible braid with round reduction curves is a product of at most nine absorbable braids. \end{lemma} \begin{proof} Let $y$ be a braid sending a round curve to a round curve. As in the computation of a left normal form, we can get rid of the possible negative factors in $y$ at the cost of at most three absorbable braids (see Example \ref{E:DecompositionDelta}). As powers of $\Delta$ send round curves to round curves we may suppose that $y$ is a positive braid sending the round curve $\mathcal C$ to a round curve. We recall~\cite{BGN,CalvezStandard,GonzalezMenesesRed} that in any braid~$y$ which sends a round curve~$\mathcal C$ to a round curve, pushing the curve~$\mathcal C$ along the braid gives rise to a ``tube'' that stays round all along the braid~$y$. Thus $y$ can be written as the product $y=y_{\rm int}\cdot y_{\rm tub}$ of an interior braid $y_{\rm int}$ and a tubular braid $y_{\rm tub}$: in the interior braid the tube just goes straight down and only the strands inside the tube can cross each other. By contrast, the tubular braid~$y_{\rm tub}$ looks just like~$y$, except that all crossings between pairs of strands living in the tube have been removed. Figure \ref{F:RedAbsorbable} shows an example in $B_5$. We shall show that each of $y_{\rm int}$ and $y_{\rm tub}$ can be written as a product of three absorbable braids. We start with some notation. Firstly, we denote by $\Delta_\mathcal C$ the simple braid in which two strands cross if and only if they both start at punctures enclosed by $\mathcal C$. Secondly, let $i$ be an integer such that punctures number $i$ and~$i+1$ are enclosed by~$\mathcal C$. In order to prove the claim concerning $y_{\rm int}$, let us first suppose that $\mathcal C$ encloses strictly less than $n-1$ punctures, thus at least two punctures are not enclosed by $\mathcal C$. 
If there is a $j$ such that the punctures $j$ and $j+1$ are not enclosed by $\mathcal C$, then $y_{\rm int}$ can be absorbed by an appropriate power of $\sigma_j$ (namely, $\sup(y_{\rm int})$). Otherwise, only the first and the $n$th puncture are not enclosed by $\mathcal C$; then there is an appropriate value of $p$ (namely, $\sup(y_{\rm int})$) so that $\prod_{\iota=1}^{p}\tau^{\iota}(\sigma_1\ldots \sigma_{n-1})$ absorbs $y_{\rm int}$. \begin{figure}[htb] \begin{center}\includegraphics[width=9cm]{RedAbsorbable.pdf} \end{center} \caption{The braid $y=\sigma_1\sigma_2\sigma_1\sigma_4\sigma_3\sigma_2\sigma_1\cdot \sigma_1\sigma_2\sigma_1\sigma_3\sigma_2\sigma_4\cdot \sigma_4\sigma_3\sigma_2\sigma_2\sigma_1\in B_5$, the round curve $\mathcal C$ sent by $y$ to a round curve and the corresponding braids $y_{\rm int}=\sigma_1\sigma_2\sigma_1\cdot \sigma_1\sigma_2$ and $y_{\rm tub}=\sigma_4\sigma_3\sigma_2\sigma_1\cdot\sigma_1\sigma_2\sigma_3\sigma_4\cdot\sigma_4\sigma_3\sigma_2\sigma_1$; interior strands are depicted in bold lines. In this example, $y_{\rm int}$ is absorbable by $\sigma_4^2$. On the other hand, with $i=1$, $\sigma_i^3$ absorbs $y_{\rm tub}$.} \label{F:RedAbsorbable} \end{figure} Suppose now that $\mathcal C$ encloses all the punctures but one. Up to conjugation by $\Delta$, which preserves absorbability (Lemma \ref{L:BasicAbsorb}(ii)), we may assume that the first puncture is not enclosed by~$\mathcal C$. We consider the decomposition $y_{\rm int}=\Delta_{\mathcal C}^k\cdot y'_{\rm int}$, where $k$ is a non-negative integer and $y'_{\rm int}$ is a positive braid not divisible by $\Delta_{\mathcal C}$. Then there is an appropriate value of $p$ (namely $\sup(y'_{\rm int})$) so that $y'_{\rm int}$ is absorbed by $\prod_{\iota=1}^{p} \tau^{p-\iota}(\sigma_{n-1}\ldots \sigma_1)$. The factor~$\Delta_{\mathcal C}^k$, on the other hand, can be further decomposed as $\Delta_{\mathcal C}^k=\sigma_i^k\cdot (\sigma_i^{-k}\Delta_{\mathcal C}^k)$. Both factors are absorbable by $\prod_{\iota=1}^{k} \tau^{k-\iota}(\sigma_{n-1}\ldots \sigma_{1})$. This completes the proof that $y_{\rm int}$ can be written as a product of three absorbable braids. The proof for $y_{\rm tub}$ is similar: the braid $y_{\rm tub}$ can be decomposed into at most three factors which can all be absorbed by an appropriate power of~$\sigma_i$.\end{proof} \begin{proposition}\label{P:PeriodReducActEllipt} We consider the action of the braid group $B_n$, equipped with its classical Garside structure, on its additional length complex $\mathcal C_{AL}(B_n)$ by left multiplication. Then periodic and reducible elements act elliptically. \end{proposition} \begin{proof} We recall that a braid is called periodic if it has some power which is also a power of $\Delta^2$. Since $\Delta^2$ acts trivially on the complex, periodic braids act as finite-order isometries on the complex: their action is thus elliptic. (Note that $\Delta$ does not act trivially: it sends any vertex $x\Delta^\mathbb Z$ to $\tau^{-1}(x)\Delta^\mathbb Z$.) If a braid $x$ is reducible with a round reducing curve, then so is any of its powers. As seen in Lemma~\ref{L:ReducibleSmallDistance}, the orbit of the trivial braid under the action of $x$ remains at distance at most~$9$ from the trivial braid. This means that $x$ acts elliptically. 
In order to deal with the case of braids which are reducible but without round reduction curves, we remark that such braids are conjugate to reducible braids with round reduction curves, and therefore they act elliptically, too. \end{proof} \subsection{$\mathcal C_{AL}(B_n)$ has infinite diameter}\label{SS:InfDiam} \begin{theorem}\label{T:InfDiam} Let $B_n$ be the braid group on $n$ strands ($n\geqslant 3$), equipped with the classical Garside structure. Then the complex $\mathcal C_{AL}(B_n)$ has infinite diameter. \end{theorem} \begin{proof} For $n=3$, this is the statement in Example \ref{E:InfDiam} (2). Our strategy for proving Theorem \ref{T:InfDiam} is to actually construct elements whose action on the additional length complex is loxodromic. For every braid index $n\geqslant 4$, we will construct a special braid $x_n$ of infimum~0 such that the vertex $x_n^{N}\Delta^{{\mathbb Z}}$ ($N\in \mathbb N$) is at a distance at least $\frac{N}{2}$ from the identity vertex of~$\mathcal C_{AL}$. (As an aside, we conjecture that the action of $x_n$ on $\mathcal C_{AL}(B_n)$ is weakly properly discontinuous~\cite{BestvinaFujiwara02, PrzSisto}.) We start with the construction of our special braids $x_n$. The rough idea is that $x_n$~should contain something like the ``blocking braids'' of~\cite{CarusoWiestGeneric2} in order to give~$x_n$ a very strong rigidity property (see Propositions~\ref{P:BetweenPowers} and \ref{P:InitialSegment}), but it should also contain pieces which prevent both $x_n$ and $\partial x_n$ from being absorbable. Here are the details. Recall the \emph{shift} morphism $\text{sh}$ from $B_{\infty}$ to $B_{\infty}$ given by $\sigma_i\mapsto \sigma_{i+1}$ and the reverse antiautomorphism $\text{rev}$, $\sigma_{i_1}\sigma_{i_2}\ldots\sigma_{i_l} \mapsto \sigma_{i_l}\ldots \sigma_{i_2}\sigma_{i_1}$; for fixed $n$ we also denote by $\tau_n$ the conjugation by $\Delta_n$ inside $B_n$: $x\mapsto \Delta_n^{-1}x\Delta_n$.\\ Let us first define the 4-strand braid $$x_4=\sigma_2\cdot\sigma_2\sigma_1\sigma_3\cdot\sigma_1\sigma_3\sigma_2\sigma_1\sigma_3\cdot\sigma_1\sigma_3\sigma_2\sigma_1\sigma_3\cdot\sigma_1\sigma_3\sigma_2\cdot\sigma_2.$$ Then for $n\geqslant 5$, we define $u_n,x_n\in B_n$ as follows: $$u_n=\text{sh}\left(\sigma_{\lfloor\frac{n-2}{2}\rfloor}^{-1}\Delta_{n-2}\right)\left(\sigma_{1}\ldots\sigma_{\lfloor\frac{n-1}{2}\rfloor}\right)\left(\sigma_{n-1}\ldots\sigma_{\lfloor\frac{n+3}{2}\rfloor}\right),$$ where $\lfloor \cdot\rfloor$ stands for the integer part, and \begin{multline*} x_n=\text{sh}^{\lfloor\frac{n-3}{2}\rfloor}\left(\sigma_2\cdot\sigma_2\sigma_1\sigma_3\right)\cdot \prod^n_{\mathclap{\substack{k=5\\ k\equiv n \pmod 2\\ }}} \left( \text{sh}^{\lfloor\frac{n-k}{2}\rfloor}\left(\tau_{k}^{\lfloor\frac{k+1}{2}\rfloor}\left(u_{k}\right)\right)\right) \cdot\\ \tau_n^{\lfloor\frac{n+1}{2}\rfloor}\left(\sigma_{\lfloor\frac{n+1}{2}\rfloor}^{-1}\Delta_n\right)\cdot \tau_n^{\lfloor\frac{n+1}{2}\rfloor}\left(\sigma_{\lfloor\frac{n}{2}\rfloor}^{-1}\Delta_n\right)\cdot\\ \text{rev}\left(\text{sh}^{\lfloor\frac{n-3}{2}\rfloor}\left(\sigma_2\cdot\sigma_2\sigma_1\sigma_3\right)\cdot \prod^n_{\mathclap{\substack{k=5\\ k\equiv n \pmod 2\\ }}} \text{sh}^{\lfloor\frac{n-k}{2}\rfloor}\left(\tau_{k}^{\lfloor\frac{k+1}{2}\rfloor}\left(u_{k}\right)\right)\right). \end{multline*} The braids $x_9$ and $x_{10}$ are depicted in Figure~\ref{F:bloquante}. Note that for each $n\geqslant 4$, we have $\inf(x_n)=0$ and $\ell(x_n)=\sup(x_n)=2\cdot\lfloor \frac{n+1}{2}\rfloor+2$.
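As a sanity check on this bookkeeping, the following short Python sketch (ours, purely illustrative) manipulates positive braid words as lists of generator indices and implements $\text{sh}$, $\text{rev}$ and the letterwise action of $\tau_n$ on positive words ($\sigma_i\mapsto\sigma_{n-i}$):
\begin{verbatim}
# Positive braid words as lists of generator indices:
# [2, 1, 3] stands for sigma_2 sigma_1 sigma_3.

def sh(word, k=1):
    # shift morphism sh^k: sigma_i -> sigma_{i+k}
    return [i + k for i in word]

def rev(word):
    # reverse antiautomorphism
    return list(reversed(word))

def tau(word, n):
    # tau_n = conjugation by Delta_n; on a positive word
    # it acts letterwise by sigma_i -> sigma_{n-i}
    return [n - i for i in word]

# the 4-strand braid x_4 above, factor by factor:
x4 = ([2] + [2, 1, 3] + [1, 3, 2, 1, 3]
      + [1, 3, 2, 1, 3] + [1, 3, 2] + [2])
# letterwise, tau_4(x_4) coincides with rev(x_4):
assert tau(x4, 4) == rev(x4)
\end{verbatim}
The final assertion reflects the symmetry of $x_4$ under $\text{rev}$ combined with $\tau_4$, visible in its definition.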
\begin{figure}[ht] \centerline{\includegraphics[height=11.65cm]{Bloquante.pdf}} \caption{(a) The braid $x_{9}$. (b) The braid $x_{10}$. Both are shown in left (and right) normal form; at the beginning and the end of each of the factors, green crosses indicate which pair of adjacent strands cross in the considered factor.} \label{F:bloquante} \end{figure} \begin{observation}\label{O:PropertiesOfx} For each $n\geqslant 4$, the braid $x_n$ has the following properties: \begin{enumerate} \item The left and right normal forms of~$x_n$ are the same, \item The first factor $s_{\rm first}$ and the last factor $s_{\rm last}$ of the left and right normal form of~$x_n$ are the same and consist of a single atom: $s_{\rm first}=s_{\rm last}=\sigma_{\lfloor\frac{n+1}{2}\rfloor}$; in particular the pair $(s_{\rm last},s_{\rm first})$ is both right and left-weighted. \item The left (and right) normal form of~$x_n$ contains a factor of the form $a^{-1}\Delta$, where $a$ is an atom. Specifically, $x_n$ contains the factor $\tau_n^{\lfloor\frac{n+1}{2}\rfloor}\left(\sigma_{\lfloor\frac{n+1}{2}\rfloor}^{-1}\Delta_n\right)$. \end{enumerate} \end{observation} \begin{remark} \begin{itemize} \item Properties (1), (2), and (3) of $x_n$ above are the only ones used in the proof that $\mathcal C_{AL}(B_n)$ has infinite diameter. \item Property (2) implies in particular that $x_n$ is rigid (see Section \ref{SS:Absorbable}). \item $x_n$ is not absorbable, and neither is $\partial x_n$. Indeed, Property (3), together with Lemma~\ref{L:subword}, implies that $x_n$ is not absorbable, as it contains a factor which by Example~\ref{I:sInvDelta} (5) is not absorbable. Similarly, the first and last factors of $x_n$ contain only one atom (Property (2)), so the corresponding factors of $\partial x_n$ are of the form $\sigma_i^{-1}\Delta$; in particular, $\partial x_n$ is not absorbable, either. \item Another possible definition of $x_n$ would have been $x_n'=\mathrm{rev}(\alpha)\cdot \sigma_{n-1}^{-1}\Delta\cdot \Delta\sigma_{n-1}^{-1}\cdot \alpha$, where $\alpha$ is the ``blocking braid'' from~\cite{CarusoWiestGeneric2}. This braid~$x'_n$ is longer than the one presented above. \end{itemize} \end{remark} From now on, we fix an arbitrary braid index $n\geqslant 4$ and we write $x=x_n$; we write $r$ for the number of factors in the normal form of $x$. Thus $r=2\cdot\lfloor \frac{n+1}{2}\rfloor+2$. \begin{lemma}\label{L:PropertyOfx} Suppose $v$ is a nontrivial positive suffix of~$x$, and $m$ a non-negative integer. (a) The product $v\cdot x^m$ is in left normal form as written, i.e.\ the left normal form of the product is just the juxtaposition of the respective left normal forms of $v$ and of~$x^m$. (b) For every nontrivial positive prefix $t$ of $vx^m$, we have $v\wedge t\neq 1$. \end{lemma} \begin{proof} As the last factor of the \emph{right} normal form of $x$ is $\sigma_{\lfloor\frac{n+1}{2}\rfloor}$ (Observation \ref{O:PropertiesOfx} (1-2)), the only simple suffix of $x$ is $\sigma_{\lfloor\frac{n+1}{2}\rfloor}$; in particular the last factor of the \emph{left} normal form of $v$ is $\sigma_{\lfloor\frac{n+1}{2}\rfloor}$. But this is also the first factor of the left normal form of $x^m$; because the pair $(\sigma_{\lfloor\frac{n+1}{2}\rfloor},\sigma_{\lfloor\frac{n+1}{2}\rfloor})$ is left-weighted, we obtain that the product $v\cdot x^m$ is in left normal form as written, proving~(a). In particular, $ \Delta\wedge vx^m=\Delta\wedge v$. Now let $\sigma$ be a letter which divides $t$. 
In order to prove (b), it is sufficient to show that $\sigma\preccurlyeq v$. But we have: $\sigma\preccurlyeq \Delta\wedge t\preccurlyeq \Delta\wedge vx^m =\Delta\wedge v\preccurlyeq v$. \end{proof} The lemma allows us to show that every prefix of some positive power of $x$ lies exactly between two successive powers of $x$ with respect to the prefix order. We first introduce some notation. \begin{notation} For any braid $z$ with infimum 0 we define the non-negative integer $\lambda_x(z)=\max\{k\in \mathbb Z \mid x^k\preccurlyeq z\}$. \end{notation} \begin{proposition}\label{P:BetweenPowers} Let $z$ be a positive braid with infimum 0; let $\lambda=\lambda_x(z)$. Then the following are equivalent: \begin{itemize} \item[(a)] there exists a positive integer $m$ such that $z\preccurlyeq x^m$, \item[(b)] $x^{\lambda}\preccurlyeq z\preccurlyeq x^{\lambda+1}$. \end{itemize} In this case, the product of the $\lambda r$ first factors of the left normal form of $z$ is exactly~$x^{\lambda}$, that is $\Delta^{\lambda r}\wedge z=x^{\lambda}$. \end{proposition} \begin{proof} The direction (b) $\Longrightarrow$ (a) is obvious. To show the converse, we need to show that $z\preccurlyeq x^{\lambda+1}$. We may assume that $z$ is not a power of $x$. Consider the braid $d=z\wedge x^{\lambda+1}$; by definition there exist positive braids $t$ and $v\neq 1$ such that $z=dt$, $x^{\lambda+1}=dv$ and $t\wedge v=1$. Note that $x^\lambda$ is a prefix of $d$, so that $v$ is a suffix of $x$. Now since $z=d t$ is a prefix of $x^m = d v x^{m-\lambda-1}$, we deduce that $t$ is a prefix of $v x^{m-\lambda-1}$. If $t$ is non-trivial, then we obtain by Lemma \ref{L:PropertyOfx} (b) that $t\wedge v\neq 1$, which is absurd. Thus $t=1$, which means that $z=z\wedge x^{\lambda+1}$, as we wanted to prove. The second part of the statement follows from the calculation: $$x^\lambda=x^\lambda\wedge \Delta^{\lambda r}\preccurlyeq z\wedge \Delta^{\lambda r}\preccurlyeq x^{\lambda+1}\wedge \Delta^{\lambda r}=x^\lambda,$$ where the last equality is due to the rigidity of~$x$. \end{proof} We now see that even without the hypothesis that $z$ is a prefix of some power of $x$, provided that $\lambda_x(z)$ is big enough, there is an initial segment of the left normal form of $z$ which consists of a power of $x$. \begin{proposition}\label{P:InitialSegment} Let $z$ be a braid of infimum 0 and suppose that $\lambda=\lambda_x(z)\geqslant 2$. Then the product of the $(\lambda-1)r$ first factors of the left normal form of $z$ is exactly $x^{\lambda-1}$. \end{proposition} \begin{proof} We may assume that $x^\lambda\neq z$, otherwise the result is trivial. So there exists a non-trivial positive $A$ so that $z=x^\lambda A$. Write $s_1\ldots s_r$ for the normal form of $x$. Let $1<j<r$ be the biggest integer so that $s_j$ has the form $\sigma_i^{-1}\Delta$ (see Observation~\ref{O:PropertiesOfx}(3)). From the algorithm for computing left normal forms -- see \cite{Gebhardt-GM}, Proposition~1 --, and because $x$ is rigid, it follows that the left normal form of $x^\lambda A$ starts with $x^{\lambda-1}s_1\ldots s_j$; otherwise $\inf(z)=0$ would be contradicted. \end{proof} Propositions \ref{P:BetweenPowers} and \ref{P:InitialSegment} admit analogues ``on the right'', namely if $z$ is a suffix of some positive power of $x$, then $z$ lies between two successive powers of $x$ with respect to the suffix order.
Moreover, if $k\geqslant 2$ is the maximal integer so that $x^{k}$ is a suffix of $z$, then the left normal form of $z$ has a final segment consisting of $x^{k-1}$. However, these facts will not be used in the proof of Theorem \ref{T:InfDiam} so we do not prove them. Instead, we state the easier: \begin{proposition}\label{P:FinalSegment} Let $z$ be a positive braid of infimum 0 and assume that there is a positive integer $k$ so that $x^{k+1}\succcurlyeq z\succcurlyeq x^k$. Then the final segment of length $kr$ in the left normal form of $z$ consists of $x^k$. \end{proposition} \begin{proof} Again, we may assume that $z$ is not a power of $x$. There exist by hypothesis some non-trivial positive braids $v$ and $w$ so that $x^{k+1}=wz$ and $z=vx^k$. Combining both, we get $x^{k+1}=wvx^k$; cancelling $x^k$ on the right, it follows that $x=wv$, so that $v$ is a suffix of $x$. By Lemma \ref{L:PropertyOfx}(a), $z=vx^k$ is in left normal form as written. This shows the result. \end{proof} \begin{proposition}\label{P:PathGoesThroughx} Suppose that $z_1,z_2$ are braids with infimum 0; let $v_i$ ($i=1,2$) be the vertex of $\mathcal C_{AL}$ whose distinguished representative is $z_i$. Let $\lambda_1=\lambda_x(z_1)$ and $\lambda_2=\lambda_x(z_2)$. Assume that $\lambda_2-\lambda_1\geqslant 3$. Then the path $A(v_1,v_2)$ contains $A(x^{\lambda_1+1}\Delta^{{\mathbb Z}},x^{\lambda_2-1}\Delta^{{\mathbb Z}})$. \end{proposition} \begin{proof} See Figure~\ref{F:PfLemma}. We look at the two paths $\gamma_1=A(v_1,x^{\lambda_2-1}\Delta^{{\mathbb Z}})$ and $\gamma_2=A(1,v_2)$. We claim that $\gamma_1$ and~$\gamma_2$ coincide along $A(x^{\lambda_1+1}\Delta^{{\mathbb Z}},x^{\lambda_2-1}\Delta^{{\mathbb Z}})$. Let us prove this claim. On the one hand, by Proposition \ref{P:InitialSegment}, $A(1,x^{\lambda_2-1}\Delta^{{\mathbb Z}})$ is an initial segment of $\gamma_2$. On the other hand, let $z_3=z_1\wedge x^{\lambda_2-1}$ and let $v_3$ be the vertex of $\mathcal C_{AL}$ whose distinguished representative is~$z_3$. By Lemma~\ref{L:pgcd}, $\gamma_1$ is the concatenation of $A(v_1,v_3)$ and $A(v_3,x^{\lambda_2-1}\Delta^{\mathbb Z})$. Note that $\lambda_x(z_3)=\lambda_1$. By Proposition~\ref{P:BetweenPowers}, $x^{\lambda_1}\preccurlyeq z_3\preccurlyeq x^{\lambda_1+1}$. It follows that $$x^{\lambda_2-\lambda_1-1}\succcurlyeq z_3^{-1}x^{\lambda_2-1} \succcurlyeq x^{\lambda_2-\lambda_1-2}.$$ By Proposition \ref{P:FinalSegment}, the left normal form of $z_3^{-1}x^{\lambda_2-1}$ terminates with $(\lambda_2-\lambda_1-2)r$ factors whose product is exactly $x^{\lambda_2-\lambda_1-2}$. In other words, $A(v_3,x^{\lambda_2-1}\Delta^{\mathbb Z})$ has a final segment equal to $A(x^{\lambda_1+1}\Delta^{\mathbb Z},x^{\lambda_2-1}\Delta^{\mathbb Z})$ and our claim is shown. \begin{figure}[ht] \centerline{\includegraphics{InfDiam.pdf}} \caption{Proof of Proposition~\ref{P:PathGoesThroughx}.} \label{F:PfLemma} \end{figure} Now consider the path $\gamma$ formed by the subpath of $\gamma_1$ between $v_1$ and $x^{\lambda_2-1}\Delta^{{\mathbb Z}}$, followed by the subpath of $\gamma_2$ between $x^{\lambda_2-1}\Delta^{{\mathbb Z}}$ and $v_2$. Observe that $\gamma$ connects $v_1$ and $v_2$ and that the product of the labels of the successive edges along $\gamma$ gives a left normal form. This says that $\gamma=A(v_1,v_2)$, hence proving the proposition. \end{proof} We are now ready to complete the proof of Theorem~\ref{T:InfDiam}. We will do this by proving the following claim.
{\bf Claim } For all positive integers~$N$, the $N$th power of $x$ lies at distance at least $\frac{N}{2}$ from the identity vertex in~$\mathcal C_{AL}$. In order to prove this claim, we fix an $N$, and suppose for a contradiction that there exists a path of length $K$, with $K<\frac{N}{2}$, which connects the identity vertex with the vertex $x^N\Delta^{{\mathbb Z}}$. Let $v_0=1, v_1,\ldots ,v_K=x^N\Delta^{\mathbb Z}$ be the vertices along this path. Notice that $\lambda_x(\underline{v_0})=0$ and $\lambda_x(\underline{v_K})=N$. Thus there is some integer~$i$ between 0 and $K-1$ such that $\lambda_x(\underline{v_{i+1}})\geqslant \lambda_x(\underline{v_i})+3$. This index~$i$ will play a key role in what follows. By Proposition~\ref{P:PathGoesThroughx}, the path $A(v_i,v_{i+1})$ contains the subpath $$A(x^{\lambda_x(\underline{v_i})+1}\Delta^{\mathbb Z},x^{\lambda_x(\underline{v_{i+1}})-1}\Delta^{{\mathbb Z}}).$$ We will see that this contradicts the equality $d_{AL}(v_i,v_{i+1})=1$. The vertices $v_i$ and $v_{i+1}$ cannot be connected by an edge labeled by a simple element, otherwise $A(v_i,v_{i+1})$ would be a path of length 1 which cannot contain $A(x^{\lambda_x(\underline{v_i})+1}\Delta^{\mathbb Z},x^{\lambda_x(\underline{v_{i+1}})-1}\Delta^{{\mathbb Z}})$ as a subpath. Thus it only remains to see that there cannot be any absorbable element~$y$ so that $\underline{v_i} y\in \underline{v_{i+1}}\Delta^{\mathbb Z}$. Actually, such a braid~$y$ must have the form $y=\underline {v_i}^{-1}\underline{v_{i+1}}\Delta^k$ for some integer $k$. In order for $y$ to be absorbable, we must have $k=k_1=-\inf(\underline {v_i}^{-1}\underline{v_{i+1}})$ or $k=k_2=-\sup(\underline {v_i}^{-1}\underline{v_{i+1}})$. In the first case, $y$ is the braid whose left normal form is given by reading the edges along $A(v_i,v_{i+1})$ and thus cannot be absorbable, by Lemma~\ref{L:subword}. In the second case, $y$ is negative; its inverse $y^{-1}$ is a positive braid whose left normal form is obtained by reading the edges along the path $A(v_{i+1},v_i)$. Again, this braid cannot be absorbable because its left normal form contains $\partial x$ as a subword. This contradicts the fact that $d_{AL}(v_i,v_{i+1})=1$, completing the proof of the claim and of Theorem~\ref{T:InfDiam}. \end{proof} \subsection{Quasi-isometry with the curve complex}\label{SS:QiWithCC} To conclude this paper, we turn to the question of whether the additional length complex $\mathcal C_{AL}(B_n)$ is actually quasi-isometric to $\mathcal{CC}$, the curve complex of the $n$-times punctured disk. If the answer to this question is positive, as we conjecture, then our previous results show in particular that Garside normal forms in~$B_n$ project to unparametrized quasi-geodesics in $\mathcal{CC}$. We shall construct a Lipschitz map $$ \mathcal{CC} \longrightarrow \mathcal{C}_{AL}(B_n). $$ The most natural way to think of this map is to introduce first another model for the curve complex. Start with the Cayley graph of~$B_n$, with respect to any finite generating set, for instance Garside's. Next we recall that there are only finitely many \emph{round} simple closed curves in~$D_n$, the disk with $n$ punctures lined up horizontally. For each such curve~$c$, look at the set $S_c\subset B_n$ consisting of all braids which stabilise~$c$, and build a cone on the subset $S_c$ of the Cayley graph, i.e.\ introduce a new vertex and connect each element of~$S_c$ by an edge of length one to this new vertex.
Also build copies of these finitely many cones all over the Cayley graph by translating them using the left action of $B_n$ on the Cayley graph. Let us denote the resulting space $\mathcal{CC}\hat{\phantom{I}}$; this is sometimes called the ``electric space''. As proven by Masur and Minsky~\cite[Lemma 3.2]{MM1}, the space $\mathcal{CC}\hat{\phantom{I}}$ is quasi-isometric to the curve complex of $D_n$; a quasi-isometry $\mathcal{CC}\hat{\phantom{I}}\to \mathcal{CC}$ is given by sending the vertex of the Cayley graph corresponding to $x\in B_n$ to the curve $x.c_0$, where $c_0$ is any simple closed curve in $D_n$ (e.g.\ a round one). Now we can construct a very nice map $\phi\colon \thinspace \mathcal{CC}\hat{\phantom{I}} \to \mathcal C_{AL}$. It suffices to map vertices and edges of $\mathcal{CC}\hat{\phantom{I}}$ belonging to the Cayley graph by the identity map. Every cone vertex is mapped in the same way as an arbitrarily chosen one of its adjacent vertices. Finally, every cone edge can be sent to an arbitrarily chosen edge path in $\mathcal C_{AL}$ of length at most nine (this is possible by Lemma~\ref{L:ReducibleSmallDistance}). \begin{conjecture}\label{C:QI} The map $\phi\colon \thinspace \mathcal{CC}\hat{\phantom{I}} \to \mathcal C_{AL}(B_n)$ is a quasi-isometry. \end{conjecture} All that remains to be proven is that the map~$\phi$ does not shrink distances too much. More precisely, it suffices to prove that there exists a positive number $D$ with the following property: if $x\in B_n$ is such that $d_{AL}(1_G,x)=1$, then $d_{\mathcal{CC}\hat{\phantom{I}}}(1_G, x)\leqslant D$. This comes down to the very plausible claim that every absorbable braid is the product of at most~$D$ braids fixing some round curve. {\bf{Acknowledgements.}} Support by ANR LAM (ANR-10-JCJC-0110) is gratefully acknowledged. The first author is supported by the ``initiation to research'' project no.~11140090 from Fondecyt, by MTM2010-19355 and FEDER, and Fondecyt Anillo 1103.
\section{Introduction} The extraction of vector representations of building polygons from aerial or satellite imagery has become a hot topic in numerous remote sensing applications, such as urban planning and development, city modelling, cartography, \etc. The interest in and development of new methodologies have also been motivated by the existence of several public benchmark datasets, like INRIA~\cite{maggiori2017dataset}, SpaceNet~\cite{Urban3D2017}, and CrowdAI~\cite{Mohanty:2018}. The classical approaches in this research field mostly focused on the assignment of a semantic class to each pixel in the image, obtaining classification masks as output~\cite{tokarczyk2013beyond,yuan2016automatic,bittner2018building,paper1}. However, for many applications, a more advanced output in the form of vector information is in demand. In this work, we aim not only to provide building segmentation results whose outlines follow realistic building forms, mainly straight lines and right angles, but also to generate a polygonal vector structure for each building instance. \Glspl{gl:CNN} have brought significant contributions to the field of computer vision, establishing themselves as the basis of semantic and instance segmentation. However, while they perform pixel-wise classification with high accuracy, they have problems with delineating exact and regular building boundaries. To overcome this issue, we apply geometry constraints in the pixel domain using an adversarial loss to regularize the boundaries. Specifically, the generative part of the proposed \gls{gl:GAN}-based architecture takes as input either the segmentation results obtained from \gls{gl:R2UNet} or the ideal segments from the dataset's ground truth. By getting gradient feedback from the discriminator, whose task is to verify whether its input comes from a regularized segmentation mask or an ideal one, the generator learns to output improved outline contours for our initial segmentation. \begin{figure} \centering \includegraphics[width=1\linewidth]{imgs/first_page4.jpg} \caption{Building polygon results from our proposed methodology overlaid on top of a sample area from the Inria dataset.} \label{fig:firstpage} \end{figure} In the literature, several methodologies have already attempted to directly predict the vertices of object boundaries using the \gls{gl:CNN} paradigm. They are either based on the iterative prediction of outline points for one object at a time~\cite{castrejon2017annotating,acuna2018efficient}, with possible interaction by users for corrections, or predict only 4-sided polygons~\cite{girard2018end}. However, real-world buildings are not constrained to a certain number of corners. Motivated by these ideas,~\citet{li2019topological} proposed an \gls{gl:RNN} on top of the \gls{gl:RPN} which predicts, step by step, the possible corners for a single building within every region of interest. In our method, we do not want to be limited to corner prediction for a single building centered inside the input patch. The proposed Mask2Poly\space network is trained to predict an arbitrary number of corners (depending on structure complexity) for an arbitrary number of buildings in the image scene from the regularized segmentation results. Some results of polygonal representations obtained from the corner predictions of Mask2Poly\space are shown in \cref{fig:firstpage}. In~\cref{sec:relatedwork}, we review state-of-the-art methodologies in the related field.
The details of the designed architectures and the intuition behind the selected objective functions are then presented in~\cref{sec:method}. In~\cref{sec:Experiments}, we demonstrate the effectiveness of our approach, showing qualitative and quantitative results on three publicly available datasets, \ie INRIA~\cite{maggiori2017dataset}, SpaceNet~\cite{Urban3D2017} and CrowdAI~\cite{Mohanty:2018}. \cref{sec:Conclusion} concludes the paper. \begin{figure*}[thbp] \centering \includegraphics[width=1.0\linewidth]{imgs/pipeline_light.jpg} \put (-75,6) {\scriptsize{Extracted polygons}} \put (-210,6) {\scriptsize{Regularized mask}} \put (-355,6) {\scriptsize{Segmentation mask}} \put (-490,6) {\scriptsize{Input image}} \put (-405,58) {\scriptsize{SEG}} \put (-265,58) {\scriptsize{REG}} \put (-124,58) {\scriptsize{M2P}} \caption{The schematic overview of the proposed pipeline for automatic extraction of regularized building polygons. Buildings are initially detected and segmented by a \gls{gl:FCN} (result shown in black). A footprint regularization network is then applied to the segmentation mask in the pixel domain (red). Finally, building polygons are extracted from the regularized mask (cyan, vertices highlighted in yellow).} \label{fig:pipeline} \end{figure*} \section{Related work} \label{sec:relatedwork} \textbf{Building segmentation} from top-view images has been one of the main research topics in remote sensing for decades. Before the deep learning era, the traditional methodologies for building footprint extraction relied on multi-step workflows utilizing detected low-level features to form building hypotheses~\cite{huertas1988detecting,guercke2011building}, assumptions that buildings are composed of regular rectangular shapes~\cite{kim1999uncertain,bredif2013extracting}, and similarities of spectral reflectance values between building appearances~\cite{huang2012morphological,baluyan2013novel}. After the introduction of more powerful hardware, recent approaches began to heavily utilize deep convolutional networks for automatic building delineation, providing state-of-the-art results. The task is approached via pixel-wise semantic segmentation, applying \glspl{gl:FCN} to satellite or airborne images and exploiting their high-resolution spectral information~\cite{yuan2016automatic,hamaguchi2018building}. Some methodologies embedded additional information in the form of heights from \glspl{gl:DSM}~\cite{lagrange2015benchmarking,bittner2018building} or \gls{gl:OSM}~\cite{audebert2017joint} together with the spectral information to increase the evidence of buildings. In the last few years, UNet-based architectures have become one of the most successful models for segmentation and detection tasks not only in medical images but also in remote sensing. Motivated by recently proposed UNet-based models that achieved state-of-the-art performances in different building extraction challenges~\cite{iglovikov2018ternausnetv2, hamaguchi2018building}, the variant of UNet with residual and recurrent layers~\cite{alom2018recurrent} is utilized in this work. \textbf{Building segmentation regularization} has been getting increased attention over recent years. Because neural networks try to decide for each image pixel whether it belongs to a building or not, they do not consider the building geometry. As a result, building segmentation results very often have a blob-like appearance.
Therefore, a footprint regularization step is very important to enforce that the resulting outlines not only match the ground truth but also have realistic appearances. \Citet{zhao2018building} proposed to regularize building instances obtained from semantic segmentation networks by applying multi-step polygon simplification methods. \Citet{marcos2018learning} proposed a more advanced architecture by integrating the classic active contour model of~\citet{kass1988snakes} into a deep \gls{gl:CNN} to perform joint end-to-end learning. In a follow-up work, \citet{cheng2019darnet} introduced a network based on a polar representation of active contours which prevents self-intersections and forces outlines to be even closer to the ground truth. The work most closely related to ours is \citet{paper1}, which looked at the problem differently. The authors of this paper trained the regularization network in an unsupervised manner using adversarial losses together with Potts~\cite{tang2018regularized, tang2018normalized} and normalized cut~\cite{tang2018normalized} regularization losses, which embed additional knowledge about building boundaries from the intensity image into the network. In our work, we extend the algorithm proposed in \cite{paper1}, redefining the training procedure and the architecture of the regularization network to obtain better results in both qualitative and quantitative terms. \textbf{Polygon prediction} is a difficult but crucial step for multiple disciplines as it provides vector-based data representations. Typically, semantic segmentation results are vectorized employing Douglas-Peucker~\cite{douglas1973algorithms}, RANSAC~\cite{fischler1981random} or Hough transform~\cite{duda1971use} algorithms as a post-processing step. Recent approaches made an attempt to integrate a vectorization procedure into an end-to-end deep learning-based model. The approach of~\citet{castrejon2017annotating} and the follow-up work of~\citet{acuna2018efficient} sequentially produce polygonal vertices around the object boundary using an \gls{gl:RNN}. Although these methodologies provided impressive results, they are different from our proposed algorithm in terms of the size and number of polygonized objects (an image crop containing only one object is annotated per procedure). Moreover, a human annotator's interaction is allowed during the prediction of polygonal vertices to correct them if needed. In contrast, we propose a deep learning-based methodology which automatically predicts polygon vertices without any limitation on the number of objects within an input image. \section{Proposed method} \label{sec:method} In this paper, we propose a pipeline for building extraction that not only aims to achieve state-of-the-art segmentation accuracy, but also tries to predict visually pleasing building polygons. The pipeline is composed of three consecutive and independent steps. As a first step, a \gls{gl:FCN} is used to detect and segment building footprints given an intensity image. The resulting segmentation can achieve great accuracy in terms of \gls{gl:IoU}, recall and completeness, but the predicted building boundaries do not have a regular shape since there are no constraints on the building geometry. In order to produce a more realistic segmentation, we further refine the result through a second \gls{gl:CNN} trained using a combination of adversarial, reconstruction and regularized losses. As a result, the extracted building footprints have a more regular shape, with sharp corners and straight edges.
As we show later in \cref{sec:Experiments}, this step greatly increases the footprint quality without losing segmentation accuracy. Finally, we extract a polygon for each building instance by detecting the corners of its regularized mask. In the subsequent sections, we describe in more detail each component of the pipeline. \subsection{Building detection and segmentation} The first step in the proposed method aims to detect and outline the boundaries of the buildings present in the satellite or aerial image. This task can be solved exploiting one of the many instance or semantic segmentation networks proposed in the literature, trained using cross-entropy losses. Since the three stages of the pipeline are independent from each other, it is possible to choose the instance or semantic segmentation network which is best suited to or performs best on the specific dataset. In this work, we decided to use as segmentation baseline the \gls{gl:R2UNet} proposed in \cite{alom2018recurrent}, a simple yet precise network which guarantees high building segmentation accuracy. \subsection{Regularization of the segmentation} \begin{figure*}[thbp] \centering \includegraphics[width=0.95\linewidth]{imgs/workflow3.png} \put (-188,15) {\scriptsize{conv 1$\times$1}, sigmoid} \put (-188,32) {\scriptsize{max pool 2$\times$2}} \put (-266,28) {\scriptsize{conv 3$\times$3},} \put (-266,34) {\scriptsize{batch norm, ReLU}} \put (-266,15) {\scriptsize{up-sampling 2$\times$2}} \put (-188,61) {\scriptsize{Either regularized or}} \put (-186,53) {\scriptsize{reconstructed mask}} \put (-108,32) {\scriptsize{residual layer}} \put (-468,99) {\scriptsize{Image}} \put (-429,99) {\scriptsize{Segmentation}} \put (-424,2) {\scriptsize{Ideal mask}} \put (-465,156) {\textbf{$z$}} \put (-410,156) {\textbf{$x$}} \put (-410,58) {\textbf{$y$}} \put (-335,100) {\textbf{$E_G$}} \put (-335,9) {\textbf{$E_R$}} \put (-282,110) {\textbf{$F$}} \put (-50,110) {\textbf{$D$}} \put (-22,94) {\scriptsize{$true$}} \put (-22,81) {\scriptsize{$false$}} \caption{Workflow of the proposed regularization framework. It is composed of two paths: the generator path ($E_G \rightarrow F$) produces the regularized building footprint mask; the reconstruction path ($E_R \rightarrow F$) encodes and decodes the ideal input mask, ensuring that the discriminator receives the same type of real-valued masks as input.} \label{fig:workflow} \end{figure*} The footprints predicted by the segmentation network typically have rounded corners and irregular edges due to the lack of geometric constraints during the prediction. Extracting building polygons from the initial building segmentation is a hard task that could lead to errors in the corner proposal procedure. For this reason, as a second step, we use a \gls{gl:CNN} for building regularization that aims to produce building footprints with regular and visually pleasing boundaries. This translation can be successfully achieved by training a \gls{gl:GAN} composed of two different models. One of these networks is a generator which tries to generate a regularized version of the segmentation mask, and the other network is a discriminator that examines generated and ideal footprints and estimates whether they are real or fake. The goal of the generator is to fool the discriminator, and as both networks get better and better at their job over the training, eventually the generator is forced to generate building footprints which become more realistic with each iteration.
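Before formalizing these losses, the following minimal PyTorch-style sketch illustrates one step of this adversarial game. It is only a schematic under simplifying assumptions: the modules \texttt{e\_g}, \texttt{e\_r} (encoders), \texttt{f} (shared decoder) and \texttt{d} (discriminator) are placeholders, the non-saturating form of the adversarial term is used, and the loss weights as well as the Potts and normalized cut terms (defined below) are omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def train_step(e_g, e_r, f, d, opt_g, opt_d, x, z, y):
    # x: segmentation mask, z: intensity image, y: ideal mask
    reg = f(e_g(torch.cat([x, z], 1)))  # regularized mask G(x,z)
    rec = f(e_r(y))                     # reconstructed ideal mask R(y)

    # generator/reconstruction update: fool d, stay close to the
    # input mask x, and reproduce the ideal mask y
    p = d(reg)
    loss_g = F.binary_cross_entropy(p, torch.ones_like(p)) \
           + F.binary_cross_entropy(reg, x) \
           + F.binary_cross_entropy(rec, y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # discriminator update: reconstructed masks are "real",
    # regularized masks are "fake"
    p_real = d(rec.detach()); p_fake = d(reg.detach())
    loss_d = F.binary_cross_entropy(p_real, torch.ones_like(p_real)) \
           + F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
\end{verbatim}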
The generator aims to learn a mapping function between the domain $X$, composed of segmented footprints, and the domain $Y$, made of ideal footprints, given the training samples $\{x_i\}^N_{i=1}$ where $x_i \in X$ and $\{y_i\}^M_{i=1}$ where $y_i \in Y$. To further improve the results we also exploit the intensity images, $\{z_i\}^N_{i=1}$ where $z_i \in Z$, training the model with an additional regularized loss. The generator performs the regularization $G : \{ X, Z \} \xrightarrow{} Y$ exploiting a residual autoencoder structure, as shown in \cref{fig:workflow}. The regularized footprint is produced through the path composed of the encoder $E_G$ and the residual decoder $F$, so the generator $G$ can be seen as their combination $G(x,z) = F(E_G(x,z))$. The discriminator network $D$ tries to estimate whether the presented images are regularized footprints, generated by $G$, or ideal ones. However, if the ideal masks were fed to $D$ directly, the adversarial network could easily distinguish the two distributions, since the ideal mask is one-hot encoded with zeros and ones while the output of the autoencoder can range between zero and one. For this reason, the ideal mask is passed through the reconstruction path ($E_R \rightarrow F$), which derives a reconstructed version of $y$: both reconstructed and regularized image samples are thus generated using the same network $F$. Due to the joint training of the two autoencoders with a common decoder, the proposed architecture is ensured to be stable and, as a result, escapes the situation where the discriminator wins. \subsubsection{Objective Function} Motivated by the good building footprints produced in \cite{paper1}, three types of loss functions are used in the learning procedure: \textit{adversarial loss}, \textit{reconstruction losses} and \textit{regularized loss}. The \textit{adversarial loss}, introduced in~\cite{goodfellow2014generative}, is used to learn the mapping function between the domains $X$ and $Y$, encouraging the generator $G$ to produce footprints similar to the ideal samples. This component of the objective function acts as a constraint on the boundary geometry of the buildings and it is expressed as: \begin{equation} \label{eqation:loss_gan_G} \begin{split} \mathcal{L}_{GAN}(G,D) = {E}_{x,z}[\log(1-D(G(x,z)))] \end{split} \end{equation} The discriminator $D$ is trained to distinguish regularized and reconstructed footprints and its objective function can be expressed as: \begin{equation} \label{eqation:loss_gan_D} \begin{split} \mathcal{L}_D(G,R,D) &= {E}_{y}[\log(1-D(R(y)))] \\&+ {E}_{x,z}[\log D(G(x,z))] \end{split} \end{equation} where the path $R(y)=F(E_R(y))$ encodes and reconstructs the ideal mask and the path $G(x,z)=F(E_\text{G}(x,z))$ generates the regularized footprints. The \textit{reconstruction} term is introduced to force the generator $G$ to produce building footprints having an overall shape and pose similar to the segmentations received as input. The loss is also computed through the reconstruction path $R$ to obtain a reconstructed version of the ideal mask. As reconstruction loss we simply use \textit{binary cross entropy} and the two losses can be written as: \begin{equation} \begin{split} \mathcal{L}_{rec_G}(G) &= -{E}_{x,z} [x \cdot \log G(x,z)] \\ \mathcal{L}_{rec_R}(R) &= -{E}_{y} [y \cdot \log R(y)] \end{split} \end{equation} Alongside the adversarial and reconstruction losses, soft versions of the Potts and Normalized Cut criteria are used as regularized losses, exploiting the information of the intensity image to further improve the regularization results.
The Potts and the Normalized Cut methods are popular graph clustering algorithms originally proposed for image segmentation. As demonstrated in \cite{paper1}, these terms can be effectively minimized by the generator $G$. As a result, the final footprints are aligned to the building boundaries observed in the intensity image. The Potts and the normalized cut losses can be expressed as: \begin{equation} \label{eq:regularized} \begin{split} \mathcal{L}_{Potts}(G) = {E}_{x,z} \sum_{k}^{} S^{k\top} W (1-S^k) \\ \mathcal{L}_{ncut}(G) = {E}_{x,z} \sum_{k}^{} \frac{S^{k\top} \hat{W} (1-S^k)}{1^\top \hat{W} S^k} \end{split} \end{equation} where $S = G(x,z)$ is the $k$-way softmax mask generated by the network and $S^k$ denotes the vectorization of its $k$-th channel. $W$ and $\hat{W}$ are matrices of pairwise discontinuity costs; each entry describes the weight between two nodes (or pixels) and is computed using a Gaussian kernel over the RGBXY space. The full objective used to jointly train the generator path $G$ and the reconstruction path $R$ is a linear combination of the adversarial loss, the regularized losses and the reconstruction losses. \begin{equation} \label{eq:full_objective} \begin{split} \mathcal{L}_{}(G,R,D) &= \alpha \mathcal{L}_{GAN}(G,D)\\ &+ \beta \mathcal{L}_{rec_G}(G) + \gamma \mathcal{L}_{rec_R}(R) \\ &+ \delta \mathcal{L}_{Potts}(G) + \epsilon \mathcal{L}_{ncut}(G) \end{split} \end{equation} It is worth noting that these loss components are obtained by connecting the encoders $E_R$ and $E_G$ to the residual decoder $F$ one at a time. Once the full objective is computed, $E_G$, $E_R$ and $F$ are updated jointly. \subsection{Polygon extraction} \begin{figure}[thbp] \centering \includegraphics[width=1\linewidth]{imgs/mask2poly.png} \put (-246,72) {\scriptsize{prediction from CNN}} \put (-161,72) {\scriptsize{ordering and filtering}} \put (-77,72) {\scriptsize{final polygon}} \caption{Polygon extraction steps: given the regularized building footprint, a CNN model detects all the building corner candidates (yellow vertices). The vertices are then sorted to produce a valid set of polygon coordinates. Points which lie too close to a building edge are filtered (in red). The final set of coordinates which describes the polygon is highlighted in green.} \label{fig:mask2poly} \end{figure} Once the building footprints have been regularized, we extract a polygon for each building instance. This task is accomplished using a simple \gls{gl:CNN} for corner detection. The model receives the regularized mask as input and produces a corner proposal probability map. Pixels with a value higher than a certain threshold in the probability map can be considered valid corners for the building polygon. During inference, each regularized footprint is evaluated by the corner detection network independently. The detected points are then ordered clockwise, moving along the perimeter of the regularized footprint, in order to produce a valid set of coordinates for the polygon. As a final step, we filter redundant points that lie close to an edge as shown in~\cref{fig:mask2poly}. \section{Experiments} \label{sec:Experiments} \subsection{Experimental setup} \subsubsection{Dataset} The proposed pipeline has been evaluated on several aerial and satellite building segmentation datasets: INRIA~\cite{maggiori2017dataset}, CrowdAI~\cite{Mohanty:2018}, and SpaceNet~\cite{Urban3D2017}.
The INRIA dataset is an aerial dataset which covers a wide range of urban settlement appearances from different geographic locations. The particularity of this dataset is that the cities included in the test set are different from those of the training set; it is composed of 180 training and 180 testing $5000 \times 5000$ orthorectified images with a resolution of 30 cm. The CrowdAI dataset consists of 280,000 satellite images for training and 60,000 images for testing with an image resolution of $300 \times 300$ pixels. During test set inference, over 500,000 building instances are extracted and regularized. The SpaceNet dataset is composed of 30-50 cm pan-sharpened RGB satellite images from two cities in Florida: Jacksonville and Tampa. The dataset is split into 62 images for the test set and 174 images for the training set. The provided images have a size of $2048 \times 2048$ pixels. All these datasets feature a wide variety of buildings with different sizes, shapes and complexities, which makes the extraction of regularized polygons challenging. \subsubsection{Network Architecture} The \textbf{regularization network} has a residual autoencoder structure, as shown in~\cref{fig:workflow}. The encoders $E_G$ and $E_R$ are sequences of $3 \times 3$ convolutional layers followed by batch normalization~\cite{ioffe2015batch} and $2 \times 2$ max-pooling layers. After every down-sampling operation the number of convolutional filters is doubled, while the tensor size is halved. The decoder $F$ is composed of a chain of 8 residual layers~\cite{he2016deep} followed by $3 \times 3$ convolutions, batch normalization layers and $2 \times 2$ up-sampling operations. Compared to the architecture proposed in \cite{paper1}, our encoders only have two pooling layers in order to keep track of fine details of the input mask. As shown in \cref{sec:Experiments}, this choice allows the decoder $F$ to reconstruct the buildings received as input more accurately and, at the same time, to regularize them effectively, regardless of their shape and complexity. The discriminator $D$ shares the same layer combination of the encoders $E_G$ and $E_R$, but it has a deeper architecture, with 4 max-pooling operations in total. For the \textbf{corner detection network} we simply reuse the architecture of the network $G$ used for the building regularization, but with only 4 residual layers.
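To fix ideas, a minimal PyTorch-style sketch of the encoder and of the residual decoder described above is given below. It is an illustration only: the base filter width (64), the up-sampling mode and the final sigmoid are our assumptions, not specifications taken from the trained model.
\begin{verbatim}
import torch.nn as nn

def conv_bn(cin, cout):
    # 3x3 convolution + batch normalization + ReLU
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class Encoder(nn.Module):
    # Only two pooling stages, doubling the filters after each of them;
    # for E_G the input is the mask concatenated with the intensity image.
    def __init__(self, cin, base=64):
        super().__init__()
        self.net = nn.Sequential(conv_bn(cin, base), nn.MaxPool2d(2),
                                 conv_bn(base, 2 * base), nn.MaxPool2d(2))

    def forward(self, x):
        return self.net(x)

class ResBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(conv_bn(c, c),
                                  nn.Conv2d(c, c, 3, padding=1),
                                  nn.BatchNorm2d(c))

    def forward(self, x):
        return nn.functional.relu(x + self.body(x))

class Decoder(nn.Module):
    # Chain of 8 residual layers, then two 2x2 up-sampling stages
    # mirroring the encoder, and a one-channel soft mask output.
    def __init__(self, c=128):
        super().__init__()
        self.net = nn.Sequential(*[ResBlock(c) for _ in range(8)],
                                 nn.Upsample(scale_factor=2), conv_bn(c, c // 2),
                                 nn.Upsample(scale_factor=2), conv_bn(c // 2, c // 4),
                                 nn.Conv2d(c // 4, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)
\end{verbatim}
In this sketch the shared decoder $F$ would be one \texttt{Decoder} instance, while $E_G$ and $E_R$ would be two \texttt{Encoder} instances differing only in their input channel counts.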
\begin{table*}[!htbp] \centering \begin{tabular}{lllllllllllll} \cline{2-13} & \multicolumn{12}{c}{INRIA} \\ \cline{2-13} & \multicolumn{2}{c|}{Bellingham} & \multicolumn{2}{c|}{Bloomington} & \multicolumn{2}{c|}{Innsbruck} & \multicolumn{2}{c|}{San Francisco} & \multicolumn{2}{c|}{Tyrol} & \multicolumn{2}{c}{Overall} \\ \cline{2-13} & \multicolumn{1}{c|}{IoU} & \multicolumn{1}{c|}{Acc} & \multicolumn{1}{c|}{IoU} & \multicolumn{1}{c|}{Acc} & \multicolumn{1}{c|}{IoU} & \multicolumn{1}{c|}{Acc} & \multicolumn{1}{c|}{IoU} & \multicolumn{1}{c|}{Acc} & \multicolumn{1}{c|}{IoU} & \multicolumn{1}{c|}{Acc} & \multicolumn{1}{c|}{IoU} & \multicolumn{1}{c}{Acc} \\ \hline \multicolumn{1}{l|}{R2UNet} & \multicolumn{1}{l|}{70.30} & \multicolumn{1}{l|}{\textbf{97.04}} & \multicolumn{1}{l|}{72.94} & \multicolumn{1}{l|}{\textbf{97.40}} & \multicolumn{1}{l|}{\textbf{73.48}} & \multicolumn{1}{l|}{\textbf{96.85}} & \multicolumn{1}{l|}{\textbf{76.29}} & \multicolumn{1}{l|}{\textbf{91.85}} & \multicolumn{1}{l|}{75.92} & \multicolumn{1}{l|}{\textbf{97.84}} & \multicolumn{1}{l|}{\textbf{74.57}} & \textbf{96.20} \\ \hline \multicolumn{1}{l|}{\Citet{paper1}} & \multicolumn{1}{l|}{63.90} & \multicolumn{1}{l|}{96.37} & \multicolumn{1}{l|}{63.65} & \multicolumn{1}{l|}{96.51} & \multicolumn{1}{l|}{60.20} & \multicolumn{1}{l|}{95.23} & \multicolumn{1}{l|}{55.97} & \multicolumn{1}{l|}{84.60} & \multicolumn{1}{l|}{65.56} & \multicolumn{1}{l|}{96.88} & \multicolumn{1}{l|}{59.81} & 93.92 \\ \hline \multicolumn{1}{l|}{Ours} & \multicolumn{1}{l|}{\textbf{70.36}} & \multicolumn{1}{l|}{96.99} & \multicolumn{1}{l|}{\textbf{73.01}} & \multicolumn{1}{l|}{97.36} & \multicolumn{1}{l|}{73.34} & \multicolumn{1}{l|}{96.77} & \multicolumn{1}{l|}{75.88} & \multicolumn{1}{l|}{91.55} & \multicolumn{1}{l|}{\textbf{76.15}} & \multicolumn{1}{l|}{\textbf{97.84}} & \multicolumn{1}{l|}{74.40} & 96.10 \\ \hline \end{tabular} \vspace{0.2cm} \caption{Quantitative evaluation of building extraction and regularization results on the INRIA dataset. 
Scores are obtained by submitting the predictions to https://project.inria.fr/aerialimagelabeling/.} \label{tab:INRIA} \end{table*} \begin{table*}[] \centering \begin{tabular}{lcccccccccccc} \cline{2-13} & \multicolumn{12}{c}{SpaceNet} \\ \cline{2-13} & \multicolumn{4}{c|}{Jacksonville} & \multicolumn{4}{c|}{Tampa} & \multicolumn{4}{c}{Overall} \\ \cline{2-13} & \multicolumn{2}{c|}{IoU} & \multicolumn{2}{c|}{Acc} & \multicolumn{2}{c|}{IoU} & \multicolumn{2}{c|}{Acc} & \multicolumn{2}{c|}{IoU} & \multicolumn{2}{c}{Acc} \\ \cline{2-13} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\mu$} & $\sigma$ \\ \hline \multicolumn{1}{l|}{R2UNet} & \multicolumn{1}{c|}{\textbf{72.85}} & \multicolumn{1}{c|}{7.077} & \multicolumn{1}{c|}{\textbf{96.54}} & \multicolumn{1}{c|}{1.105} & \multicolumn{1}{c|}{\textbf{70.74}} & \multicolumn{1}{c|}{6.056} & \multicolumn{1}{c|}{\textbf{94.90}} & \multicolumn{1}{c|}{1.219} & \multicolumn{1}{c|}{\textbf{71.80}} & \multicolumn{1}{c|}{6.670} & \multicolumn{1}{c|}{\textbf{95.75}} & 1.406 \\ \hline \multicolumn{1}{l|}{\Citet{paper1}} & \multicolumn{1}{c|}{59.17} & \multicolumn{1}{c|}{5.348} & \multicolumn{1}{c|}{94.73} & \multicolumn{1}{c|}{1.693} & \multicolumn{1}{c|}{57.99} & \multicolumn{1}{c|}{6.892} & \multicolumn{1}{c|}{92.58} & \multicolumn{1}{c|}{2.317} & \multicolumn{1}{c|}{58.58} & \multicolumn{1}{c|}{6.197} & \multicolumn{1}{c|}{93.65} & 2.296 \\ \hline \multicolumn{1}{l|}{Ours} & \multicolumn{1}{c|}{70.90} & \multicolumn{1}{c|}{7.551} & \multicolumn{1}{c|}{96.29} & \multicolumn{1}{c|}{1.169} & \multicolumn{1}{c|}{69.04} & \multicolumn{1}{c|}{6.587} & \multicolumn{1}{c|}{\textbf{94.90}} & \multicolumn{1}{c|}{1.286} & \multicolumn{1}{c|}{69.97} & \multicolumn{1}{c|}{7.146} & \multicolumn{1}{c|}{95.50} & 1.463 \\ \hline \end{tabular} \vspace{0.2cm} \caption{Quantitative evaluation of building extraction and regularization results on the SpaceNet dataset.} \label{tab:SpaceNet} \end{table*} \begin{table*}[!htbp] \centering \begin{tabular}{l|c|c|cccc} \hline \multicolumn{3}{c|}{\textbf{Dataset}} & \multicolumn{4}{c}{CrowdAI} \\ \hline \multicolumn{3}{c|}{\textbf{Method}} & \multicolumn{2}{c|}{IoU} & \multicolumn{2}{c}{Acc} \\ \hline \textbf{Baseline} & \textbf{Regularization} & \textbf{Polygonization} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\mu$} & $\sigma$ \\ \hline R2U-Net & - & - & 80.44 & 16.10 & 95.86 & 5.20 \\ \cline{1-3} R2U-Net & \textit{Zorzi et al.} & - & 76.95 & 15.34 & 94.75 & 5.47 \\ \cline{1-3} R2U-Net & Ours & - & 79.87 & 15.93 & 95.57 & 5.28 \\ \cline{1-3} R2U-Net & \textit{Zorzi et al.} & Ours & 76.67 & 13.37 & 94.62 & 5.14 \\ \cline{1-3} R2U-Net & Ours & Ours & 80.03 & 14.24 & 95.55 & 5.09 \\ \hline Mask R-CNN & - & - & 73.22 & 17.84 & 94.38 & 4.77 \\ \cline{1-3} Mask R-CNN & \textit{Zorzi et al.} & - & 71.72 & 17.32 & 93.88 & 4.82 \\ \cline{1-3} Mask R-CNN & Ours & - & 73.57 & 17.65 & 94.34 & 4.74 \\ \cline{1-3} Mask R-CNN & \textit{Zorzi et al.} & Ours & 72.13 & 13.82 & 92.57 & 4.80 \\ \cline{1-3} Mask R-CNN & Ours & Ours & 74.23 & 14.51 & 94.12 & 4.75 \\ \hline \end{tabular} \vspace{0.2cm} \caption{Quantitative evaluation of building regularization and polygonization results on the CrowdAI dataset.} \label{tab:CrowdAI}
\end{table*} \subsubsection{Training Details} Unlike the training approach proposed in \cite{paper1}, where building instances are scaled and forward-propagated through the regularization network one by one, we train our \gls{gl:GAN} using $256 \times 256$ patches cropped from the dataset samples. This helps to learn a generator and discriminator aware of the shape differences between small, medium and big buildings. As ideal masks we exploit the accurate and good-looking building footprints present in the ground truth of the chosen datasets. The model is trained with a batch size of 4 for 140,000 iterations. We set $\alpha=3$, $\beta=1$, $\gamma=3$ in \cref{eq:full_objective}. $\epsilon$ and $\delta$ are kept at $0$ for the first 40,000 batches; they are then linearly increased to $1$ and $175$, respectively, over the following 40,000 batches to keep the learning more stable. The weight matrices $W$ and $\hat{W}$ for the \textit{Potts loss} and the \textit{normalized cut loss} in \cref{eq:regularized} are computed using the same expression and hyper-parameters described in \cite{paper1}. Since the datasets we use for evaluation provide the ground truth already rasterized, the \gls{gl:CNN} used to detect building corners is trained using the building polygons available in OpenStreetMap for the cities of Chicago and Jacksonville. For the initial building segmentation we used \gls{gl:R2UNet} trained with $448 \times 448$ patches randomly cropped from the SpaceNet and INRIA image samples. In CrowdAI we directly train the model using the $300 \times 300$ images provided in the dataset. We also provide some results using Mask R-CNN~\cite{he2017mask} as a baseline, with the pre-trained weights available in~\cite{Mohanty:2018}. During the training of all the networks, we applied standard data augmentation to the images (random rotations and flipping) and we trained all the pipeline models using the Adam optimizer~\cite{kingma2014adam} with the learning rate set to $0.0001$. \subsection{Results} \label{sec:Results} On the \textbf{INRIA} and the \textbf{SpaceNet} datasets we compare against the baseline method and \citet{paper1}. The baseline exploits \gls{gl:R2UNet} as backbone to perform the initial building segmentation. The results are then processed by the regularization method described in \cite{paper1} and by our building extraction method to produce the final footprints. The final scores, based on \gls{gl:IoU} and accuracy, are shown in~\cref{tab:INRIA} and~\cref{tab:SpaceNet}. Our building refinement achieves quantitative results comparable to or, in some test areas, even higher than those of the pure baseline. Our approach, in fact, obtains the highest \gls{gl:IoU} values in the test areas of Bellingham, Bloomington and Tyrol from the INRIA dataset, and achieves accuracies very close to the pure baseline solution on the SpaceNet dataset. This is a sign that the pipeline, made of multiple modules connected in cascade, does not lead to a significant drop in performance. It is worth noting that the method of \citet{paper1} shows a significant \gls{gl:IoU} drop on these two datasets. This is caused by its network architecture, which is not capable of generalizing well to big and complex buildings, as shown in the results in~\cref{fig:results}. On \textbf{CrowdAI} we test both \gls{gl:R2UNet} and Mask R-CNN as baseline networks for the initial segmentation. Again, the proposed regularization achieves results close to those of the pure segmentation network.
The \gls{gl:IoU} and accuracy scores achieved by \citet{paper1} are explainable considering that the CrowdAI dataset is mainly composed of small and midsize constructions with a low number of corners. \subsubsection{Qualitative results} \begin{figure}% \centering {{\includegraphics[width=0.45\linewidth]{imgs/wrong/Screenshot_20191115_141259.png} }}% {{\includegraphics[width=0.45\linewidth]{imgs/wrong/Screenshot_20191115_141249.png} }}% \caption{On the left side: satellite image with occluded constructions. On the right side: result of the regularization network. Extracted footprints with a wrong pose are highlighted in red.}% \label{fig:wrong}% \end{figure} We visualize some building footprints generated with the different approaches in \cref{fig:results}. Building footprints extracted with \cite{paper1} are accurate and visually pleasing if the building has a low number of vertices. Vice versa, if the construction is complex, the network fails to produce a decent building boundary. The algorithm proposed in this paper overcomes this problem, producing accurate and realistic footprints regardless of the building size and complexity. It is worth noting that our polygon extraction algorithm can also deal with inner courtyards, creating a polygon for each building perimeter, as shown in the second row of \cref{fig:results}. Despite the good results obtained in most circumstances, the proposed method is still not capable of extracting sufficient context information to perform a correct regularization in the presence of occlusions. \Cref{fig:wrong} shows a residential area evaluated by Mask2Poly. The presence of the road in front of the constructions arranged in a line would suggest that the occluded buildings are also facing the street, in contrast with the extracted footprints. Embedding a constraint about the disposition and the orientation of all the constructions in the scene would help the regularization network produce a coherent cartographic map of the buildings from satellite or aerial images. \glsresetall \section{Conclusion} \label{sec:Conclusion} In this paper, we presented an approach for building segmentation and regularized polygon extraction, composed of three different and independent neural network modules. The combination of the adversarial and the regularized losses results in an effective geometric constraint for the constructions, and encourages our predicted footprints to match building boundaries. Furthermore, the regularization allows us to extract precise building polygons using a simple but effective \gls{gl:FCN} for corner detection. The proposed method has proved to be capable not only of achieving equivalent or even higher results in terms of IoU and accuracy compared to state-of-the-art segmentation networks, but also of generating realistic and visually pleasing construction outlines that can be used in many cartographic and engineering applications.
\begin{figure*} \centering \begin{subfigure}[t]{\dimexpr0.19\textwidth+20pt\relax} \makebox[20pt]{\raisebox{40pt}{\rotatebox[origin=c]{90}{Inria bellingham}}}% \includegraphics[width=\dimexpr\linewidth-20pt\relax] {imgs/inria/Screenshot_20191114_152549.jpg} \makebox[20pt]{\raisebox{40pt}{\rotatebox[origin=c]{90}{Inria bloomington}}}% \includegraphics[width=\dimexpr\linewidth-20pt\relax] {imgs/inria/Screenshot_20191114_165103.jpg} \makebox[20pt]{\raisebox{40pt}{\rotatebox[origin=c]{90}{CrowdAI}}}% \includegraphics[width=\dimexpr\linewidth-20pt\relax] {imgs/crowdai/Screenshot_20191114_175412.jpg} \makebox[20pt]{\raisebox{40pt}{\rotatebox[origin=c]{90}{SpaceNet}}}% \includegraphics[width=\dimexpr\linewidth-20pt\relax] {imgs/spacenet/Screenshot_20191114_184723.jpg} \caption{Segmentation} \end{subfigure} \begin{subfigure}[t]{0.19\textwidth} \includegraphics[width=\textwidth] {imgs/inria/Screenshot_20191114_152522.jpg} \includegraphics[width=\textwidth] {imgs/inria/Screenshot_20191114_164805.jpg} \includegraphics[width=\textwidth] {imgs/crowdai/Screenshot_20191114_175404.jpg} \includegraphics[width=\textwidth] {imgs/spacenet/Screenshot_20191114_184731.jpg} \caption{\Citet{paper1}} \end{subfigure} \begin{subfigure}[t]{0.19\textwidth} \includegraphics[width=\textwidth] {imgs/inria/Screenshot_20191114_152605.jpg} \includegraphics[width=\textwidth] {imgs/inria/Screenshot_20191114_164752.jpg} \includegraphics[width=\textwidth] {imgs/crowdai/Screenshot_20191114_175356.jpg} \includegraphics[width=\textwidth] {imgs/spacenet/Screenshot_20191114_184737.jpg} \caption{Proposed regularization} \end{subfigure} \begin{subfigure}[t]{0.19\textwidth} \includegraphics[width=\textwidth] {imgs/inria/Screenshot_20191114_152434.jpg} \includegraphics[width=\textwidth] {imgs/inria/Screenshot_20191114_164153.jpg} \includegraphics[width=\textwidth] {imgs/crowdai/Screenshot_20191114_175327.jpg} \includegraphics[width=\textwidth] {imgs/spacenet/Screenshot_20191114_184655.jpg} \caption{Extracted polygons} \end{subfigure} \caption{Building extraction results overlaid on sample areas from the Inria, CrowdAI and SpaceNet datasets.} \label{fig:results} \end{figure*} \printbibliography \end{document}
\section{Introduction} \sloppy \subsection{Setup} Let $f$ be an unknown function observed on a regularly spaced grid of $N=2^J$ points $\{t_i\}$ in the regression model \begin{equation} \label{eq:reg} X_i = f(t_i) + \epsilon_i, \end{equation} where $\epsilon_i \stackrel{\textrm{i.i.d.}}{\sim} N(0,\sigma^2)$, and the noise level $\sigma^2$ is unknown. A popular approach to inference in this model relies on an application of the Discrete Wavelet Transform (DWT) to the data $\{X_i\}$, resulting in the normal means model \begin{equation} \label{eq:model} Y_{j,k}=\beta_{j,k}+\varepsilon_{j,k}, \end{equation} where $\{Y_{j,k}\}$ are the empirical wavelet coefficients, $\{\beta_{j,k}\}$ is the parameter vector of interest formed of the wavelet coefficients of $\{f(t_i)\}$, and $\varepsilon_{j,k} \stackrel{\textrm{i.i.d.}}{\sim} N(0,\sigma^2)$ are unobservable stochastic disturbances (we provide more details in Section~\ref{sec:methodology}). The observations $\{Y_{j,k}\}$ are then de-noised using one of the many possible techniques, yielding upon inversion of the wavelet transform the estimates $\{ \hat{f}(t_i) \}$ of $\{f(t_i)\}$. A rationale for a wavelet approach to regression is the following (see, e.g., \cite{donoho94}): DWT typically `sparsifies' the signal $\{f(t_i)\}$, in that many wavelet coefficients $\beta_{j,k}$ are zero, or nearly so. Since the wavelet decomposition preserves the $L^2$-norm of the signal (\cite{percival00}, equation (95d)), this implies that the transformed signal $\{\beta_{j,k}\}$ will contain some large coefficients, and a contrast with small coefficients will typically be sharper than in the original signal $\{f(t_i)\}$ (cf.~\cite{percival00}, Section 10.1). On the other hand, due to the orthogonality property of DWT, the noise $\{\epsilon_i\}$ in the original observations $\{X_i\}$ gets spread out `uniformly' in the transformed observations $\{Y_{j,k}\}$, in that one still has $\varepsilon_{j,k} \stackrel{\textrm{i.i.d.}}{\sim} N(0,\sigma^2)$. Hence a small absolute magnitude of an observation $Y_{j,k}$ is likely to be an indicator that the corresponding $\beta_{j,k}$ is zero (exactly, or nearly), whereas a large value of $Y_{j,k}$ likely means that it predominantly consists of the signal $\beta_{j,k}$. This forms the basis of various wavelet thresholding or shrinkage methods, which produce estimates of $\beta_{j,k}$'s by thresholding or shrinking small $Y_{j,k}$'s to zero as containing pure noise, and keeping large $Y_{j,k}$'s (exactly or largely) unchanged (\cite{percival00}, Section 10.2). A wavelet-based approach to non-parametric regression leads to excellent practical results due to spatial adaptation properties of wavelets (see \cite{donoho94}). However, there are situations when other estimators are preferable. This can happen for signals that are better representable in bases other than the wavelet basis, e.g., `frequency domain' signals such as the sinusoid. \subsection{Related work} Within the Bayesian paradigm, the notion of sparsity can be naturally modelled through imposing a sparsity-inducing prior distribution on the coefficients $\{\beta_{j,k}\}$. There are two main possibilities to that end. The first is based on discrete mixtures, which model the signal $\{\beta_{j,k}\}$ via a combination of a point mass at zero and an absolutely continuous component elsewhere. The corresponding prior is often referred to as the spike-and-slab prior (see, e.g., \cite{mitchell88}).
In the second approach, absolutely continuous shrinkage priors are used instead; these put a mass around zero and also exhibit heavy tails (see, e.g., \cite{tipping01} or \cite{carvalho10}). While the former approach leads to a correct representation of sparse estimation problems by placing a point mass at zero, truly sparse solutions are not possible with the latter; in case they are desired, they require a further device, e.g.\ some form of thresholding. Nevertheless, with shrinkage priors the point estimates of zero coefficients are still strongly shrunk to zero. Also, shrinkage priors are attractive computationally and have been demonstrated to perform well in various circumstances. Whether real-life signals are truly sparse in the strict sense that their small wavelet coefficients are exactly equal to zero might be debatable. Several Bayesian approaches to wavelet de-noising are discussed in \cite{percival00}, pp.~412--415 and 426--428. However, the method that gained the greatest acclaim in the wavelet de-noising context is the empirical Bayes method of \cite{johnstone05}, which we will refer to as EBayes. EBayes relies on the spike-and-slab prior, with its hyperparameters optimised by maximising the marginal likelihood, see \cite{silverman04}. The simulation studies in \cite{johnstone05} demonstrate overall excellent performance of EBayes, and Bayesian point estimates resulting from it possess a natural shrinkage property. In fact, the coefficients $\beta_{j,k}$ can even be estimated exactly as zero if the posterior median is used as a point estimate, and in that case the solution to the estimation problem is truly sparse. We thus consider EBayes as a benchmark in this article. This is in line with earlier works in the sparse normal means model, see, e.g., \cite{carvalho10} and \cite{polson11}, who studied the horseshoe prior. \subsection{Structured sparsity} \label{sec:setup} It has been observed in the literature that with DWT the sparsification of the signal $\{f(t_i)\}$ occurs in a structured manner. By this we mean that non-zero wavelet coefficients tend to cluster instead of being scattered in a completely random fashion across the signal $\{\beta_{j,k}\}$; see, e.g., Section 10.8 in \cite{percival00}, or Appendix \ref{sec:literature}, where we have collected several relevant quotes from the literature. Here we illustrate the phenomenon on a simple but representative example (cf.~\cite{cai01}). Consider Figure \ref{fig:bumps}, where we plotted the wavelet coefficients computed from $N=512$ values of the Bumps function (see \cite{donoho95}). It is seen from the plot that when arranged according to levels of DWT, the wavelet coefficients with large absolute magnitudes occur in clusters, namely approximately at those locations where the function undergoes abrupt changes. Additionally, many coefficients are quite small or zero. \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{./img/bumps_dwt_coefficients.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{ DWT coefficients of $N=512$ values of the Bumps function arranged by levels of the transform. Periodic boundary conditions and the $\operatorname{LA}(8)$ filter corresponding to Daubechies' least asymmetric wavelet with $4$ vanishing moments (Symmlet $4$) are used to compute DWT. The number of computed levels of the transform is $J_0=4$.
The scaling coefficients at level $4$ are displayed at the top (and can be ignored at present), followed by wavelet coefficients (from levels $4$ to $1$) and the original data. In each level, the coefficients are aligned via circular shifting so as to correspond to the events in the original signal (a precise description of their arrangement is given in Sections 4.8 and 4.11 in \cite{percival00}); furthermore, the heights of vertical lines emanating from a horizontal zero line give relative sizes of coefficients, with zero coefficients not displayed. See Subsection \ref{sec:dwt} below for some additional details on DWT. For better visibility and economy of space, the individual panels are made the same size, so that their vertical scales are in fact different.} \label{fig:bumps} \end{figure} Given that wavelet coefficients typically exhibit structures beyond `mere sparsity', it appears natural to incorporate in inferential procedures some of their additional features. A closely related question that one may ask is: Does ignoring possible local structures in the signal produce scientifically satisfactory answers? In that respect, the domain expert knowledge in, e.g., audio signal processing indicates that a failure to account for the structure of the signal in de-noising applications may result in solutions that are unacceptable to a human ear. Likewise, \cite{donoho95:denoise} stresses the importance of reducing the extent of undesirable noise-induced structures like `ripples', `blips' and oscillations in the inferred signal, citing geophysical and astronomical studies, where such effects may lead to interpretational difficulties. Somewhat disappointingly, frequentist estimation methods that account for clustering of non-zero wavelet coefficients via block thresholding, such as NeighBlock and NeighCoeff of \cite{cai01}, have been shown to perform worse in practice than EBayes, which does not assume any additional structure beyond sparsity. In this paper, we propose a Bayesian wavelet de-noising method that accounts for the existence of special structures in wavelet coefficients. We compare it to EBayes and show via simulation and real data examples that our estimator, which we baptised the caravan estimator, measures up well to EBayes, often surpassing it substantially as far as estimation accuracy is concerned (in terms of the squared estimation error). Nevertheless, the caravan estimator does not achieve a uniform improvement (i.e.\ over all simulation scenarios) upon EBayes. \subsection{Organisation} The rest of the paper is organised as follows: in Section \ref{sec:methodology} we introduce in detail the statistical problem and our Bayesian methodology to tackle it. Section \ref{sec:simulations} studies the performance of our method on synthetic data examples and compares it to the main alternative: EBayes. Section \ref{sec:nmr} deals with real data examples. Section \ref{sec:conclusions} summarises our findings and outlines directions for future research. In Appendix~\ref{sec:literature} a small compendium of quotes from the literature, illustrating some of the points we made in this paper, is presented. Appendix \ref{app:gibbs} gives details of the Gibbs sampler we use to evaluate the posterior, while Appendices~\ref{app:hyper} through \ref{app:addsim} contain further details on our simulation study. \subsection{Notation} $N(\mu,\sigma^2)$ denotes the normal distribution with mean $\mu\in\mathbb{R}$ and variance $\sigma^2>0$.
$\operatorname{Exp}(\lambda)$ is the exponential distribution with rate parameter $\lambda>0$, whose density is $x \mapsto \lambda e^{-\lambda x},$ for $x>0$. $\operatorname{Gamma}(a,b)$ is the gamma distribution with shape parameter $a$ and rate parameter $b>0$, whose density is \[ x \mapsto \frac{b^a}{\Gamma(a)} x^{a-1}e^{-b x}, \quad x>0, \] where $\Gamma$ is the gamma function. The inverse gamma distribution with shape parameter $a>0$ and scale parameter $b>0$ is denoted by $\operatorname{IG}(a,b)$. Its density is \[ x \mapsto \frac{b^a}{\Gamma(a)} x^{-a-1}e^{-b / x}, \quad x>0. \] In conformance with standard Bayesian notation, we often denote random variables with lowercase letters, such as $x$, and write the corresponding density as $p(x)$. Conditioning of $x$ on $y$ is denoted by $x\mid y$, with $p(x\mid y)$ standing for the conditional density of $x$ given $y$. \section{Methodology} \label{sec:methodology} In this section we provide a detailed description of our Bayesian methodology for wavelet de-noising. \subsection{Discrete wavelet transform} \label{sec:dwt} DWT is an orthogonal transformation applied to a finite dyadic sequence of numbers (that the data length $N$ is a dyadic number, $N=2^J$, say, is a restriction, although there are some ad hoc ways to deal with it; see, e.g., pp.~141--145 in \cite{percival00}). Starting with the data $x=(x_0,\ldots,x_{N-1})$, DWT can be conveniently described through successive applications of special low- and high-pass filters $\mathcal{H}=\{h_k\}$ and $\mathcal{G}=\{g_k\}$ (referred to as quadrature mirror filters) in combination with dyadic decimation or downsampling steps; jointly, these constitute the so-called pyramid algorithm. Care has to be exercised when computing DWT coefficients at the boundaries; we use periodic boundary conditions throughout. Define $v_0$ to be the original data $x$, and let $(\downarrow 2)$ be the downsampling operator. \cite{percival00} use odd decimation, retaining odd-indexed entries of a given sequence; thus for $y=(\ldots,y_{-2},y_{-1},y_0,y_1,y_2,\ldots)$, say, $(\downarrow 2)y=(\ldots,y_{-3},y_{-1},y_1,y_3,\ldots)$. This is a matter of convention, and even decimation would have been an equally valid choice. The scaling coefficients at level $1$ are $v_1 = (\downarrow 2) \mathcal{H} v_0$, whereas the wavelet or detail coefficients are given by $w_1=(\downarrow 2) \mathcal{G} v_0.$ Here the notation $\mathcal{H} v_0$ stands for circular convolution of $v_0$ with $\mathcal{H}$, and similarly for $\mathcal{G}$. Then one proceeds inductively: with $v_j$ and $w_j$ already defined, one sets $v_{j+1} = (\downarrow 2)\mathcal{H} v_j$ and $w_{j+1} = (\downarrow 2) \mathcal{G} v_j$. The process can be either brought to completion, the final processed level being $j=J$, or stopped at level $j=J_0 < J$; in this last case one talks about a partial DWT (a partial DWT does not require $N$ to be a dyadic number: it is enough that $N$ be an integer multiple of $2^{J_0}$). For a fixed $j$, the vectors $v_j$ and $w_j$ have length $N/2^j$, and their elements can be enumerated as $v_{j,k}$ and $w_{j,k}$, respectively, for $k=0,1,\ldots,N/2^j-1$. The scaling coefficients $v_{J_0}$ can be thought of as corresponding to a low-frequency component of the signal $x$, whereas the wavelet coefficients $w_1,\ldots,w_{J_0}$ correspond to the high-frequency components.
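For readers who prefer code to formulas, one step of the pyramid algorithm just described can be sketched in a few lines. The following Python/NumPy fragment is a plain, unoptimised illustration of circular filtering followed by odd decimation; it is not the implementation of any of the software packages used later in the paper.
\begin{verbatim}
import numpy as np

def pyramid_step(v, h, g):
    # One DWT level with periodic boundaries: circularly convolve the
    # input with the low-pass filter h and the high-pass filter g,
    # then keep the odd-indexed entries (odd decimation).
    n = len(v)
    Hv = np.array([sum(h[l] * v[(t - l) % n] for l in range(len(h)))
                   for t in range(n)])
    Gv = np.array([sum(g[l] * v[(t - l) % n] for l in range(len(g)))
                   for t in range(n)])
    return Hv[1::2], Gv[1::2]   # v_{j+1} (scaling), w_{j+1} (wavelet)
\end{verbatim}
Iterating \texttt{pyramid\_step} on the successive scaling coefficients $v_1, v_2, \ldots$ yields a (partial) DWT down to the desired level $J_0$.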
When stacked together, $v_{J_0}$ and $w_{J_0},\ldots,w_1$ constitute an orthogonal transform of the data $x$; the latter can be easily recovered via the inverse pyramid algorithm. Both DWT and its inverse can be evaluated efficiently in $O(N)$ multiplications. Conceptually, the wavelet detail coefficients $w_j$ can be associated with changes in $x$ at the scale $2^{j-1}$, i.e., loosely speaking, with differences of averages formed of $2^{j-1}$ successive values in $x$. On the other hand, $v_{J_0}$ is associated with changes in $x$ at scale $2^{J_0}$ and higher; in fact, if $J_0=J$, $v_{J_0}$ is a (rescaled) sample mean of $x$. Let $W$ be a matrix corresponding to DWT applied to the data $x$. Then the vector $w = (w_1,\ldots,w_{J_0},v_{J_0})$ of wavelet and scaling coefficients can be obtained as $w = {W} x$ (analysis equation), and furthermore, due to orthogonality of $W$, $x = {W}^T w$ (synthesis equation). It holds that \begin{equation} \label{eq:decomp:sum} x = \sum_{j=1}^{J_0} {W}_j^T w_j + {V}_{J_0}^T v_{J_0} = \sum_{j=1}^{J_0} D_j + S_{J_0}, \end{equation} where the matrices ${W}_j$, $j=1,\ldots,J_0$, and ${V}_{J_0}$ are obtained by partitioning ${W}$ into submatrices with the number of rows commensurate with $w_1,\ldots,w_{J_0},v_{J_0}$; cf.~\cite{percival00}, Sections 4.1 and 4.7. The $N$-dimensional vectors $D_j = {W}_j^T w_j$ are called wavelet details, whereas $S_{J_0} = {V}_{J_0}^T v_{J_0}$ is referred to as the $J_0$th level wavelet smooth. Together, $D_1,\ldots,D_{J_0}$ and $S_{J_0}$ define a multiresolution analysis (MRA) of $x$, which can be synthesised back from these components by a simple addition, see equation \eqref{eq:decomp:sum}. The detail $D_j$ corresponds to the portion of the synthesis $x = {W}^T w$ attributable to scale $2^{j-1}$, whereas the smooth $S_{J_0}$ can be viewed as a smoothed version of $x$ and is associated with changes at scale $2^{J_0}$ and higher. See Figure~\ref{fig:bumps:mra} for an illustration of MRA for the Bumps function. \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{./img/bumps_dwt_mra.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{MRA of the Bumps function discretely sampled on a uniform grid of $N=512$ points. DWT with the $\operatorname{LA}(8)$ filter and periodic boundary conditions was used, and $J_0=4$ levels of the transform were computed. The smooth $S_4$ and details $D_j$'s are stacked on top of each other, and the bottom plot gives the original data $X$. For better visibility, the vertical scales of the plots are made different, so that each panel is of equal height. Note that $S_4$ indeed has the appearance of a (rescaled) smooth of the data.} \label{fig:bumps:mra} \end{figure} For a detailed exposition of wavelet transforms, the reader may consult any of the numerous reference works on the topic, e.g.\ \cite{percival00}. Furthermore, we implicitly assume that the wavelet coefficients have been realigned so as to approximately correspond to the output of a zero-phase filter. A filter is called zero-phase if its transfer function is real-valued at Fourier frequencies; this allows one to associate with the $Y_i$'s the physically meaningful time scale of the original data $\{X_i\}$ (see pp.~108--110 in \cite{percival00}). Admittedly, though, the statistical impact of this adjustment was not particularly noticeable in the simulation examples we considered.
For Daubechies' filters (which are the ones used in the present work), a proper alignment can be achieved by circularly shifting the output of a filtering step by a specified amount, depending on the filter and the transform level, as discussed on pp.~146--147 in \cite{percival00}. \subsection{Statistical model} \label{sec:likelihood} In our regression context, upon applying DWT to the observations $\{X_i\}$, one obtains empirical wavelet coefficients $\{Y_{j,k}\}$ arranged according to levels $j=1,\ldots,J_0$. Recall from Equation~\eqref{eq:model} that the signal wavelet coefficients are denoted by $\{\beta_{j,k}\}$. The statistical model for level $j$ wavelet coefficients of the data is a Gaussian sequence model, \[ Y_i \mid \beta_i \sim N(\beta_i,\sigma^2), \quad i=1,\ldots,n, \] where in order to ease our notation, we have replaced the double index $j,k$ in \eqref{eq:model} with a single index $i$ (since $j$ stays fixed), and have also set $n=N/2^j$. Following a standard wavelet de-noising approach, originally proposed in \cite{donoho94}, we will estimate the error standard deviation $\sigma$ by the median absolute deviation (MAD, in practice rescaled by the factor $1/0.6745$, which makes it consistent for Gaussian noise) computed from the finest ($j=1$) level of DWT of the data, i.e.\ the empirical wavelet coefficients $\{Y_{1,k}\}$. The intuition underlying this estimate is that the majority of wavelet coefficients of the signal $\{f(t_i)\}$ at level $1$ will be zero, so that $Y_{1,k}$'s are mostly pure noise; a few outlier non-zero entries $\beta_{1,k}$ will not adversely affect a robust estimate of the error standard deviation such as the MAD. The estimate will be denoted by $\hat{\sigma}$. In principle, upon equipping $\sigma$ with a prior, it is also possible to take a fully Bayesian approach to estimate this parameter. However, as can be seen below, our proposal is simpler, since it allows us to infer our primary objects of interest, the wavelet coefficients $\{\beta_{j,k}\}$, level by level in DWT. This is convenient, e.g.\ because different levels of DWT are expected to have different sparsity degrees, or because such a subdivision of the inference problem into smaller subtasks may speed up the algorithm we propose below. Once we have estimated the wavelet coefficients, we also need the scaling coefficients at level $J_0$ in order to invert DWT and obtain an estimate of the original signal $\{f(t_i)\}$. Following \cite{donoho94}, to that end it is common to use empirical scaling coefficients computed from the data $\{X_i\}$. Thereby the portion in $\{X_i\}$ attributable to a `coarse' scale $J_0$ is automatically classified as signal (\cite{percival00}, p.~418). Estimation of scaling coefficients via empirical scaling coefficients admits a Bayesian interpretation: assuming scaling coefficients are a priori independent and equipped with a vague $N(0,\gamma)$ prior, $\gamma\rightarrow\infty$, their posteriors are again normal (conditional on the data and the error variance $\sigma^2$), with means equal to empirical scaling coefficients. The likelihood of the data $\{Y_i\}$ in parameters $\{\beta_i\}$ (with an estimate $\hat{\sigma}$ plugged in instead of $\sigma$) is \[ \mathcal{L}_n( \{\beta_i\} ) = (2\pi)^{-n/2} \hat{\sigma}^{-n} e^{-\sum\limits_{i=1}^n (Y_i-\beta_i)^2/(2\hat{\sigma}^2)}. \] \subsection{Prior} \label{sec:prior} Fix hyperparameters $\{\theta_i:i=1,\ldots,n\}$, $\{\tau_i:i=1,\ldots,n\}$, and assume that a priori \begin{equation} \label{eq:prior:beta} \beta_i \mid \tau_i, \theta_i \sim N\left(0, \theta_i \tau_i \right).
\end{equation} The hyperparameters $\{ \theta_i \}$ will form an inverse gamma Markov chain, defined as follows (see \cite{cemgil07}): fix hyperparameters $a_0, b_0, a>0$, let $\{\lambda_i:i=0,\ldots,n-1\}$ be a sequence of latent variables, and consider a Markov chain \begin{equation} \label{eq:chain} \lambda_0,\theta_1,\lambda_1,\theta_2,\lambda_2,\ldots,\lambda_{n-1},\theta_n \end{equation} with the initial and transition distributions \begin{align*} \lambda_0 & \sim \operatorname{IG}(a_0, b_0),\\ \theta_i \mid \lambda_{i-1} &\sim \operatorname{IG}\left( a, \frac{a}{\lambda_{i-1}} \right), \quad i=1,\ldots,n,\\ \lambda_i \mid \theta_i & \sim \operatorname{IG}\left( a, \frac{a}{\theta_i } \right), \quad i=1,\ldots,n-1. \end{align*} This definition induces a dependence structure in $\{\beta_i\}$, and ensures a degree of continuity in the absolute magnitudes of $\beta_i$'s. In fact, as explained in \cite{cemgil07}, the variables $\{\theta_i\}$ are positively correlated. Thus, e.g., a large value of $\theta_i$ is likely to go paired with a large value of $\theta_{i+1}$, which by \eqref{eq:prior:beta} increases the likelihood of a similar pairing between the absolute magnitudes of $\beta_i$ and $\beta_{i+1}$ (the latent variables $\{\lambda_i\}$ are used to achieve positive correlation between $\theta_i$'s, while retaining computational tractability of the approach; see \cite{cemgil07}). In Figure \ref{fig:caravan:sample} we display one realisation of the sequence $\{\beta_i\}$ based on the chain \eqref{eq:chain}. We do not imply that real life signals follow an inverse gamma chain, but simply that the latter provides a computationally convenient means for encoding possible dependencies present in the wavelet coefficients. The hyperparameter $a$ controls the amount of smoothing in the inverse gamma chain, with small values corresponding to less smoothing; we assume $a \sim \operatorname{Gamma}(a_a,b_a)$. For a statistical use of inverse gamma chains outside the sparsity context see, e.g., \cite{gugu18:ppp}, \cite{gugu18:vol} and \cite{gugu18:micro}. \begin{remark} Note that our construction proceeds via creating dependence between absolute magnitudes of the coefficients $\{\beta_i\}$. A glance at Figure \ref{fig:bumps} shows that for stylised real-like signals, large positive coefficients may very well cluster with large negative coefficients, and in that sense our approach is natural. In fact, a similar pattern can be observed in real signals as well, such as the electrocardiogram data in Figure 127 in \cite{percival00}, but there it would have been a stretch of imagination to pretend the observations are noise-free. \end{remark} \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{./img/caravan_sample.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{A realisation of the sequence $\{\beta_i\}$ of length $n=100$, using $a=5$, $\lambda_0=0.5$ and $\{\tau_i=1,i=1,\ldots,n\}$.} \label{fig:caravan:sample} \end{figure} The parameters $\{\tau_i\}$ are local shrinkage parameters: each $\tau_i$ acts individually on $\beta_i$, and a small value of $\tau_i$ encourages shrinkage of $\beta_i$ towards zero. A different perspective is that, marginally over $\tau_i$, this amounts to modelling the coefficients with a $t$-type distribution, which has heavier tails than the normal distribution. By linking $\{\tau_i\}$ via a global shrinkage parameter $\tau_{gl}$, we introduce a global control on the sparsity level of the sequence $\{\beta_i\}$.
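Returning to the chain \eqref{eq:chain}: to make the construction concrete, the following short NumPy sketch draws one realisation of the chain together with the coefficients \eqref{eq:prior:beta}. It mirrors the setting of Figure \ref{fig:caravan:sample} ($a=5$, $\lambda_0=0.5$, $\tau_i\equiv 1$), but it is an illustration only and not the code used in our experiments; it relies on the elementary fact that if $Y \sim \operatorname{Gamma}(a,1)$, then $b/Y \sim \operatorname{IG}(a,b)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def rIG(a, b):
    # Draw from IG(a, b) via b / Gamma(a, 1).
    return b / rng.gamma(a)

def sample_caravan(n, a, lambda0, tau=1.0):
    theta = np.empty(n)
    lam = lambda0
    for i in range(n):
        theta[i] = rIG(a, a / lam)        # theta_i | lambda_{i-1}
        if i < n - 1:
            lam = rIG(a, a / theta[i])    # lambda_i | theta_i
    # beta_i | theta_i, tau_i ~ N(0, theta_i * tau_i)
    return rng.normal(0.0, np.sqrt(theta * tau))

beta = sample_caravan(n=100, a=5.0, lambda0=0.5)
\end{verbatim}
The positive correlation of the $\theta_i$'s translates into visible runs of large and of small values of $|\beta_i|$, as in Figure \ref{fig:caravan:sample}.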
For the local shrinkage parameters we specifically assume \[ \tau_i \mid \tau_{gl} \stackrel{\textrm{i.i.d.}}{\sim} \operatorname{IG}(\tau_{gl},\tau_{gl}), \quad i=1,\ldots,n, \] with $\{\tau_i\}$ conditionally independent of the other parameters in the model, given $\tau_{gl}$. In turn, the hyperparameter $\tau_{gl}$ is equipped with an independent $\operatorname{Gamma}(a_{gl},b_{gl})$ prior. By the Markov property and the various independence assumptions we made, the joint prior on $\{\beta_i\}$, $\{\lambda_i\}$, $\{\theta_i\}$, $\{\tau_i\}$, $\tau_{gl}$ and $a$ factorises as \begin{multline*} p(\tau_{gl}) \left\{ \prod_{i=1}^n p(\tau_i \mid \tau_{gl} ) \right\} \left\{ \prod_{i=1}^n p(\beta_i \mid \theta_i, \tau_i) \right\} \\ \times p(\lambda_0) p(a) \left\{ \prod_{i=1}^{n-1} p(\theta_i \mid \lambda_{i-1}, a) p(\lambda_i \mid \theta_i, a) \right\} p(\theta_n \mid \lambda_{n-1}, a). \end{multline*} Given the sequential nature of the definition of our prior, we term it the caravan prior, see Figure \ref{fig:caravan} for a visualisation. \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{./img/caravan.jpeg} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{Passage de caravane \`a Smyrne, by Jean-\'Emile Laboureur, 1911--1912. Biblioth\`eque nationale de France, d\'epartement Estampes et photographie, FOL-EF-465 (3). Source: \url{http://gallica.bnf.fr} / BnF. Public domain.} \label{fig:caravan} \end{figure} \begin{remark} Our construction of the Markov chain prior is inspired by the inverse gamma Markov chain in \cite{cemgil07}. However, it is different from the approach there, in that we also employ local shrinkage parameters $\{\tau_i\}$ linked through the global shrinkage hyperparameter $\tau_{gl}$. The two sequences $\{\theta_i\}$ and $\{\tau_i\}$ moderate or enhance each other's effects, and in a way our approach stands halfway between \cite{cemgil07} and the more conventional Bayesian approaches to wavelet de-noising proposed in the statistical literature. The parameter $a$ of the Markov chain prior fulfils a double role: on the one hand it governs the strength of dependence between realisations of the coefficients $\beta_i$'s; on the other hand, it affects their absolute magnitudes. A large $a$ results in a priori strongly dependent $\beta_i$'s, but also encourages them to take large values. The parameters $\{\tau_i\}$ give an additional handle to control absolute magnitudes of $\beta_i$'s, by being decoupled from the dependence structure. A further important difference of our work from the line of research in \cite{cemgil07} and \cite{dikmen10} is that ours concentrates on the one-dimensional wavelet transform, whereas theirs deals with transforms relevant in audio signal processing, e.g., the modified discrete cosine transform, or the Gabor transform. We provide a detailed simulation study of our approach in Section \ref{sec:simulations}, the results and conclusions of which cannot be directly read off \cite{cemgil07} and \cite{dikmen10}. Importantly, we benchmark de-noising results against the EBayes method. \end{remark} \begin{remark} The idea of postulating an a priori dependence between coefficients $\{\beta_i\}$ of a sparse signal has already appeared in the statistical literature.
Thus, e.g., in the audio signal processing context, \cite{wolfe04} model their parameters $\{\beta_i\}$ with the spike-and-slab prior \[ p(\beta_i \mid \sigma_{\beta_i}, \gamma_i) = (1-\gamma_i) \delta_0(\beta_i) + \gamma_i \phi( \beta_i ; 0, \sigma_{\beta_i}^2), \quad \gamma_i \in \{0,1\}, \] and impose a Markovian structure on the binary sequence $\{\gamma_i\}$; independent inverse gamma priors are assigned to the variances $\{\sigma_{\beta_i}^2\}$. This is different from our approach inasmuch as the spike-and-slab prior is different from the shrinkage prior. We also mention that there is a substantial body of signal and image processing and compression literature in which dependence among wavelet coefficients is exploited in some way. See, e.g., \cite{crouse98} and references therein (this paper a priori models wavelet coefficients as discrete mixtures with a hidden state variable, and assumes the hidden states form a Markov chain). \end{remark} \subsection{Gibbs sampler} The posterior for our approach is obtained from the likelihood in Subsection \ref{sec:likelihood} and the prior in Subsection \ref{sec:prior}. The posterior inference can be performed via the Gibbs sampler. In fact, as stated in Lemma \ref{lem:full:cond} in Appendix \ref{app:gibbs}, all the full conditional distributions in our model, except those of the shrinkage parameters $\tau_{gl}$ and $a$, belong to standard unimodal families and are easy to sample from. The parameters $\tau_{gl}$ and $a$ can be sampled using Metropolis-within-Gibbs steps, as explained in Appendix \ref{app:gibbs}. Further details on this algorithm can be found, e.g., in \cite{gelfand90}. \section{Synthetic data examples} \label{sec:simulations} In this section we investigate the performance of the caravan prior via representative simulation examples. Results for the DWT and MODWT de-noising are given in Subsections \ref{subsec:dwt} and \ref{subsec:modwt}. Furthermore, for readability purposes, some additional details and simulation results are deferred to Appendix \ref{app:addsim}. \subsection{Generalities} We implemented the caravan method in {\bf Julia} (see \cite{bezanson17}). The code is available under \cite{zenodocaravan18}. For wavelet transforms we used the {\bf wavelets} package in {\bf R}, see \cite{wavelets} (at the moment of writing this paper, the native {\bf Julia} package for the wavelet transform is still under development), while the plots were produced with the {\bf ggplot2} package, see \cite{ggplot2}. Simulations were performed on a Macbook Air with $1.8$ GHz Intel Core i5 processor and $4$ GB $1600$ MHz DDR3 memory, running macOS High Sierra (version $10.13.5$), and on a Lenovo with $1.7$ GHz Intel Core i5-8350U processor and $8$ GB RAM, running Windows $10$ Enterprise. Given its excellent behaviour and overall superiority over various competitors, EBayes was employed for benchmarking the caravan estimator. In short, EBayes a priori postulates that the coefficients $\beta_i \stackrel{\textrm{i.i.d.}}{\sim} (1-\lambda) \delta_0(\beta_i) + \lambda p(\beta_i),$ where $p$ is a heavy-tailed density. A Laplace density with scale parameter $a$ compares well to other possible choices of $p$. The method proceeds by estimating the hyperparameters, here $\lambda$ and $a$, by maximising the marginal likelihood, and then computing empirical Bayes estimates of $\beta_i$ (using the estimated hyperparameters). This constitutes a straightforward and numerically stable procedure.
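To sketch the last step concretely (in the case of unit noise variance): writing $\phi$ and $\Phi$ for the standard normal density and distribution function, $\widetilde{\Phi}=1-\Phi$, and $g$ for the convolution of $\phi$ with the Laplace density with scale parameter $a$, the criterion maximised over $(\lambda,a)$ is the marginal log-likelihood \begin{equation*} \ell(\lambda,a)=\sum_{i=1}^n \log\left\{ (1-\lambda)\phi(Y_i)+\lambda g(Y_i) \right\}, \qquad g(y)=\frac{a}{2}\, e^{a^2/2} \left\{ e^{-ay}\,\Phi(y-a)+e^{ay}\,\widetilde{\Phi}(y+a) \right\}; \end{equation*} cf.~\cite{silverman04} for the underlying computations and the resulting posterior formulae.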
EBayes is implemented in the {\bf EbayesThresh} package in {\bf R}, see \cite{JS05}. We used it with settings similar to those in \cite{JS05} and \cite{johnstone05}; in particular, the absolutely continuous part of the spike-and-slab prior assigned to wavelet coefficients $\{\beta_{j,k}\}$ was the Laplace prior with a scale parameter estimated by the empirical Bayes method, and the posterior mean and median were employed as point estimates. The wavelet transform fed to EBayes was computed via the {\bf waveslim} package, see \cite{waveslim} (DWT computed by both the {\bf wavelets} and {\bf waveslim} packages is identical, since both packages rely on the algorithms in \cite{percival00}. However, {\bf EbayesThresh} does not support the {\bf wavelets} package; on the other hand, the latter has some functionalities we found useful). Point estimates for the caravan method were the posterior mean and median. Markov chains for the caravan method were run for $30\, 000$ iterations ($100\, 000$ iterations for the Blocks and HeaviSine signals, see below), with the first third of the samples discarded as a burn-in. No thinning was used, but this is of course a possibility. The Metropolis-within-Gibbs steps of the caravan method were scaled to ensure acceptance rates in the range of $25-55\%$. Hyperparameters used for the caravan prior are given in Appendix \ref{app:hyper}. Our strategy for generating noisy signals was as follows: sample a given function $f$ on a uniform dyadic grid of $N=512$ points $\{t_i = i/512:i=1,\ldots,512\}$, and add i.i.d.\ $N(0,\sigma^2)$ noise to the resulting values. Next, DWT was performed on the noisy data to yield the model \eqref{eq:model}. The noise standard deviation was set to $\sigma=\operatorname{SD}(\{f(t_i)\})/\operatorname{SNR}$, with $\operatorname{SD}$ standing for the sample standard deviation. We used two values for the signal-to-noise ratio: low $\operatorname{SNR}=3$ and high $\operatorname{SNR}=7$. Finally, for DWT we used the $\operatorname{LA}(8)$ filter; this choice is often reasonable in practice, see p.~136 in \cite{percival00}. The number of levels of the DWT decomposition was $J_0=6$. The quality of estimation results with DWT in fact depends on an appropriate choice of the filter, as well as the number of de-noised levels of the transform; some practical guidelines for such choices are given in Section 4.11 in \cite{percival00}. A mechanical approach to choices such as these cannot be recommended. As the criterion to assess performance of the various wavelet de-noising methods, we employed the squared error \begin{equation} \label{eq:sqe} \sum_{i=1}^n (\hat{f}(t_i)-f(t_i))^2, \end{equation} for $\hat{f}$ an estimate of $f$, which we averaged over replicate simulation runs. \subsection{Test functions} \label{subsec:test} The test functions $f$ we considered were the classical test functions named Bumps, Blocks, Doppler and HeaviSine (see \cite{donoho95}), which reproduce stylised features of signals encountered in various applications; all the expressions are collected in Appendix \ref{app:test}. In comparison to the original definitions, we rescaled the test functions, so that the signal in each case had standard deviation $1$. We plot the (rescaled) functions in Figure \ref{fig:functions}.
\begin{figure} \begin{center} \includegraphics[width=0.425\textwidth]{./img/bumps.png} \includegraphics[width=0.425\textwidth]{./img/blocks.png} \medskip \includegraphics[width=0.425\textwidth]{./img/doppler.png} \includegraphics[width=0.425\textwidth]{./img/heavisine.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{Top row: Bumps and Blocks functions. Bottom row: Doppler and HeaviSine functions.} \label{fig:functions} \end{figure} \subsection{Standard discrete wavelet transform} \label{subsec:dwt} We report estimation errors for the DWT (averaged over $50$ independent simulation runs) in Table~\ref{table:dwt}; the names of the test functions there are abbreviated in the obvious way. While standard deviations are not displayed in these and subsequent tables, they were circa $10-20\%$ of the estimated values. It is seen from the tables that the caravan method does substantially better than EBayes for the Bumps and Doppler signals. The results are indecisive for the HeaviSine signal and equally split for the Blocks, with one of the estimators being better than the other in one of the noise settings. Overall performance of the caravan method is arguably superior to that of EBayes, with the former achieving a $10-30\%$ reduction in the estimation error over the latter. Even in those cases when EBayes has a smaller estimation error, it never manages to beat the caravan estimator by too wide a margin. In terms of computational time, de-noising a single data set with the caravan method takes ca.~$1.5$ minutes (when the Gibbs sampler is run for $30\, 000$ iterations), which is reasonable on its own terms; EBayes is substantially faster, though, with its computational time being on the order of seconds instead of minutes. {\small \begin{table} \begin{center} \captionsetup{width=0.85\textwidth, font=small} \caption{Average squared errors (over $50$ simulation runs) for various test functions and methods. The sample size is $N=512$, the $\operatorname{LA}(8)$ filter is used, and periodic boundary conditions are imposed. The number of DWT levels equals $J_0=6$. The minimal average squared error in each setting is highlighted in italics and blue. The values are rounded off to one decimal place.} \begin{tabular}{lrrrr@{\hskip 0.25in}rrrr} \toprule & \multicolumn{4}{c}{{\bf {\hskip -0.25in} Low noise}} & \multicolumn{4}{c}{{\bf High noise}}\\ \cmidrule(l{0.1in}r{0.35in}){2-5} \cmidrule(l{0.1in}r{0.1in}){6-9} {\bf Method} & {\bf bmp} & {\bf blk} & {\bf dpl} & {\bf hvs} & {\bf bmp} & {\bf blk} & {\bf dpl} & {\bf hvs} \\ \midrule Caravan (mean) & {\color{blue} \it 3.9} & {\color{blue} \it 3.5} & { \color{blue} \it 1.8} & {\color{blue} \it 1.2} & {\color{blue} \it 21.0} & 19.4 & {\color{blue} \it 8.4} & {\color{blue} \it 4.0}\\ Caravan (median) & {\color{blue} \it 3.9} & 3.6 & {\color{blue} \it 1.8} & 1.3 & 21.3 & 20.3 & 8.7 & 4.2\\ EBayes (mean) & 4.9 & 3.8 & 2.9 & {\color{blue} \it 1.2} & 22.8 & {\color{blue} \it 18.8} & 12.0 & 4.3\\ EBayes (median) & 5.6 & 4.3 & 3.3 & {\color{blue} \it 1.2} & 25.9 & 20.6 & 13.0 & {\color{blue} \it 4.0}\\ \bottomrule \end{tabular} \label{table:dwt} \end{center} \end{table} } It is instructive to display estimation results in one simulation run for the Doppler signal ($\operatorname{SNR}=7$). See Figure \ref{fig:doppler:noisy} for the noisy signal and de-noising results. The caravan estimate manages to pick up the high-frequency oscillations of the signal in a neighbourhood of zero noticeably better than EBayes does.
This is especially apparent from the plot of absolute deviations of both estimates from the Doppler function, and constitutes a remarkable achievement. \begin{figure} \begin{center} \includegraphics[width=0.425\textwidth]{./img/doppler_noisy.png} \includegraphics[width=0.425\textwidth]{./img/doppler_caravan_dwt.png} \medskip \includegraphics[width=0.425\textwidth]{./img/doppler_js_dwt.png} \includegraphics[width=0.425\textwidth]{./img/doppler_devs.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{Top row (from left to right): Noisy observations on the Doppler function (sample size $N=512$ and $\operatorname{SNR}=7$), and the caravan estimate (posterior mean) superimposed on the Doppler function. The Doppler function is in red, the estimate is in blue. Bottom row (from left to right): EBayes (posterior mean) superimposed on the Doppler function (the colours are as in the case of the caravan estimate plot), and absolute deviations of the caravan and EBayes estimates from the Doppler function (in green and in brown, respectively). De-noising is via DWT with $J_0=6$ levels and the $\operatorname{LA}(8)$ filter.} \label{fig:doppler:noisy} \end{figure} To highlight one advantage of the caravan estimator over EBayes, we considered the following simulation experiment: in the $\operatorname{SNR}=7$ setting, we artificially increased measurement errors for two data points of the Bumps function in places where it is flat, in fact zero; the indices of the points were $i = 280$ and $470$. De-noising results are reported in Figure \ref{fig:spikes1}. It is seen from the plots that, of the two methods, the caravan visually fares better, in that it is less affected by spurious peaks in the reconstructed curve due to unusually large noise on two observations. In that respect it is instructive to compare, e.g., the level $j=1$ wavelet coefficients for EBayes, the caravan estimate, the Bumps function, and the noisy data; see Figure \ref{fig:spikes3}. As seen from that figure, two purely noise-affected empirical wavelet coefficients pass the EBayes shrinkage virtually unscathed, while they are dealt a serious blow by the caravan method. \begin{figure} \begin{center} \includegraphics[width=0.425\textwidth]{./img/spikes_noisy.png} \includegraphics[width=0.425\textwidth]{./img/spikes_caravan.png} \medskip \includegraphics[width=0.425\textwidth]{./img/spikes_js.png} \includegraphics[width=0.425\textwidth]{./img/spikes_devs.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{Top row (from left to right): Noisy observations on the Bumps function (sample size $N=512$ and $\operatorname{SNR}=7$; the `special' points with indices $i = 280$ and $470$, affected by unusually large measurement errors, are highlighted via large black points), and the caravan estimate (posterior mean) superimposed on the Bumps function (the true function is in red, the estimate is in blue). Bottom row (from left to right): EBayes (posterior mean) superimposed on the Bumps function (the colours are as for the caravan estimate plot), and absolute deviations of the caravan and EBayes estimates from the Bumps function (in green and in brown, respectively).
De-noising is via DWT with $J_0=6$ levels and the $\operatorname{LA}(8)$ filter.} \label{fig:spikes1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{./img/spikes_coeffs.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{From top to bottom: Level $j=1$ wavelet coefficients for EBayes, the caravan estimate, the Bumps function, and the noisy observations, respectively. In each plot (excluding the one for the Bumps function itself), the pair of coefficients that is seriously affected by the artificially introduced large measurement errors on top of the zero signal is highlighted in green. For details on the alignment of coefficients see the caption of Figure \ref{fig:bumps}.} \label{fig:spikes3} \end{figure} \begin{remark} In relative terms, in comparison to EBayes, the Blocks and HeaviSine functions are the most difficult to de-noise with the caravan prior. Both functions are characterised by the presence of discontinuities. This may be a reason for a somewhat worse performance of the caravan prior in these examples, although ascertaining a precise cause is a difficult task. In our experience, the within-level dependence of wavelet coefficients that characterises the caravan prior appears to work less successfully when estimating the signal in a neighbourhood of a discontinuity point; conversely, in some simulation runs the caravan method was able to pick up discontinuities in a signal better than EBayes, but was then unable to perform de-noising as well as EBayes did in those regions where the signal was smooth. A better handling of signals with discontinuities via the caravan prior would require additional modelling of inter-scale dependence of wavelet coefficients. This refers to the fact that large or small values of wavelet coefficients tend to propagate across different levels of the transform, see Section~$10.8$ in \cite{percival00}; for a visualisation, see, e.g., Figure \ref{fig:bumps}. That, however, lies outside the scope of the present paper. \end{remark} \begin{remark} In our experience, it is advisable to use longer Markov chain runs with the caravan prior in order to avoid visually unpleasant squiggles in de-noised curves, which in reality are solely due to the fact that the chains have not reached stationarity. Hence our decision to run the chains for $30\, 000$ or even $100\,000$ iterations (the latter is likely to be excessive in many scenarios). Giving concrete recommendations in the present context is difficult, as convergence of the chains depends on factors like the nature of the underlying signal, the number of observations and the signal-to-noise ratio. As one natural check, however, one can produce trace and autocorrelation plots for the hyperparameters $a,\tau_{gl}$, as well as for some of the coefficients $\beta_i$. See Appendix \ref{app:plots} for such plots for the Doppler signal de-noising that we considered above in Figure \ref{fig:doppler:noisy}. An advantage of the caravan prior is the relative simplicity of the update formulae in the Gibbs sampler (see Appendix \ref{app:gibbs}). However, this simplicity comes at a price: at each step of the sampler, only one parameter can be updated at a time, which slows down the mixing of the Markov chain for the full posterior, which is defined on a rather high-dimensional parameter space. Potentially, this may have repercussions on the scalability of the method when applied to large data sets. See also the relevant remarks in \cite{cemgil07b} on a related Markov chain prior. \end{remark}
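A generic sketch of such convergence checks is given below (in Python; it assumes only that the sampler output for a given parameter is available as a NumPy array, and the names used are illustrative).

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def mcmc_diagnostics(chain, name, burnin=10_000, max_lag=200):
    """Trace and autocorrelation plots for one scalar MCMC chain."""
    kept = chain[burnin:]
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 2.5))
    ax1.plot(kept, lw=0.5)
    ax1.set_title('trace: ' + name)
    # empirical autocorrelation of the retained samples
    centred = kept - kept.mean()
    acf = np.correlate(centred, centred, 'full')[kept.size - 1:]
    ax2.plot(acf[:max_lag] / acf[0])
    ax2.set_title('ACF: ' + name)
    fig.tight_layout()
    plt.show()

# e.g., mcmc_diagnostics(samples_a, 'a') for the hyperparameter a
\end{verbatim}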
\subsection{Maximal overlap discrete wavelet transform} \label{subsec:modwt} It has been demonstrated in, among others, \cite{coifman95}, that using the translation-invariant discrete wavelet transform for signal de-noising instead of the standard DWT often leads to better practical results, either in terms of the squared error, or visually. Unlike the standard DWT, for a data sequence of length $N$, each level of the translation-invariant transform contains $N$ wavelet coefficients, since it does not use downsampling. We specifically restrict our attention to the maximal overlap discrete wavelet transform (MODWT), see, e.g., Chapter 5 in \cite{percival00}. MODWT is highly redundant and non-orthogonal. When the data size $N$ is a dyadic number, the coefficients of DWT can be extracted from those of MODWT by a suitable scaling and downsampling. Furthermore, one can extract from MODWT the coefficients of the DWTs of all possible cyclic shifts of the data; see Comments and Extensions to Section 5.4 in \cite{percival00}, p.~174. Computational complexity of MODWT and its inverse (due to its redundancy, MODWT has no unique inverse; the one we have in mind is given in \cite{percival00}, and on an abstract level can be described in terms of the Moore-Penrose inverse, cf.\ p.~167 there), when evaluated via the pyramid algorithm, is $O(N\log_2 N)$ multiplications, which is somewhat slower than that for DWT, but still fast (in fact as fast as the Fast Fourier Transform). Unlike DWT, which requires the number of observations $N$ to be a dyadic number, no such assumption is needed for MODWT. In theory, the number of MODWT levels $J_0$ can be arbitrarily large (unlike DWT); however, if $N$ is a dyadic integer, MODWT yields no extra information beyond the level $J=\log_2 N$, which hence can be taken as a maximal decomposition level for MODWT. See Figure \ref{fig:bumps_modwt_coefficients} for a visualisation of MODWT for the Bumps function. Because of the lack of orthogonality, for noisy data the MODWT wavelet coefficients will be statistically dependent. On the other hand, MODWT allows one to mitigate the sensitive dependence of the standard DWT on the starting position of the data sequence (which is entirely due to the downsampling used in DWT). In fact, MODWT-based de-noising essentially performs averaging of results over all possible cyclic shifts of the data (here `all possible' means shifts by $m=0,1,\ldots,N-1$ units), which may allow a better reconstruction of the essential features of the signal and reduce noise-induced artefacts. See \cite{percival00}, Comments and Extensions to Section 5 (pp.~429--431), for a succinct description of statistical applications of MODWT. \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{./img/bumps_modwt_coefficients.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{MODWT coefficients of $N=512$ values of the Bumps function arranged by levels of the transform. The $\operatorname{LA}(8)$ filter is used. The number of computed levels of the transform is $J_0=4$, with scaling coefficients displayed at the top, and the original data at the bottom. In each level, the coefficients are aligned via circular shifting so as to correspond to the events in the original data; for precise details on the arrangement, see pp.~179--180 in \cite{percival00}.} \label{fig:bumps_modwt_coefficients} \end{figure}
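In Python, a transform closely related to MODWT is available in PyWavelets as the stationary wavelet transform \texttt{pywt.swt} (the two differ essentially only in filter normalisation). A minimal sketch of the undecimated analysis and per-level MAD noise estimates follows; the parameter values are illustrative.

\begin{verbatim}
import numpy as np
import pywt

rng = np.random.default_rng(2)
N, J0 = 512, 4
signal = pywt.data.demo_signal('bumps', N)
data = signal + 0.3 * rng.standard_normal(N)

# Undecimated transform: every level keeps all N coefficients.
# With trim_approx=True the output is [cA_J0, cD_J0, ..., cD_1],
# mirroring the ordering of pywt.wavedec.
coeffs = pywt.swt(data, 'sym4', level=J0, trim_approx=True, norm=True)

# Per-level MAD estimates of the error standard deviation sigma_j
details = coeffs[1:]                  # [cD_J0, ..., cD_1], coarsest first
for lev, d in zip(range(J0, 0, -1), details):
    print(lev, np.median(np.abs(d)) / 0.6745)

# ... shrink the detail coefficients here, then invert ...
denoised = pywt.iswt(coeffs, 'sym4', norm=True)
\end{verbatim}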
When performing the comparison of EBayes and caravan estimates, we used settings similar to those in Section \ref{sec:simulations}. In particular, the sample size was $N=512$. We employed the $\operatorname{LA}(8)$ filter and periodic boundary conditions. The number of levels of MODWT was $J_0=4$. Some guidelines on practicalities such as these are given in Section 5.11 in \cite{percival00}. Finally, separately for each level $j$ of MODWT, we estimated the error standard deviation $\sigma_j$ by the MAD estimate computed from the empirical wavelet coefficients of that level. It should be clear that such estimates of $\sigma_j$ cannot be expected to lead to good results in all cases, if only because the sparsity degree of MODWT (or DWT) coefficients typically decreases for coarser levels of the transform, whereas the non-zero coefficients tend to become larger (cf.~also the remarks on p.~450 in \cite{percival00}). Hence our decision to de-noise only $4$ levels of MODWT. \begin{remark} In the case when the sample size $N$ is a dyadic number, by simple algebra that relies on the fact that DWT coefficients are rescaled and downsampled MODWT coefficients (see \cite{percival00}, equations (96d) and (169a), and page 152), an estimate of the error variance $\sigma_j^2$ can be derived as $\hat{\sigma}_j^2 = 2^{1-j} \hat{\sigma}_1^2$. Here $\hat{\sigma}_1^2$ can be obtained via MAD applied on the first level of MODWT. However, at the time of writing this paper such an option is not envisioned for EBayes in the {\bf EBayesThresh} package, which is a primary reason why we did not employ it in our comparison. \end{remark} Estimation results on the same synthetic data as in Subsection \ref{subsec:dwt} are reported in Table \ref{table:modwt}. A comparison with Table \ref{table:dwt} (which displays the results for DWT) shows that MODWT substantially improves the estimation accuracy of both the caravan and EBayes methods, except for the HeaviSine signal. The caravan method does better than EBayes for the Bumps, Blocks and Doppler signals. The results are indecisive for the HeaviSine function, with each method better than the other in different noise settings. Overall performance of the caravan method is superior to that of EBayes, the margin being a $10-20\%$ reduction in the squared error. In terms of computational time, de-noising a single data set with the caravan method takes ca.~$6.5$ minutes, which is an order of magnitude slower than for EBayes. \begin{remark} \label{rem:perf} The fact that in some scenarios MODWT de-noising performs worse than DWT de-noising does not contradict earlier simulation studies in \cite{coifman95} and \cite{johnstone05}: DWT and translation-invariant DWT there differ in details from the implementations used by us (which are based on \cite{percival00}). Most importantly, we use a different error variance estimator in the MODWT case. \end{remark} {\small \begin{table} \begin{center} \captionsetup{width=0.85\textwidth, font=small} \caption{Average squared errors (over $50$ simulation runs) for various test functions and methods. The sample size is $N=512$, the $\operatorname{LA}(8)$ filter is used, and periodic boundary conditions are imposed.
The number of MODWT levels equals $J_0=4$.} \begin{tabular}{lrrrr@{\hskip 0.25in}rrrr} \toprule & \multicolumn{4}{c}{{\bf {\hskip -0.25in} Low noise}} & \multicolumn{4}{c}{{\bf High noise}}\\ \cmidrule(l{0.1in}r{0.35in}){2-5} \cmidrule(l{0.1in}r{0.1in}){6-9} {\bf Method} & {\bf bmp} & {\bf blk} & {\bf dpl} & {\bf hvs} & {\bf bmp} & {\bf blk} & {\bf dpl} & {\bf hvs} \\ \midrule Caravan (mean) & {\color{blue} \it 3.2} & {\color{blue} \it 2.9} & {\color{blue} \it 1.5} & 1.2 & 15.6 & {\color{blue} \it 16.2} & 7.5 & 5.1 \\ Caravan (median) & {\color{blue} \it 3.2} & {\color{blue} \it 2.9} & {\color{blue} \it 1.5} & {\color{blue} \it 1.1} & {\color{blue} \it 15.3} & 16.9 & {\color{blue} \it 7.3} & 4.9 \\ EBayes (mean) & 3.6 & 3.0 & 2.0 & 1.2 & 17.3 & 17.7 & 9.3 & 4.5 \\ EBayes (median) & 3.9 & 3.2 & 2.1 & 1.2 & 18.5 & 19.4 & 9.5 & {\color{blue} \it 4.4}\\ \bottomrule \end{tabular} \label{table:modwt} \end{center} \end{table} } \section{Nuclear magnetic resonance data} \label{sec:nmr} In this section we apply our de-noising methodology to the nuclear magnetic resonance (NMR) spectrum, which constitutes a standard test data set for wavelet de-noising algorithms.\footnote{We downloaded the data from Donald B.\ Percival's website at \url{http://faculty.washington.edu/dbp/s530/} (accessed on 28 June 2018).} There are $N=1024$ observations in total, which we display in the top panel of Figure \ref{fig:nmr}. We followed Section $10.5$ in \cite{percival00}, and used the $\operatorname{LA}(8)$ filter to compute the DWT. Percival and Walden de-noise $J_0=6$ levels of the transform; an MRA plot of the data set, see Figure~\ref{fig:nmr:mra}, suggests that de-noising $J_0=4$ levels of the transform might be enough. A plot of the DWT coefficients, see Figure \ref{fig:nmr:coeffs}, indicates that some small wavelet coefficients are present at level $j=5$ too, but we opted to leave the levels $j=5,6$ as they are. \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{./img/nmr.png} \end{center} \captionsetup{width=0.85\textwidth, font=small} \caption{Top panel: NMR data ($1024$ observations). Middle panel: Caravan estimate. Bottom panel: EBayes. De-noising via DWT. The $\operatorname{LA}(8)$ filter was used, with $J_0=4$ levels of the transform computed.} \label{fig:nmr} \end{figure} \begin{figure} \begin{center} \captionsetup{width=0.85\textwidth, font=small} \includegraphics[width=0.85\textwidth]{./img/nmr_dwt_mra.png} \end{center} \caption{MRA of the NMR data. DWT with the $\operatorname{LA}(8)$ filter was used, and $J_0=6$ levels of the transform were computed. The top plot gives the smooth $S_6$, followed by the details $D_j$ stacked on top of each other, and the original data $X$ at the bottom.} \label{fig:nmr:mra} \end{figure} \begin{figure} \begin{center} \captionsetup{width=0.85\textwidth, font=small} \includegraphics[width=0.85\textwidth]{./img/nmr_dwt_coefficients.png} \end{center} \caption{DWT coefficients of the NMR data arranged by levels of the transform. Periodic boundary conditions and the $\operatorname{LA}(8)$ filter are used to compute the DWT. The number of computed levels of the transform is $J_0=6$. The scaling coefficients at level $6$ are displayed at the top, followed by wavelet coefficients (from levels $6$ to $1$) and the original data.
See Figure \ref{fig:bumps} for additional information on the arrangement of the coefficients.} \label{fig:nmr:coeffs} \end{figure} In visualising the de-noising results, we used posterior medians as our point estimates (we produced larger plots to clearly highlight differences between the estimates). The Markov chain for the caravan prior was run for $120\, 000$ iterations, with the first third of the samples dropped as burn-in. Both the caravan and EBayes estimates remove a substantial amount of noise from the data, see Figure \ref{fig:nmr}. However, visually the caravan reconstruction appears to be more regular than the EBayes one. One established way to measure the efficacy of a de-noising procedure in this context is to determine which of the methods better maintains the peaks of the curve; these peaks contain important information on the tissue from which the sample arose. We can compare the heights of the highest peak, cf.\ \cite{johnstone05}, p.~1719, and \cite{percival00}, p.~430. In that respect, the caravan estimate yielded the peak height $57.78$, while EBayes yielded the peak height $56.78$. The latter method was hence worse than its competitor (to put things in perspective, the original noisy data had the peak height $58.02$). We also applied the MODWT de-noising (with $J_0=4$ levels), cf.~\cite{percival00}, Comments and Extensions to Section 10.5. The results are reported in Figure \ref{fig:nmr:modwt}. Both methods are even more successful in removing the noise. Concerning the highest peak, with the peak height $55.77$, the caravan estimate marginally outperformed EBayes, which yielded the peak height $55.41$. Note also how the second sharp peak to the left of the highest peak is much lower in the EBayes estimate, unlike in the caravan estimate. On the other hand, the caravan estimate shows some small squiggles near $t=200$ and $800$, which are absent in the EBayes estimate; this is similar to the hard thresholding estimate in Figure $430$ of \cite{percival00}. We reproduce that plot in the bottom panel of Figure \ref{fig:nmr:modwt}; note the appearance of an additional squiggle near $t=650$ there. Finally, the wave-like behaviour of both estimates over the time interval $[0,300]$ is due to our decision to de-noise only $4$ levels of the transform. These waves can be largely flattened out by de-noising a $J_0=6$ level MODWT, but that would have diminished even further the heights of the sharp peaks. Summarising, each method appears to have its own advantages on this challenging real data set. \begin{figure} \begin{center} \captionsetup{width=0.85\textwidth, font=small} \includegraphics[width=0.85\textwidth]{./img/nmr_modwt.png} \end{center} \caption{Top panel: Caravan estimate for the NMR data. Middle panel: EBayes. Bottom panel: Hard thresholding estimate (with universal threshold). De-noising via MODWT. The $\operatorname{LA}(8)$ filter was used, with $J_0=4$ levels of the transform computed.} \label{fig:nmr:modwt} \end{figure} \section{Discussion} \label{sec:conclusions} In this paper we studied a Bayesian approach to wavelet de-noising via a prior relying on the inverse gamma Markov chain (cf.~\cite{cemgil07}). Various types of Markov chain priors have been used for de-noising purposes in several references, but to the best of our knowledge, our paper is the first thorough comparative study of the performance of this kind of prior. In particular, we benchmarked our method against the popular empirical Bayes procedure of \cite{johnstone05}.
Our method, which we call the caravan, strikes a good balance between conceptual simplicity and computational feasibility. Specifically, the posterior inference can be performed via a straightforward version of the Gibbs sampler. In the synthetic data examples that we considered, the method measures up well to EBayes, often substantially outperforming it in terms of the squared estimation error. The improvement brought by the caravan method comes thanks to the fact that it takes into account some of the local structures empirically observed in wavelet coefficients of real life signals. However, the caravan method does not achieve a uniform improvement (i.e.\ over all simulation scenarios) upon EBayes, which can be taken as an indication of the general excellence of the latter, rather than of a failure of the former. In particular, in our simulations the caravan prior seemed to be somewhat worse than EBayes at handling signals with jump discontinuities. On purely visual grounds, the caravan estimator appeared to be less prone to displaying artefacts in its reconstructions due to unusually large noise peaks. As far as the computational time is concerned, since the caravan estimator is evaluated via an MCMC algorithm (the Gibbs sampler), its computation is considerably slower than that of EBayes, although the method is still reasonably fast. We believe that our paper adds a valuable Bayesian technique to the wavelet, or more generally the non-parametric regression toolbox. Furthermore, our hope is that the present contribution provides sufficient motivation for further study of the caravan method, a task that we ourselves plan to address in subsequent research. A natural question in this context, which we do not address in the present work, is: what about the asymptotic statistical theory for the caravan prior? Such work in the spirit of \cite{ghosal17} has been done for the horseshoe prior in \cite{vanderpas14} and \cite{vanderpas17}. This is a problem we would very much like to study in future work.
\section{Introduction} While modern laser facilities have the potential of reaching ultra-high intensities up to $10^{24} ~\rm W/cm^2$ \cite{Danson2019}, delivering laser fields up to $1\rm GV/\mu m$, the acceleration of charged particles via laser-target interaction is attracting increasing interest. Highly energized charged particle beams have a broad range of applications \cite{Daido2012}: in imaging \cite{Mackinnon2004}, medicine \cite{BK2002,Bulanov2002}, controlled nuclear fusion \cite{MROTH}, and nuclear physics \cite{MNISHI}. Beams of charged particles may reach ultrarelativistic energies, with the current record of electron bunches being accelerated up to $\sim 10$ GeV \cite{Gonsalves2019} using the state-of-the-art Laser Wake Field Acceleration (LWFA) mechanism \cite{LWFA}. Ion acceleration is also predicted to be efficient by theory and simulations \cite{Esirkepov2006,Macchi2013,SVB2014,SSB2016}, but experimental research reports the saturation of the maximum attainable ion energies at the 100 MeV level \cite{Higginson2018}. Upcoming lasers with peak powers reaching $10\, \rm PW$ \cite{ELINP,ELIBL,APOLLON} may help to overcome this level of ion energies, but the need for a theoretical understanding of possible limiting factors still exists. Therefore, a more detailed theoretical understanding of laser ion acceleration schemes, incorporating such physics as prepulse effects \cite{Kaluza2004,Esirkepov2014, PROHAD2020a}, field ionization \cite{MNISHI20}, oblique incidence \cite{Ferri2020}, pointing stability \cite{Gray2001}, and radiation reaction effects \cite{MTAMB2010}, is necessary for the successful experimental delivery of high energy ion beams at the new generation of petawatt laser facilities. On the theory side of laser ion acceleration, several major mechanisms have been discussed recently. The current state-of-the-art mechanism is Target Normal Sheath Acceleration (TNSA) (see \cite{SWILKS}, review articles \cite{Daido2012, Passoni2010, Macchi2013, SVB2014}, and references therein), which is realized by the build-up of an electrostatic field on the rear side of a thick target due to an abundance of hot electrons generated by laser interaction with the front of the target. The accelerating electric field is known to be proportional to $(T_{\rm e,nth} n_{\rm e,nth})^{1/2}$, with $T_{\rm e,nth}$ and $n_{\rm e,nth}$ denoting the hot electron temperature and density, respectively, and multiple efforts have been made to increase both of these hot electron population properties \cite{Liu2012,YOGO2016, YOGO2017, Zou2017,Zou2019}. A very promising maximum ion energy scaling with laser pulse power is provided by Radiation Pressure Acceleration (RPA) \cite{Esirkepov2004,SVBulanov2010}, which was observed experimentally \cite{SKAR2008, SKAR2012, Henig2009}. Multiple other mechanisms are also discussed, such as Coulomb Explosion \cite{Fourkal2005}, Magnetic Vortex Acceleration (MVA) \cite{Kuznetsov2001,FUKUDA2009,SSBulanov2010,Park2019}, Shock Acceleration \cite{Fiuza2012}, and combinations of these \cite{SSBulanov2008}. Recently, solid-state targets have started to gain more interest for electron acceleration \cite{Snyder2019,Wang2020a}, ion acceleration \cite{Liu2012,Zou2017,Zou2019}, and radiation sources, such as X-ray \cite{Rousse1994,Andriyash2014} and $\gamma$-ray generation \cite{Nakamura2012,Ridgers2012}.
In principle, higher density targets may lead to higher densities of fast electrons \cite{Zou2017,Zou2019} and better retention of fast electrons around the laser-solid interaction spot \cite{Kluge2010}, which should benefit such acceleration schemes as TNSA and MVA. On the other hand, solid-density targets are generally opaque to optical laser pulses, which suppresses laser absorption. This is where structured solid targets come into play. Structured targets may provide better laser-target coupling \cite{DM2012,Bailly2020}, edge field amplification \cite{Askaryan1983}, laser guidance \cite{Wang2014,SSBulanov2015}, and self-consistent ion injection into the acceleration scheme \cite{MURAKAMI}. For instance, in \cite{Liu2012}, a solid target with a conical opening and a concave rear side doped with a proton layer was considered. The acceleration mechanism was attributed to a combination of TNSA and additional acceleration by the electric field of the focused protons. The conical opening enhanced TNSA via more effective hot electron generation on the rear side. A similar target, but with a plane rear side and comprised of high-Z ions, was also discussed in \cite{Zou2017}. High-Z ions and the microchannel structure were implemented to improve hot electron generation and to avoid laser filamentation, respectively. We note that using thin foil targets with holes for laser ion acceleration has been actively studied theoretically and experimentally in Refs. \cite{PSIKAL2016, PROHAD2020b, CANT2021}. A microchannel target filled with relativistically transparent foam was considered in \cite{Arefiev2018}. The laser pulse was tightly focused into the channel, propagated through the relativistically transparent plasma while delivering significant energy to electrons from the channel filling and the solid target walls, and exited from the rear side. The fastest ions were generated on the rear side of the target at the moment when the defocusing laser pulse started to exit the channel. A channel target filled with relativistically transparent foam was also considered in \cite{Stark2016,Jansen2018,He2021,Rinderknecht2021} for the efficient generation of $\gamma$ rays via synchrotron emission of fast electrons in the quasi-static MegaTesla-scale magnetic field generated by the laser-foam interaction. Such targets are experimentally available and provide flexibility for particular experimental needs \cite{Snyder2019,Bailly2020,Li2021}. In this paper, we explore laser ion acceleration from structured solid targets filled with relativistically transparent plasma by means of Particle-In-Cell (PIC) simulations and theoretical estimates. We find optimal conditions for high energy proton generation theoretically and verify them using comprehensive 2D PIC scans. The acceleration mechanism is interpreted as a combination of TNSA and RPA. We also conduct 3D PIC simulations and address such experimentally relevant questions as prepulse physics, oblique incidence, pointing stability, and the role of field ionization of the target. The role of the channel and solid target electrons is also discussed, as well as the role of radiation reaction (RR) at higher laser pulse powers. Finally, we discuss the scalability of the proposed acceleration scheme to the parameters of currently available channel targets \cite{Snyder2019,Bailly2020,Rinderknecht2021}. This paper is structured as follows. Section II focuses on theoretical estimates for the maximum energy of protons from classical electrodynamics with and without the inclusion of the radiation reaction force.
Optimal target conditions for the given laser pulse parameters are also derived. Section III is devoted to the discussion of the simulation setup. In Section IV, we discuss the results of our 2D and 3D PIC scans and compare them with our theory. Other important aspects, such as prepulse effects, field ionization, oblique incidence, and pointing stability, are also addressed. Finally, we conclude with Section V by discussing our main results and comparing them with the literature. \section{Theory of ion acceleration from channel targets} First, let us recall the concept of relativistic transparency, which is important for the considered ion acceleration scheme. As is well known, in the non-relativistic case an electromagnetic wave does not propagate into cold unmagnetized plasma if $\omega_{\rm pe}\geq \omega_0$. Here, $\omega_{\rm pe}^2=4\pi n_ee^2/m_e$ is the square of the plasma frequency in the non-relativistic case. Thus, since in our case the channel density is $\sim10^{0}-10^{1} n_{\rm cr}$, where $n_{\rm cr} = m_e \omega_0^2/4 \pi e^2$ is the critical density for an electromagnetic wave of frequency $\omega_0$, one would expect from non-relativistic considerations that the laser is reflected from the target. However, the relativistic motion of electrons relaxes the wave penetration condition due to an additional factor in the denominator of the plasma frequency: $\omega_{\rm pe,rel}^2 = 4 \pi n_ee^2/\langle \gamma_e \rangle m_e=\omega_{\rm pe}^2/\langle \gamma_e \rangle$, with $\langle \gamma_e \rangle$ being the average electron gamma factor, which effectively decreases the threshold for laser pulse propagation in classically overcritical plasma. Since electrons in the laser pulse gain an average energy of $m_ec^2 a_0$, where $a_0=eE_0/m_e\omega_0c$ is the normalized amplitude of the laser pulse, we expect $\langle \gamma_e \rangle \approx a_0$, leading to $\omega_{\rm pe,rel} = \omega_{\rm pe}/\sqrt{a_0}$ (see Ref. \cite{AKHPOL}). For tightly focused petawatt-scale laser pulses, this leads to relativistic transparency of the channel for the laser pulse, thus providing efficient conditions for laser-target coupling. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig1a.png} \\ \includegraphics[width=\linewidth]{Fig1bc.png} \caption{(a) Result of a 3D PIC simulation with $P=10$ PW, $L_{\rm ch}=40 \mu \rm m$, and $n_{\rm ch}=10 n_{\rm cr}$. The primary free parameters of the problem are indicated. (b),(c) Illustration of the acceleration scheme from 2D PIC for $P=10$ PW and $L_{\rm ch}=40 \mu \rm m$: electrostatic field, evolution of the ion density, and laser field at the time of the laser pulse exiting the channel ($t=t_{\rm exit}$) and 50 femtoseconds later. } \label{fig:scheme} \end{figure}
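A minimal numerical sketch of this transparency condition follows (in Python; the parameter values are illustrative, and the amplitude estimate $a_0 = 0.85 \sqrt{P/(I_0 w^2)}$ quoted later in the text is adopted).

\begin{verbatim}
import numpy as np

# Illustrative parameters: 1 PW pulse focused to w = 1.1 um,
# lambda = 1 um, channel filling n_ch = 10 n_cr
P = 1.0e15        # peak power, W
w = 1.1e-4        # beam waist, cm
I0 = 1.384e18     # W/cm^2; intensity corresponding to a0 = 1 at lambda = 1 um

a0 = 0.85 * np.sqrt(P / (I0 * w**2))
gamma_avg = a0                       # <gamma_e> ~ a0 for a0 >> 1

n_ch_over_ncr = 10.0
print(f"a0 ~ {a0:.0f}")                               # ~ 208 here
print("opaque in the classical sense:", n_ch_over_ncr > 1.0)
print("relativistically transparent: ", n_ch_over_ncr < gamma_avg)
\end{verbatim}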
Following \cite{SSBulanov2010}, let us derive the optimal condition for ion acceleration. First, the total energy of the laser pulse is: \begin{equation} \mathcal{E}_{\rm las,0} = I_0 a_0^2 \pi w^2 \tau_{\rm las}, \label{eqn:Elas} \end{equation} \noindent where $I_0 = 1.384\cdot 10^{18}\, \rm W/cm^2$, $\tau_{\rm las}$ is the duration of the laser pulse, and $w$ is the waist of the laser pulse. This energy is fixed for a particular laser pulse. At the same time, the total energy of the electrons in the channel after the partial absorption of the pulse may be estimated by the formula: \begin{equation} \mathcal{E}_{\rm ele} = m_e c^2 a_0 n_{\rm ch} \pi R_{\rm ch}^2 L_{\rm ch}, \label{eqn:Eele} \end{equation} \noindent where $n_{\rm ch}$, $R_{\rm ch}$, and $L_{\rm ch}$ are the channel electron number density, radius, and length, respectively. By equating Eqns.~\ref{eqn:Elas} and \ref{eqn:Eele}, we get the optimal condition for ion acceleration in the case of magnetically assisted TNSA (MVA) \citep{SSBulanov2010,Park2019}. The leftover energy of the pulse after propagating in the channel (if any) will be responsible for radiation pressure acceleration. Let us assume that the energy dissipation happens in such a way that it only affects the field amplitude. The energy of the laser pulse after exiting the channel then reads: \begin{equation} \mathcal{E}_{\rm las,ch} = \mathcal{E}_{\rm las,0}-\mathcal{E}_{\rm ele}=I_0 a_{\rm ch}^2 \pi w^2 \tau_{\rm las}. \label{eqn:Elasch} \end{equation} \noindent In the case of high laser intensities, we may also want to include energy losses due to radiation reaction in the consideration. As is well known \cite{LLAD}, in near-critical density plasma the radiation reaction force becomes important for dimensionless field amplitudes \begin{equation} a_0 \geq \left(\frac{3\lambda m_e c^2}{4\pi e^2}\right)^{1/3} \label{eqn:a0RR} \end{equation} (for a $\lambda = 1\mu$m wavelength laser the radiation intensity should be above $ 10^{23}\, $W/cm$^2$). The energy lost by an electromagnetic pulse propagating in plasma may be estimated as follows. The maximum radiation power of a single electron may be calculated as $\mathcal{P}_{\rm RR} = eEc$. The total energy lost to radiation is then: \begin{equation} \mathcal{E}_{\rm RR}= \mathcal{P}_{\rm RR} n_{ \rm ch} \pi w^2 \tau_{\rm las} L_{\rm ch} = \mathcal{E}_{\rm ele}. \label{eqn:EnRR} \end{equation} \noindent The remaining laser pulse energy that will accelerate ions via radiation pressure is: \begin{equation} \mathcal{E}_{\rm las,ch+RR} = \mathcal{E}_{\rm las,0}-\mathcal{E}_{\rm ele}-\mathcal{E}_{\rm RR}=I_0 a_{\rm ch+RR}^2 \pi w^2 \tau_{\rm las}. \label{eqn:ElaschRR} \end{equation} Now, let us optimize the maximum ion energy obtainable in the considered ion acceleration scheme. For simplicity, we ignore the RR losses for now. The maximum energy gained from TNSA-like acceleration may be estimated as (following \cite{SWILKS}) \begin{equation} \Delta \mathcal{E}_{\rm TNSA}\approx \alpha T_{\rm e,nth} = \alpha m_ec^2( \sqrt{1+a_0^2}-1)\approx \alpha m_ec^2 a_0, \label{eqn:EnTNSA} \end{equation} \noindent where $\alpha$ is a dimensionless constant larger than 1 that accounts for the finite size of the accelerating field, the energy cutoff of non-thermal electron energies at the rear side of the channel, and possible superponderomotive electron temperatures (e.g., see \cite{Zou2017}). This constant is to be determined from simulations. In the limit of ultrarelativistic ion energy, the energy gained from RPA can be estimated as (e.g., see Ref. \cite{SVB2014}) \begin{equation} \Delta \mathcal{E}_{\rm RPA} \approx m_e c^2 a_0^2 \frac{n_{\rm cr} c \tau_{\rm las}}{n_{0} l_0}, \label{eqn:EnRPA} \end{equation} where $n_0$ and $l_0$ are the density and thickness of the foil target, and $n_{\rm cr}=m_e \omega_0^2/4\pi e^2$.
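To get a feel for the magnitudes in Eq.~(\ref{eqn:EnRPA}), a short numerical evaluation follows; the foil parameters are assumed purely for illustration.

\begin{verbatim}
# Illustrative evaluation of the RPA energy gain,
# Delta E_RPA ~ m_e c^2 a0^2 (n_cr c tau_las) / (n_0 l_0)
me_c2_MeV = 0.511           # electron rest energy, MeV
a0 = 100.0                  # assumed laser amplitude
c_tau_um = 45.0             # c * tau_las for tau_las = 150 fs, in microns
n0_over_ncr = 100.0         # assumed foil density, in units of n_cr
l0_um = 4.0                 # assumed foil thickness, microns

dE_RPA_MeV = me_c2_MeV * a0**2 * c_tau_um / (n0_over_ncr * l0_um)
print(f"Delta E_RPA ~ {dE_RPA_MeV / 1e3:.2f} GeV")   # ~ 0.57 GeV
\end{verbatim}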
In our case, there is no foil target, but the hole boring by the laser pulse creates a dense foil-like structure of thickness $\sim \lambda$ at the end of the channel. Assuming that this structure is comprised of the channel electrons, we write a condition on $n_0$ and $l_0$: $n_0 l_0 \approx L_{\rm ch} n_{\rm ch}$. In order to estimate the energy gain by an ion from RPA in the non-optimized target case, we multiply this expression by a function $\Xi=f(a_{\rm ch}, \frac{n_{\rm ch}}{n_{\rm cr}},\frac{L_{\rm ch}}{\lambda})$ that isolates the optimal regime of RPA ion acceleration, $a_0 \approx (n_{\rm ch}/n_{\rm cr})(L_{\rm ch}/\lambda)$, and exponentially damps the acceleration away from ``the resonance condition'', i.e., it is equal to one under the optimal RPA condition and quickly tends to zero outside of it. Finally, we multiply this expression by the dimensionless constant $K$, which controls the efficiency of RPA acceleration (i.e., the reflectivity of the foil accelerated by RPA) and will be determined from simulations as well. The total energy gain by a single proton may then be estimated as: \begin{equation} \begin{split} & \mathcal{E}_{\rm max} = \Delta \mathcal{E}_{\rm TNSA}+\Delta \mathcal{E}_{\rm RPA} \approx \\ & m_ec^2\left\{ \alpha a_0 +Ka_{\rm ch}^2 \frac{n_{\rm cr}}{n_{\rm ch}}\frac{c\tau_{\rm las}}{L_{\rm ch}}\Xi \left(a_{\rm ch}, \frac{n_{\rm ch}}{n_{\rm cr}},\frac{L_{\rm ch}}{\lambda}\right) \right\}, \end{split} \label{eqn:maxenergy} \end{equation} \noindent where \begin{equation} a_{\rm ch}^2 = a_0^2 - a_0\frac{m_ec^3 n_{\rm cr}}{I_0}\frac{R_{\rm ch}^2}{w^2}\frac{n_{\rm ch}}{n_{\rm cr}}\frac{L_{\rm ch}}{c\tau_{\rm las}} \end{equation} \noindent is the dimensionless laser field amplitude after the depleted laser pulse exits the channel (here we neglect RR energy losses). Maximizing $\mathcal{E}_{\rm max}$ gives the optimal condition for ion acceleration. Since the energy gain is dominated by the RPA mechanism, we may approximately claim that the optimal condition is \begin{equation} a_{\rm ch}\approx\frac{n_{\rm ch}}{n_{\rm cr}}\frac{L_{\rm ch}}{\lambda} . \label{eqn:optcond} \end{equation} Now, let us describe a simple model to explain the maximum ion energy scaling with time. As noted above, we assume that the ions are first accelerated by the TNSA electric field and further accelerated by RPA, with the total energy gain $\mathcal{E}_{\rm max} =\Delta \mathcal{E}_{\rm TNSA}+\Delta\mathcal{E}_{\rm RPA}$. We assume the TNSA acceleration to be instantaneous, and the evolution of the ion energy under the radiation pressure is calculated as in Refs. \cite{Esirkepov2004,SVBulanov2010}: we start with a 1D model and write down the equation of motion of the plasma under the influence of radiation pressure: \begin{eqnarray} \partial_\tau p = \mathcal{P} \frac{1-\beta}{1+\beta}, \label{eqn:RPA1Da} \\ \beta = \frac{p}{\sqrt{1+p^2}}. \label{eqn:RPA1Db} \end{eqnarray} \noindent Here, $p$ is normalized to $m_ic$, $\tau \equiv t/T_0$, and the radiation pressure is given by \begin{equation} \mathcal{P} = K\frac{m_e}{m_i}a_0^2\frac{n_{\rm cr}}{n_e}\frac{\lambda}{l_0}. \end{equation} $K$ here is a free dimensionless parameter (the same as in Eqn.~\ref{eqn:maxenergy}) that controls the laser-target interaction efficiency. The asymptotic solution for ultrarelativistic ions, $p \propto t^{1/3}$, is well known \cite{Esirkepov2004}.
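A minimal numerical sketch of Eqns.~(\ref{eqn:RPA1Da})-(\ref{eqn:RPA1Db}) reproduces this asymptotic (the parameter values below are purely illustrative):

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) parameters entering the radiation pressure P
K = 0.2                      # laser-target coupling efficiency
me_over_mi = 1.0 / 1836.0    # electron-to-proton mass ratio
a0 = 100.0
ncr_over_ne = 0.1
lam_over_l0 = 1.0
P_rad = K * me_over_mi * a0**2 * ncr_over_ne * lam_over_l0

def rhs(tau, p):
    # dp/dtau = P (1 - beta) / (1 + beta), beta = p / sqrt(1 + p^2)
    beta = p / np.sqrt(1.0 + p**2)
    return P_rad * (1.0 - beta) / (1.0 + beta)

# p(0) = 0 here; replacing it by p_TNSA mimics the TNSA-seeded model below
sol = solve_ivp(rhs, (0.0, 1.0e4), [0.0], dense_output=True, rtol=1e-9)

tau = np.logspace(2, 4, 6)
p = sol.sol(tau)[0]
print(np.diff(np.log(p)) / np.diff(np.log(tau)))   # slopes approach 1/3
\end{verbatim}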
However, here we do not expect the ions to gain ultrarelativistic energies, so we solve Eqns.~\ref{eqn:RPA1Da}-\ref{eqn:RPA1Db} with $p(t=0)=p_{\rm TNSA} = \sqrt{2 m_i \mathcal{E}_{\rm TNSA}}$, i.e., the initial ion momentum is gained from the TNSA fields. We see that the channel parameters and the laser amplitude appear in $\mathcal{P}$, and $p(0)$ also implicitly depends on the channel parameters. We fit the resulting $p(t;\alpha,K,t_0)$ trajectories from simulations with free parameters $\alpha,K,t_0$. It turns out that the model describes $p(t)$ from the simulations fairly well, with $\alpha \sim 3-5$, $K \sim 0.1-0.3$, and $t_0 \approx t_{\rm exit}$, with $t_{\rm exit}$ being the time of the laser pulse exiting the channel from the rear end. \section{Simulation setup} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig2.pdf} \caption{Evolution of the ion phase space in $x-p_x$ coordinates (colormap), along with 1D profiles of the electron density (light blue), the electron density normalized to the average electron gamma (dark blue), the laser envelope (red), and the longitudinal electric field (black). (a) t=330 fs, the laser pulse propagates in the channel; significant laser-electron coupling is seen: electrons are not evacuated from the channel, and the relativistically transparent channel sustains a significant electron density well above $n_{\rm cr}$; (b) t=410 fs, the laser pulse reaches the rear end of the target, an accelerating electric field builds up at the rear part of the target, and TNSA-accelerated ions are seen; (c) t=450 fs, the laser pulse leaves the channel and provides RPA acceleration of ions; rapid acceleration of an ion filament from the rear side of the channel is seen (annotated as TNSA+RPA); (d) t=510 fs, the most rapid phase of ion acceleration is over (see Figure 4a, green line), but ions continue to gradually gain more energy via RPA.} \label{fig:phspace} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig3.png} \caption{Proton spectrum evolution for the simulation with $P=1~ \rm PW$, $L_{\rm ch}=50~ \mu \rm m$, $n_{\rm ch}=10 n_{\rm cr}$. The development of the high-energy part of the spectrum and of a monoenergetic-like peak is seen after the laser pulse starts to exit the channel.} \label{fig:spec} \end{figure} To check the theoretical considerations from the previous Section and to consider a more realistic physical scenario, we perform 2D and 3D particle-in-cell (PIC) simulations using the code EPOCH \cite{EPOCH1, EPOCH2}. The numerical setup and an illustration of a typical 2D/3D simulation result are shown in Figure 1. All the parameters we scan over are summarized in Table \ref{Table2Dscan}. For 2D runs, we consider Gaussian laser pulses with laser wavelength $\lambda=1\mu \rm m$, laser durations $\tau=30 ~\&~ 150$ fs, waist $w=1.1 - 15 \, \mu \rm m$, and linear polarization ($B_z$ is out of the simulation plane $x\text{-} y$), focused onto the channel entrance at $x=10 \mu \rm m$. The laser pulse power spans from 0.3 to 30 PW, covering the range of dimensionless amplitudes $a_0$ from $25$ to $850$ (peak intensities range from $10^{21}$ to $3 \cdot 10^{23}$ W/cm$^2$). The target is located between $x=10\,\mu \rm m$ and $x=10\, \mu {\rm m}+L_{\rm ch}$, where $L_{\rm ch}$ is the channel length, which varies from 10 to 100 microns. The solid wall density equals $100-300\, n_{\rm cr}$. The channel has radius $R_{\rm ch}=1-5 \mu \rm m$ and is filled with uniform plasma with $n_{\rm ch} = 0-40 n_{\rm cr}$. The plasma is comprised of electrons and protons with zero initial temperature.
The simulation box dimension is $(160 \lambda + L_{\rm ch}) \times 30 \lambda$ with a numerical resolution of 60 grid nodes per $\lambda$. The resolution ensures that the typical plasma wavelength, $\lambda_{\rm pe}=2\pi c/\omega_{\rm pe}$, is resolved with 6 grid nodes. The boundary conditions are outflow along both axes. The number of particles per cell is 20-80 per species. We conduct runs with the radiation reaction (RR) terms turned on and off to assess their influence on ion acceleration. \begin{table} \centering \caption{2D PIC scan parameters} \begin{tabular}{lr} \tableline\tableline & range \\ Laser parameters: \\ \hspace{3mm} Peak power, $P$, PW & $0.3-30$ \\ \hspace{3mm} Pulse duration, $\tau_{\rm las}$, fs & 30,\,150 \\ \hspace{3mm} Waist, $w$, $\mu $m & $1.1-15$ \\ \hspace{3mm} Laser wavelength, $\lambda$, $\mu $m & $1$ \\ \hspace{3mm} Contrast, $I_{\rm prepulse}/I_{\rm max}$ & $0.0, ~10^{-6}-10^{-3}$ \\ \hspace{3mm} Prepulse duration, ps & 1 \\ \\ Target parameters: \\ \hspace{3mm} Channel radius, $R_{\rm ch}/\lambda$ & $1-10$\\ \hspace{3mm} Channel length, $L_{\rm ch}/\lambda$ & $10-100$\\ \hspace{3mm} Filling density, $n_{\rm ch}/n_{\rm cr}$ & 0-40 \\ \hspace{3mm} Solid wall density, $n_{\rm wall}/n_{\rm cr}$ & 100,300 \\ \hspace{3mm} Target front cut angle, $^\circ$ & 0,15,45 \\ \\ General parameters: \\ \hspace{3mm} Simulation box size, $\lambda \times \lambda$ & $200 \times 30$ \\ \hspace{3mm} Grid resolution, 1/$\lambda$ & 60 \\ \hspace{3mm} Particle resolution, ppc & 20,40,80\\ \hspace{3mm} Total simulation time, ps & 1.5-2.5 \\ \hspace{3mm} Radiation reaction term & on \& off \\ \hspace{3mm} Field ionization & on \& off \\ \end{tabular} \label{Table2Dscan} \end{table} To address the case of a realistic target material, e.g., a solid Kapton substrate with CH foam as the channel filling \cite{Rinderknecht2021}, we considered CH targets with $L_{\rm ch}=20-50 \lambda$, $R_{\rm ch}=1.8 ~\lambda$, $w=2.2 \lambda$, $n_{\rm e, wall}=300 n_{\rm cr}$, $n_{\rm e,ch}=10-30 n_{\rm cr}$, and fully ionized C and H atoms. We also considered oblique incidence by adding a cut to the front side of the target. Oblique incidence ensures the absence of backreflection of the laser pulse, which is safer for possible applications at laser facilities \cite{Snyder2019,Bailly2020}. We consider cuts with $10^\circ$ and $45^\circ$ angles on the front of the target while keeping all other simulation parameters the same as described above. For 3D simulations, following \cite{Arefiev2018}, we consider 1 PW and 10 PW, 150 fs Gaussian linearly polarized pulses focused onto the channel target entrance into a 2.2 micron spot. The considered target parameters are similar to those in the 2D simulations, with $L_{\rm ch}=20-30~ \mu \rm m$, $n_{\rm wall}/n_{\rm cr}=100$, $R_{\rm ch}=1.8 ~\mu \rm m$, and $n_{\rm ch}/n_{\rm cr}=10$, with the plasma comprised of protons and electrons. Finally, for auxiliary radiation hydrodynamics simulations using the FLASH code, we inherited the LaserSlab simulation setup \cite{FLASH,FLASH1}, which considers the interaction of a laser beam with typical nanosecond laser pedestal parameters with a solid aluminum target.
In our case, we conducted a set of analogous runs, with the only modifications being the density profile (we introduced a channel of $R_{\rm ch} =3~\mu \rm m$ on the axis of the $R-z$ simulation plane in cylindrical coordinates) and the target material: we considered a polystyrene (CH) target corresponding to $n_{\rm e, wall} = 300 n_{\rm cr}$ and $n_{\rm e, ch} = 20 n_{\rm cr}$, while also expanding the simulation box to $120~ \mu \rm m$ along the $z$ axis, resulting in $40 \lambda \times 120 \lambda$ dimensions in $R-z$ space, with the channel located between $z=40 \lambda$ and $80\lambda$. The laser pulse has a wavelength of $1~\mu \rm m$, a Gaussian transverse shape with an e-folding length of $3 ~\mu \rm m$, and is focused onto the center of the channel entrance at normal incidence. The temporal profile of the laser pulse has a linear ramp of 0.1 ns from zero to peak power and a duration of 0.9 ns, with the total simulation time being 1 ns. We varied the peak laser pedestal power, covering the range from $10^5$ to $10^{9}$ W. This corresponds to laser contrasts from $10^{-11}$ to $10^{-7}$ for a 10 PW driver pulse. The resulting density snapshots from these simulations were mirrored around the $z$ axis, cropped to the range from $-20 \lambda$ to $20 \lambda$ in the transverse direction, and inserted into the 2D PIC code EPOCH to analyze the detrimental role of the prepulse in laser ion acceleration. \section{Simulation results} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig4.png} \caption{Time evolution of the maximum ion energy (a and b) and the dependence of the maximum ion energy on channel length (c and d) for 1 PW (left) and 10 PW (right) peak laser pulse power. Crosses and shaded regions denote theoretical predictions for the maximum ion energy temporal evolution and for the maximum ion energy as a function of channel length, respectively.} \label{fig:emax_vs_t_lch} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig5.png} \caption{Dependence of the maximum ion energy on the channel filling density for pulses from 0.3 to 30 PW. Shaded regions illustrate theoretical predictions for the maximum ion energies as a function of channel filling density.} \label{fig:emax_vs_nch} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig6.png} \caption{(a) Evolution of the average channel density at the exit of the channel for $P = 1 \, \rm PW$ and (b) evolution of the fraction of wall electrons in the average density at the exit of the channel for $P = 10 \, \rm PW$. The build-up of a universal density value for all simulations with $n_{\rm ch}/n_{\rm cr}\leq 10$ and the dominance of wall electrons contributing to ion acceleration are seen.} \label{fig:filling} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig7.png} \caption{(a) Electron density snapshot from the FLASH simulation of the nanosecond pedestal-target interaction for the case of laser contrast $10^{-7}$. Dashed black lines sketch the initial location of the channel structure. (b) Maximum ion energy dependence on pedestal duration for different laser contrasts ($10^{-11},~10^{-9},~10^{-7}$) and for the no-channel case at laser contrast $10^{-7}$.} \label{fig:flash} \end{figure} First, let us discuss a typical 2D PIC simulation result for $P = 1$ PW, $n_{\rm ch}/n_{\rm cr}=10$, $L_{\rm ch}= 30 \,{\rm \mu m}$, $w=1.1~\mu \rm m$, and $R_{\rm ch} = 1 {\mu \rm m}$. Figure \ref{fig:phspace} illustrates the physics of the two-stage acceleration process.
It combines 1D profiles of the longitudinal electric field $E_x/E_0$ (averaged over $1 \mu m$ in the transverse direction, i.e., over the central half-channel; the electric field is measured in units of $E_0 = m_e\omega_0c/e$), the 1D envelope of the laser pulse at $y=0$, $B_z/10E_0$, the 1D profile of the electron density $n_e/n_{\rm cr}$ (averaged over $1 \mu m$ in the transverse direction as well), and the $x-p_{\rm ix}$ phase space plot for t=330, 410, 450, and 510 fs. The laser pulse is focused onto the front side of the channel, and since $n_e /\langle \gamma_e \rangle \ll n_{\rm cr}$ (dark blue line in Fig.~2a), the pulse propagates into the channel without significant backreflection. Also, since the channel plasma is relativistically transparent, it is not completely wiped out by ponderomotive forces and its density always stays well above the non-relativistic critical density $n_{\rm cr}$, thus providing better laser-electron coupling (blue lines in Figs.~\ref{fig:phspace}a-d). Once the pulse reaches the rear part of the target at t=410 fs, we observe the build-up of a strong longitudinal (predominantly electrostatic) electric field of up to $E_x/E_0 \approx 20$. At this time, the ion phase space exhibits the onset of TNSA-like acceleration at the rear end of the target (Fig. \ref{fig:phspace}b, $x \approx 45 \mu \rm m$; see also Fig. 1b for a 2D density map of ions accelerated solely by TNSA). However, the fastest ions in the simulation are generated promptly at the time of the laser pulse exiting the rear side of the channel (see Fig. \ref{fig:phspace}c, spike at $x\approx 45 \mu \rm m$). These ions are accelerated by the TNSA field first and then further accelerated by RPA (Fig.~\ref{fig:phspace}d). Figure \ref{fig:spec} presents the time evolution of the proton spectrum for the target with $L_{\rm ch}=50~ \mu \rm m$, $n_{\rm ch} = 10 n_{\rm cr}$ and laser pulse peak power $P=1 \rm PW$. At the final time, $t=t_{\rm exit}+200 ~\rm fs$, there is a relatively flat spectrum in the high-energy range, with a peak around the maximum ion energy. The time evolution of the spectrum reveals that the peak in the ion spectrum develops at the time of the laser pulse exiting from the rear side of the channel. Further acceleration is achieved by the direct acceleration of ions by the laser pulse via RPA, as suggested by Fig.~2d and Figs.~4a,c. Figures 4a,b show the maximum ion energy as a function of time for $1$ PW and $10$ PW laser pulses for selected channel lengths from $10$ to $100~ \mu \rm m$. By fitting the curves from Figs.~4a,b, we see that they are closely followed by the theoretical model described in Section II and tend to have the same late-time scaling as RPA (the measured scaling is $p_i \propto t^{0.37}$, which is close to RPA's $p_i \propto t^{1/3}$). Figures 4c,d demonstrate how the maximum ion energy depends on $L_{\rm ch}$. It is seen that there is an optimal channel length, in accordance with Eqn.~\ref{eqn:optcond}. Our theoretical model fairly accurately predicts the optimal channel length (shaded regions in Figs.~4c,d). From Eqn.~\ref{eqn:optcond} it is also seen that there is an optimal channel density. Figure \ref{fig:emax_vs_nch} shows how the maximum ion energy depends on the channel filling density, $n_{\rm ch}$, for a fixed set of other parameters. While the model also predicts the existence of an optimal channel density, the agreement is worse than for the channel length, $L_{\rm ch}$.
This may be explained by an effectively different channel density at the rear end of the channel, which is a combination of the initial channel filling that stayed inside the channel and parts of the solid channel wall extracted by the intense laser pulse. Analyzing the average electron density at the rear end of the channel (i.e., inside the channel within $2 \lambda$ of the channel rear end), we found that at the time of the laser pulse exiting the channel, the electron density there turns out to be almost identical for initial channel filling densities in the range $n_{\rm ch}/n_{\rm cr} = 0-10$; see Figure \ref{fig:filling}. Simulations with tagged channel filling and wall electrons (Figure \ref{fig:filling}b) explicitly demonstrate that the contribution of the channel filling is negligible in comparison to the wall electrons that end up at the rear part of the channel. Additionally, the variance of the maximum ion energy with respect to the channel filling density is less than $ 25 \%$ for $n_{\rm ch}\leq 30 n_{\rm cr}$. To conclude, the channel filling density plays a relatively minor role in ion acceleration, which implies relaxed requirements on the laser contrast for the described laser ion acceleration scheme. To elaborate on the role of the laser contrast for the considered target, we performed an additional set of 2D PIC simulations, where a Gaussian prepulse of picosecond duration was added before the primary pulse, with the laser contrast, $I_{\rm prepulse}/I_{\rm max}$, varied from $10^{-6}$ to $10^{-3}$. While the duration of the prepulse may reach up to a nanosecond \cite{Pathak2021}, which is beyond reach for conventional PIC codes, significant damage to the target may be done by spontaneous prepulses of shorter duration, such as the considered picosecond prepulse \cite{Pathak2021}. We examined the cases of $P=1~\&~10$ PW and $n_{\rm ch}/n_{\rm cr}=0,1,10,20$. The variations in the maximum ion energy with contrast are no more than 25\%, with higher contrast runs typically outperforming the corresponding runs with lower contrast. The overall acceleration mechanism appears to be unaffected by the considered prepulse. To verify the robustness of the acceleration scheme against realistic laser contrast effects, we conducted a set of radiation hydrodynamics simulations using the FLASH code \cite{FLASH} for the parameters described in Section III. Having obtained a set of density snapshots for 0.2 - 1 ns into the laser pedestal-CH target interaction, we initialized 2D PIC runs with these density snapshots and compared the resulting maximum proton energies at the end of the 2D PIC runs. Figure \ref{fig:flash}a shows the electron density snapshot from the FLASH run for the case of $10^{-7}$ laser contrast at 1 ns into the simulation. We may see that while the target density has departed from the initial channel structure location shown by the dashed black lines, the overall structure of the target remains intact. Figure \ref{fig:flash}b reveals the effect of the laser pedestal on the maximum ion energies obtained in these simulations. We find that the presence of a pedestal with $\leq 400~ \rm ps$ duration and contrast no worse than $10^{-7}$ keeps the peak ion energy within 75\% of the ideal no-prepulse case. Thus, we may conclude that a realistic laser contrast of moderate parameters does not reduce the efficiency of the acceleration mechanism.
A set of FLASH+PIC runs with a uniform CH target was also considered, delivering significantly diminished peak ion energies (circle-dotted blue line in Figure \ref{fig:flash}b). A more detailed analysis of the radiation hydrodynamics + PIC simulation pipeline is required for better matching the experimental conditions of a particular laser facility, including realistic 3D geometry, target material, and oblique incidence. These questions are beyond the scope of this paper and will be addressed separately. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig8.png} \caption{Scalings of (a) the electron non-thermal temperature, $T_{\rm e,nth}$, and (b) the density $n_{\rm e,nth}$ with laser power. Wilks' scaling and the waveguide model (green dashed lines in (a) and (b), respectively) explain the simulation results fairly well.} \label{fig:nth} \end{figure} For a better understanding of the TNSA stage of ion acceleration, it is of interest to calculate the average density and temperature of hot electrons to the right of the rear end of the channel. Figure \ref{fig:nth} demonstrates the scaling of the non-thermal electron population parameters with laser pulse power. Fig. \ref{fig:nth}a shows such a dependence for the non-thermal temperature ($T_{\rm e,nth} \equiv \langle \mathcal{E}_e \rangle$) and compares it to the established scalings \citep{Zou2017}. Fig. \ref{fig:nth}b reveals calculations of the average non-thermal electron density, $n_{\rm e,nth}$, and compares it against the waveguide model \citep{Zou2017}. Overall, the non-thermal temperature measured in all simulations is in fair agreement with Wilks' scaling \cite{SWILKS}, $T_{\rm e,nth}\propto \sqrt{1+a_0^2}-1 \approx a_0$. The $n_{\rm e,nth}$ scaling is fairly well captured by the waveguide model, although the best fit suggests a stronger dependence of the non-thermal electron density on laser power. This may be explained by the different optimal target parameters $L_{\rm ch}$ and $n_{\rm ch}$ for the considered range of laser pulse powers $P=0.3-30$ PW. Backtracking all electrons that end up with kinetic energies larger than 500 MeV in the $P=10$ PW, $L_{\rm ch}=40 \mu \rm m$, $n_{\rm ch}=20 n_{\rm cr}$ run, we found that they mainly originate from the front side of the channel walls (see Figure \ref{fig:origin}a), specifically, from two lobes centered around $x= 15 \lambda,\, y = \pm 2-3 \lambda$, which is within the limits predicted by the waveguide theory ($d_e = c/\omega_{pe} \approx \lambda \sqrt{a_0 n_{\rm cr}/n_{\rm wall}}/2 \pi \sim \lambda$). Simulations with smaller $P$ and/or $n_{\rm ch}$ lead to a similar conclusion. Figure 9b exemplifies a few trajectories of the fastest electrons in the simulation. They also originate from the solid target walls and enter an oscillation cycle in the involved configuration of laser and background fields \cite{Mangles2005,Gong2020,Jirka2020}, effectively gaining energy at the channel exit. Finally, we calculated the relative roles of the channel filling and wall electrons in the TNSA accelerating field by comparing $\sqrt{n_{\rm e,nth}T_{\rm e,nth}}$ for each electron population. The fraction of the channel filling contribution to the electrostatic field was found to be no more than $30 \%$, further suggesting a secondary role of the channel filling. It is also worth discussing where the fast ions originate.
In order to do so, we conducted (1) a simulation with $P = 10$ PW, $n_{\rm ch}/n_{\rm cr}=20$, $L_{\rm ch}= 40 \,{\rm \mu m}$ with full tracking of ion trajectories and (2) a set of simulations with $P = 10$ PW, $L_{\rm ch}= 40 \,{\rm \mu m}$, varying $n_{\rm ch}/n_{\rm cr}$ from 0 to 40 and tagging channel filling and wall ions. The first run suggested that the fastest ions (those that ended up with kinetic energy exceeding $1$ GeV) predominantly come from the rear end of the channel filling (see Figure \ref{fig:origin}c), while a small yet non-negligible fraction originated from the channel walls within a few microns of the channel rear end. Analogous runs with the smaller filling density of $n_{\rm ch}/n_{\rm cr}=1$ or the smaller laser pulse power of $P=1$ PW are in qualitative agreement with this finding. The channel filling density scan with tagged particles revealed that the high-energy end of the ion spectrum (ions with 100 MeV or more) is comprised of both filling and wall ions in all runs, with wall ions dominating in the $n_{\rm ch}/n_{\rm cr} \leq 10$ range and filling ions being abundant for $n_{\rm ch}/n_{\rm cr} \geq 20$, including the case of the optimal channel filling density. Figure \ref{fig:origin}d shows a few tracks of fast ions that ended up with $\mathcal{E}_{k,i}\approx 1.5~ \rm GeV$, accelerated from the rear end of the target, again verifying the two-stage nature of the ion acceleration scheme: it acts both at the rear end of the target and further away from the sheath field. \begin{figure*} \centering \includegraphics[width=\linewidth]{Fig9.png} \caption{Histograms of the original locations of (a) the fastest electrons (those that end up with kinetic energy $>500$ MeV) and (c) the fastest ions ($>1$ GeV); fast electron (b) and ion (d) tracks for the simulation with $P=10$ PW, $L_{\rm ch}=40 \mu \rm m$, and $n_{\rm ch}/n_{\rm cr}=20$. Blue dashed lines denote the initial location of the channel interior.} \label{fig:origin} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig10.png} \caption{(a) Dependence of the maximum ion energy on the beam waist for different channel radii $R_{\rm ch}=2-10 \mu \rm m$ for $P=10$ PW, $L_{\rm ch}=40 \mu \rm m$, $n_{\rm ch}=10 n_{\rm cr}$ simulations. The vertical dashed line denotes the approximate threshold below which relativistic transparency of the channel holds. (b) Maximum ion energy dependence on the laser axis shift $\delta y$ for $P=1$ PW (blue) and $10$ PW (red). Dashed lines represent the predictions of the theoretical model.} \label{fig:focus} \end{figure} As one may see from Eqn.~\ref{eqn:maxenergy}, the contribution of the channel radius and beam waist to the maximum attainable ion energy is connected to the energy transfer efficiency of a laser pulse with a constant beam waist to the channel filling electrons, under the assumption that the channel does not significantly evolve. In reality, however, these assumptions do not hold, as we see from our runs with varying initial beam waists and channel radii. When the channel radius, $R_{\rm ch}$, is significantly larger than the beam waist at focus, $w$, laser pulses experience transverse filamentation and hosing instabilities \cite{Naumova2001}, leading to reduced ion acceleration efficiency. At the same time, for $w>R_{\rm ch}$, the laser energy may be partially scattered at the channel entrance, decreasing the acceleration efficiency.
Figure \ref{fig:focus}a presents our scan over the laser beam waist for a $P=10$ PW laser pulse and an $n_{\rm ch}=20 n_{\rm cr}$ target with $R_{\rm ch} = 2, 6, 10 \lambda$. A sweet spot is found around $w_{\rm opt} \approx 1-3~\mu \rm m$ for all channel radii considered. This is below the radius of a self-channeling laser pulse in uniform plasma given by $R_{\rm sc}/\lambda = 1/\pi (n/n_{\rm cr})^{1/3}(27 P/P_{\rm cr})^{1/6} \approx 14$ \citep{SSBulanov2010}. To understand why the maximum ion energy drops for $w>4 \lambda$, we recall that the considered two-stage acceleration mechanism requires relativistic transparency of the channel for efficient laser-target coupling and ion acceleration via RPA. The relativistic transparency regime is realized when $n_{e}\ll \langle \gamma_e \rangle n_{\rm cr}$. Recalling that $\langle \gamma_e \rangle \approx a_0$ and expressing $a_0$ as a function of laser pulse power $P$ and beam waist $w$, $a_0 = 0.85 \sqrt{P/(I_0 w^2)}$, we get the threshold value of $w_{t} \approx 4.25 \mu \rm m$, above which the relativistic transparency is violated and the two-stage mechanism transitions to classic TNSA, thus decreasing the maximum ion energy. In other words, the channel radius constraints are quite relaxed, requiring only $R_{\rm ch}>w$ for optimal acceleration. The laser focusing, however, should be narrow enough to trigger relativistic transparency within the channel, i.e. $w/\lambda < w_t/\lambda \approx \alpha \sqrt{10^5 P[\rm PW]}/(n_e/n_{\rm cr})$, where $\alpha \ll 1$ is a dimensionless factor controlling the relativistic transparency condition, $n_e = \alpha a_0 n_{\rm cr}$, which was assumed to be equal to 0.1 in our calculations. For the smallest power considered, $P=0.3$ PW, this requires a beam waist of $w/\lambda < 1.73$, which is within reach for modern laser systems \cite{Yoon2021}. Transverse laser pointing stability is also governed by a similar physical process. When the laser beam completely misses the channel cross-section, i.e. when the absolute value of the laser axis pointing shift, $\delta y$, is equal to or larger than $R_{\rm ch}+w/2$, the TNSA mechanism is realized with an increased hot electron population due to the presence of the channel. When the laser beam spot is within the channel cross-section, i.e. when $|\delta y| \leq R_{\rm ch}-w/2$, there should be no changes in maximum ion energy according to the proposed theoretical model. For pointing shifts between these two values, the maximum ion energy is obtained from the two-stage mechanism with an effectively reduced laser pulse power, $P_{\rm ch} \approx P (0.5+(R_{\rm ch}-\delta y)/w)^{N-1}$, where $N$ is the dimensionality of the problem. The maximum ion energy is calculated from our theoretical model for $P_{\rm ch}>0$ and from the classical scaling $\mathcal{E}_{\rm max} = 173 \rm ~ MeV \sqrt{P[\rm PW]}$ for $P_{\rm ch} =0$ \cite{Esirkepov2014}. Figure \ref{fig:focus}b compares our simulation results from the pointing stability scan for $R_{\rm ch}=1 ~\mu \rm m$ and $w=1.1 ~ \mu \rm m$ with our interpretation. The agreement is satisfactory, with the largest discrepancy appearing in the case of TNSA-only accelerated ions. The final requirement on the maximum pointing shift is $|\delta y|\leq \delta y_{\rm max}= R_{\rm ch}-w/2$. To understand the possible range of applicability of the considered acceleration scheme, it is of interest to examine the maximum energy scaling of such a target with laser pulse power. Figure \ref{fig:scaling} summarizes the whole set of our simulations. 
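As a quick numerical check of the transparency threshold above, the following sketch evaluates the closed-form bound $w_t/\lambda \approx \alpha \sqrt{10^5 P[\rm PW]}/(n_e/n_{\rm cr})$ (our illustration; we take $\alpha=0.1$ as stated in the text, while the assumption $n_e = 10\,n_{\rm cr}$ for the quoted $P=0.3$ PW bound is our inference rather than an explicitly stated value):
\begin{verbatim}
import math

def w_threshold(P_PW, n_over_ncr, alpha=0.1):
    # w_t / lambda from the closed form quoted in the text
    return alpha * math.sqrt(1e5 * P_PW) / n_over_ncr

# reproduces the quoted w/lambda < 1.73 for P = 0.3 PW
# (assuming n_e = 10 n_cr, our inference)
print(round(w_threshold(0.3, 10), 2))   # 1.73
# P = 10 PW, n_ch = 20 n_cr: ~5, of the order of the quoted
# w_t ~ 4.25 um obtained from the full a_0(P, w) expression
print(round(w_threshold(10.0, 20), 2))  # 5.0
\end{verbatim}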
It turns out that the scaling derived in Section II (shaded region) is in approximate agreement with simulations. Maximum proton energies are well above the usual theoretical scaling for maximum proton energy (black dashed line) \cite{Esirkepov2014}. The universal fitting formula for the maximum ion energy scaling may be written as: \begin{equation} \mathcal{E}_{\rm max} = \left(\frac{\rm P}{\rm 1\,PW} \right)^{0.322} \, {\rm GeV}. \label{eqn:scaling} \end{equation} \noindent Runs with RR off demonstrate an advantage of a factor of a few in terms of maximum ion energy in contrast to RR on simulations (cross-dashed magenta line in Fig.\ref{fig:scaling}), which is expected, as the additional laser energy losses given by Eqn.~\ref{eqn:EnRR} diminish both TNSA and RPA efficiency. We also considered a cut on the front side of the target, i.e. a wedge with a $\theta =10^\circ$ or ${45^\circ}$ angle on its front. It is evident that such a cut does not suppress ion acceleration (orange and black triangles in Fig.\ref{fig:scaling}), and may be helpful for the experimental realization of the proposed acceleration scheme by avoiding hazardous laser backreflection \cite{Snyder2019,Bailly2020}. A slight decrease in maximum ion energy for the cut of $45^\circ$ may be interpreted on the basis of Figure \ref{fig:origin}a: the cut of the target front may effectively decrease the efficiency of hot electron generation by suppressing $n_{\rm e,nth}$. The additional runs with increased wall density ($n_{\rm wall}/n_{\rm cr}=300$) do not differ much from our main set of simulations with $n_{\rm wall}/n_{\rm cr}=100$, since the dependence of the channel wall skin depth on the solid target density is rather weak, $d_e \propto n_{\rm wall}^{-1/2}$. As a result, fast electron generation is not affected, leading to similar values of $\mathcal{E}_{\rm max}$ (squares in Fig.\ref{fig:scaling}). Likewise, considering a realistic CH target with $n_{\rm wall}/n_{\rm cr}=300$ and $n_{\rm ch}/n_{\rm cr}=20$ \cite{Rinderknecht2021}, we observe the same level of maximum proton energies (diamond markers in Fig.\ref{fig:scaling}a) and the same ion acceleration mechanism. Field ionization was also included in a separate series of 2D PIC runs. Both the picosecond prepulse and the driver pulse are capable of fully ionizing the part of the channel responsible for ion acceleration, and the maximum ion energies are not affected for all considered laser pulse powers. The presence of picosecond prepulses did not change the maximum attainable ion energies, as discussed earlier in the paper. Finally, we conducted a series of 3D runs, which showed relatively smaller maximum proton energies for $P=1$ and $10$ PW laser pulses in comparison to our 2D runs, possibly due to weaker hot electron retention and stronger constraints on the transparency of the accelerated foil than in 2D \cite{Psikal2021}. Still, the main features of the acceleration mechanism, namely rapid ion acceleration from the channel rear end at the time the laser pulse exits the channel and the presence of a quasi-monoenergetic structure in the ion energy spectrum, were verified, in agreement with \cite{Arefiev2018}. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig11.png} \caption{(a) Maximum energy scaling with power. Different markers represent different channel densities; the dot-dashed line corresponds to the simulations with RR off. We also plot the classical theoretical scaling \cite{Esirkepov2014} (black dashed line). 
The fit of the maximum ion energies observed in our runs (red dashed line) is also shown. (b) Optimal target conditions for the ELI L4, ELI-NP, and BELLA lasers from 2D PIC simulations (markers) and theory predictions (Eqn.~\ref{eqn:optcond}, solid lines).} \label{fig:scaling} \end{figure} \section{Summary and Discussion} In this paper, we considered laser ion acceleration from micron-scale channels filled with relativistically transparent plasma. We derived an optimal set of parameters for such acceleration and obtained a model to predict the ion energy gain with time. These considerations were checked against 2D PIC simulations and are in fair agreement with them. A few experimentally relevant physical effects were also addressed. The main results may be listed as follows: \begin{itemize} \item The acceleration is interpreted as a combination of TNSA and RPA, illustrated by Figures \ref{fig:scheme} and \ref{fig:phspace}. \item Quasi-monoenergetic features in the wide high-energy proton spectra are observed, as shown in Figure \ref{fig:spec}. The time of quasi-monoenergetic structure development coincides with the time of the laser pulse exiting the channel. \item A theoretical model is developed on the basis of \cite{SSBulanov2010,SVB2014,SWILKS}. Optimal interaction conditions are given by Eqn.~\ref{eqn:optcond}, in approximate agreement with 2D PIC scans on channel length $L_{\rm ch}$ and channel filling density $n_{\rm ch}$, depicted by Figures \ref{fig:emax_vs_t_lch} and \ref{fig:emax_vs_nch}. \item The role of laser contrast is investigated, both indirectly, via the analysis of the role of channel filling electrons, and directly, with picosecond prepulse PIC simulations and auxiliary radiation hydrodynamics simulations of the nanosecond pedestal coupled with PIC. Figures \ref{fig:emax_vs_nch} and \ref{fig:filling} illustrate the relaxed conditions on the channel filling density, while Figures \ref{fig:nth} and \ref{fig:origin} discuss the role of channel filling in electron heating. \item Channel radius requirements are shown to be a non-restrictive factor in the discussed acceleration scheme, see Figure \ref{fig:focus}a, which is beneficial for the currently available channel targets \cite{Rinderknecht2021}. \item The limiting factor for the beam waist was found to be the relativistic transparency threshold, $w<w_t$. As long as it is smaller than the channel radius, $w<R_{\rm ch}$, the maximum ion energies are unaffected. \item The acceleration mechanism is shown to be robust to moderate perturbations in laser pointing, see Figure \ref{fig:focus}b, in fair agreement with the model. \item Oblique incidence (i.e. a cut at the front surface of the target) and field ionization were shown not to impose significant limitations on maximum ion energies. \item In our 2D PIC scan, we observed GeV-scale protons accelerated by PW-scale laser pulses with the approximate energy scaling $\mathcal{E}_{\rm max} \propto P^{0.322}$, seen in Figure \ref{fig:scaling}. \item Finally, three-dimensional simulations verified the primary features of the acceleration scheme, though the maximum ion energies are lower than in analogous 2D cases. \end{itemize} One may argue that the suboptimal power scaling questions the applicability of structured targets. While the power scaling was found to be quite shallow (Eqn.~\ref{eqn:scaling}), for $P\leq 1 \rm PW$ the acceleration mechanism is competitive with other mechanisms \cite{SSBulanov2010,Arefiev2018} in terms of maximum ion energies. 
Moreover, an additional analysis suggested that the considered mechanism possesses a high laser-to-proton energy conversion efficiency of no less than 15\%, promising a high volumetric charge of the fast ion beam \cite{Arefiev2018}. This feature of the discussed acceleration scheme may be beneficial for the fast ignition concept in inertial confinement fusion \cite{MROTH,Honrubia2009}. In comparison to uniform near-critical density targets (or channels with radius significantly exceeding the beam waist, $R_{\rm ch}\gg w$), channel targets provide better pulse guiding and larger counts of fast protons \cite{Arefiev2018}. Auxiliary simulations with a uniform near-critical target with $n_{\rm e} = 1-40 n_{\rm cr}$ show that the fast ion population is an order of magnitude smaller than for the channel target, along with notable laser pulse hosing, detrimental to the resulting ion source angular distribution. Quasi-static magnetic fields produced by micron-scale channel targets are remarkable: they persist on a picosecond scale and demonstrate maximum values $B_{\rm max}^{\rm QS} \approx 110~ {\rm kT} \cdot (P[\rm PW])^{0.2}$ even after the pulse exits the channel, approaching the megatesla-scale magnetic fields predicted in microtube implosions \cite{Murakami2020}. These fields are known to significantly modify the electron motion inside the channel, allowing for a steady energy gain \cite{Wang2020a}, and may provide a platform for experiments with $\gamma$-ray generation and pair production \cite{Stark2016,Jansen2018,Rinderknecht2021}. The obtained $B_{\rm max}^{\rm QS}$ values are smaller than the $B_{\rm max}^{\rm QS} \approx 550~ {\rm kT} (P[\rm PW])^{0.5}$ suggested by Eqns. 2\&3 in \cite{Arefiev2018} due to the difference in the magnetic field measurement methodology. It is worth noting that the Magnetic Vortex Acceleration (MVA) mechanism may also contribute to the ion acceleration at the rear side of the target. Indeed, as we observe a strong quasi-static magnetic field forming inside the channel, we may expect the dipole structure to expand out of the channel exit, thus maintaining the charge separation and the corresponding sheath field. However, since the considered channel length is smaller than the optimal pulse dissipation length for MVA, obtained by equating Eqns.~\ref{eqn:Elas} and \ref{eqn:Eele} \cite{SSBulanov2010}, and since the solid wall density prevents the magnetic vortex expansion, we believe that MVA is suppressed in our acceleration scheme. When choosing the parameters for the simulations, we aimed at those that will soon be available on the ELI-Beamlines L4 ATON laser \cite{ELIBL}. Based on our analytical model, we may envision an efficient application of the discussed acceleration scheme with laser parameters of ELI-NP \cite{ELINP2}, Apollon \cite{APOLLON}, J-KAREN-P \cite{KIRIYAMA2018}, and BELLA \cite{Leemans2013} as well. Figure \ref{fig:scaling}b shows optimal structured target conditions for these lasers obtained through auxiliary 2D PIC scans and the theoretically predicted optimal regime given by Eqn.~\ref{eqn:optcond}. The agreement between them is fair, though the maximum proton energy for a 1 PW, 30 fs laser pulse is significantly suppressed, being no more than 600 MeV. Finally, let us discuss how the considered target compares to the microstructure targets produced today. In \cite{Rinderknecht2021}, a very similar type of target was considered, with the primary differences being channel dimensions and material. 
Our results suggest that the maximum ion energy will be suppressed for realistic channels of $L_{\rm ch} \sim 100~\mu \rm m$, giving a preference to channels of approximately half that size, as seen in Figures 4c,d. Scans on channel radius (Figure \ref{fig:focus}a) and pointing stability (Figure \ref{fig:focus}b) predict a promising scaling to the realistic parameters $R_{\rm ch}=6 ~\mu \rm m$ and $\sqrt{\langle \delta y^2 \rangle} = 5 ~ \mu \rm m$, sustaining the acceleration efficiency. Our simulations with increased solid wall density ($n_{\rm e, wall}=300 n_{\rm cr}$) and high-Z runs for a polystyrene target (radiation hydrodynamics + 2D PIC simulations) and a CH target (2D PIC simulations; diamonds in Figure \ref{fig:scaling}a) suggest that the considered laser ion acceleration scheme will be applicable to the Kapton substrate-CH foam filling targets as well, in agreement with \cite{Arefiev2018}. The results obtained in the paper show that the considered laser ion acceleration scheme is robust against moderate variations in laser and target parameters, thus making it a viable candidate for experimental implementation. \section*{Acknowledgements} This work was supported by NNSA DE-NA0003871, DE-SC0021248, and AFOSR FA9550-15-1-0391 and by the project High Field Initiative (CZ.02.1.01/0.0/0.0/15 003/0000449) from the European Regional Development Fund. The EPOCH code was developed as part of the UK EPSRC funded project EP/G054940/1. The software used in this work was developed in part by the DOE NNSA- and DOE Office of Science-supported Flash Center for Computational Science at the University of Chicago and the University of Rochester. The simulations presented in this article were performed on computational resources managed and supported by Princeton Research Computing at Princeton University. K.V.L. is thankful to Alexey Arefiev for fruitful discussions.
\section{Conclusion} \label{sec:conclusion} We introduce Variation-based Cause Effect Identification (VCEI), a kernel-based framework for causal discovery in bivariate systems. Our method combines the principle of independent causal mechanisms (ICM) with convex optimization under semidefinite relaxation (SDR) and the learning power of data-driven models to identify the genuine causal structure of a bivariate system. With the kernel-based scores, we impose only mild assumptions on the data types, which makes the framework applicable to a wide range of applications. Additionally, our framework is robust to the choice of model class as long as it has enough capacity to learn variations of the conditionals. \section{Empirical Distribution} \label{appendix:empirical-distribution} The empirical probability density function (ePDF): \begin{equation} p_{x,N}(x) = \frac{1}{N}\sum_{n=1}^N \delta_{x_n}(x) \end{equation} is the derivative of the empirical cumulative distribution (eCDF) defined by \begin{equation} F_{x,N}({x}) = \frac{1}{N} \sum_{n=1}^{N} \mathds{1}_{{x}_n \,\leq\, {x}} \end{equation} where $\mathds{1}_{(\cdot)}$ is the indicator function and the inequality is to be understood entry-wise. The eCDF $F_{x,N}(x)$ is the minimum variance unbiased estimator of the true CDF $F_x(x)$ \citep{scott1992multivariate}. The ePDF can also be viewed as a limit case of kernel density estimation. The motivation behind such a modeling choice is that we normally do not have the output of our \emph{unknown} system/data-generation process for an arbitrary input $x$ (other than the sample pairs $\{x_n,y_n\}_{n=1}^N$). Hence, in our search for a distinct marginal on e.g. $p_x$, we are limited to the convex set defined by the mixture distribution. These are the stimuli for which we know the output of our unknown system, treating it as a stochastic mapping. This, in turn, allows us to treat the obtained weight vector as a sample weight on the joint distribution $p_{xy}$ and train models to approximate the conditionals $p_{x|y}$ and $p_{y|x}$ accordingly. One downside is that the search space for a distinct marginal is limited to this convex set, which is itself sensitive to the sampling error. A standard kernel density estimate could alleviate this problem, but, as mentioned, we assume no access to (nor information on) the underlying system that would allow us to use such KDE-based estimates on the output or joint spaces. \section{Experimental Setup and Further Analysis} \label{appendix:experiment-setup} In this section we detail the experimental setup used in producing the results presented in \cref{fig:rel_acc}. We first standardize the dataset using the \texttt{RobustScaler} from the \texttt{sklearn} library [B1]. As a second step, we randomly extract $M$ samples to be used in the optimization problem of \ref{sub:artificial-setups}. The next steps then follow Algorithm \ref{alg:VCEI}, where the hyperparameters were defined as follows: \begin{enumerate} \item We use a squared exponential kernel (SEK), with its lengthscale parameter set to its maximum likelihood estimate using a KDE in a 5-fold cross-validation scheme. \item We use the Exact-GP as our predictive model class $\mathcal{M}$ (SEK as a kernel). \item We use $b_\alpha = 0.2$. \item We use the mean value of the GP model as its prediction. \item All experiments took place on an 8-core processor of a single PC (without GPU compute power). 
\end{enumerate} Note that in the case of a large dataset (such as pair-07 in the T\"ubingen benchmark) we extract a subset that represents the distribution of the original set, referred to as a coreset $\mathcal{D}_\mathbf{C}$, which is estimated as follows. From a KDE estimate [B2] on each of the marginals (on $x$ and $y$), include the $k$ \emph{rare} samples with probability lower than 0.05 under either of the marginal KDEs. This is then further complemented with $M-k$ samples drawn randomly. This last step (the random draw of $M-k$ samples) is repeated a number of times, and the case with the minimal MMD to the original set is selected. In the case of a small dataset, the coreset is automatically identical to the main set. \section{Experimental Validation} \label{sec:experiments} In the sequel, we report empirical validation of our proposed method. For a benchmark, we tested VCEI on the same use-cases presented in the work of \citet{tagasovska2018distinguishing}. \textbf{Simulated data:} simulation data\footnote{All synthetic datasets have been obtained from: https://github.com/tagas/bQCD} were originally generated in the work of \citet{mooij2016distinguishing}. Four different scenarios were considered: \texttt{SIM}, which is the default use-case without confounder bias; \texttt{SIM-c}, which includes a single latent confounder; \texttt{SIM-ln}, a use-case with low noise levels; and finally \texttt{SIM-G}, which has a Gaussian-like distribution for both the cause $X$ and the additive noise. We additionally included the five synthetic datasets published by \citet{tagasovska2018distinguishing} (namely \texttt{AN(-s)}, \texttt{LS(-s)}, and \texttt{MN-U}); see \cref{appendix:sub:benchmark} for a more detailed description. \textbf{Real-world data:} the Tübingen Cause-Effect (CE) benchmark was considered for real-data validation, which consists of 108 pairs from 37 different domains. We only used 103 pairs, which have univariate (continuous or discrete) cause and effect variables. \textbf{Baselines:} we included a selected set of the baseline methods reported in the work of \citet{tagasovska2018distinguishing}. We namely compare our VCEI framework to biCAM \cite{buhlmann2014cam}, which is \underline{a}dditive \underline{n}oise \underline{m}odel (ANM)-based, IGCI \cite{janzing2010causal}, bQCD \cite{tagasovska2018distinguishing}, Sloppy \cite{marx2019identifiability}, and finally GPI \cite{stegle2010probabilistic}. \textbf{Sample Size:} due to the limited scalability of the proposed framework (and the limited computational budget), the number of samples used in the optimization step to construct the different settings (\ref{sub:artificial-setups}), and later for training the predictive models, was chosen to be relatively low. \Cref{fig:rel_acc} depicts the identification accuracies of our method on the selected benchmark datasets, compared to other causal discovery baseline algorithms. We use the same metric as in \cite{mooij2016distinguishing}, namely \emph{accuracy for forced decisions}. In principle, each algorithm is forced to take a decision about the causal direction, and the identification accuracy corresponds to how frequently the algorithm reached correct decisions over the dataset files. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{barplot} \caption{Accuracy of VCEI on benchmark datasets compared to baseline methods. Identification accuracies for baseline methods were taken from \cite{tagasovska2018distinguishing}. 
For \texttt{SIM-*} the sample size was $M=200$, while for the remaining datasets the sample size was limited to $M=100$.}\label{fig:rel_acc} \vspace{-0.5cm} \end{center} \end{figure} While our framework does not show unprecedented performance on the benchmark datasets, it is certainly competitive with many previous methods, in addition to being generic w.r.t data types and robust w.r.t the choice of model class and the learning capacity thereof. \section{Introduction} \label{sec:introduction} Building trust in our machine learning models requires that they extend beyond their current limits of learning associational patterns and correlations. We need to be able to use them in interacting with our surroundings, in taking action to change or improve our environment, or in querying them for hypothetical scenarios, all of which require transparency. Yet, their black-box characteristics constitute significant barriers to their wide-scale adoption in, e.g., safety-critical domains. Causal inference relies on genuine cause-effect relationships rather than purely statistical associations, thus promoting our understanding of the underlying data generation process. While inferring genuine causal relations (oftentimes termed \emph{causal discovery}) is, in general, a challenging task, it is even more challenging in bivariate systems, where many of the early methods (based on conditional independence tests \cite{spirtes2000causation,sun2007distinguishing,pearl2009causality}) fall short. Moreover, bivariate causal discovery is a fundamental step in mining implicit asymmetries in larger structures. In bivariate systems, asymmetry in the functional relationship (e.g. causal relationships tend to be functionally simpler, more elementary, and easier to learn with limited-capacity models than purely associational ones) is an example of a characteristic permitting identifiability of the causal structure from observational data. Another example of such an asymmetry is the postulate of independent mechanisms, on which our framework relies. Under this principle, it is assumed that causal relationships tend to decompose into invariant, stable sub-mechanisms. Such a principle has been the core asymmetry exploited in numerous bivariate causal discovery frameworks, as shall be discussed in \cref{sec:related-work}. In this work, we exploit a barely explored interpretation of this principle, namely that these sub-mechanisms do not influence each other. To this end, we introduce variations to the cause generation mechanism and quantify the influence on the effect generation mechanism. Introducing variations to an empirical distribution can be as na\"ive as drawing random subsets. While this is not guaranteed to introduce non-negligible variations, \cref{fig:icm} shows a crafted toy setup that illustrates the effect of these variations on the effect generation mechanism, and the asymmetry revealed as a result. While several previous works relied on this principle for causal discovery in bivariate systems, they either impose strict constraints on the data types (e.g. continuous data in regression-based approaches or identical data spaces for cause and effect), tend to show high sensitivity to the capacity of the chosen model class, or suffer from prohibitive computational complexities that render them practically applicable only to certain (e.g., binary) data types. In the current work, we address these limitations and propose a new cause-effect identification framework based on artificially generated variations. 
The choice of the discrepancy measure along with the kernel embedding of the marginal distributions renders our framework applicable to a variety of data types\footnote{That is, within the identifiability limitations of the ICM postulate as shall be discussed in \cref{sub:vcei-identifiability}.} (e.g., timeseries data) and offers a practical realization leveraging convex optimization tools. \section*{Appendix} \input{empirical-distribution.tex} \input{maximally-distinct.tex} \input{experiment_setup.tex} \end{document} \section{Maximally Distinct Mixture} \label{appendix:derivation} In this section we detail the derivation of the \underline{s}emi\underline{d}efinite \underline{r}elaxation (SDR) approach to the optimization problem used in our method (\cref{eq:original-formulation:objective}--\ref{eq:original-formulation:inequality}). \subsection{From the Uniform Empirical} \label{ch:mdm:uniform} \paragraph{Problem 1}\emph{Given a set of samples $\mathcal{D}_{x} = \{{x}_n\}_{n=1}^N$ from a random variable $x\in\mathbb{X}$, find the weight vector $\bm{\alpha}$ that renders the mixture distribution $p_{x,N}^{\bm{\alpha}}$ maximally distinct from $p_{x,N}$ in some discrepancy measure $D(\cdot, \cdot)$.\label{problem1}} With the kernel-based MMD measure $D\equiv \text{MMD}_{k_\mathbb{X}}$, Problem 1 can be formalized as \begin{subequations} \begin{align} ~~\underset{\bm{\alpha}}{\text{maximize}} ~~~~~ & \text{MMD}^2_{k_\mathbb{X}}(p_{x,N}^{\bm{\alpha}},\,p_{x,N}) \label{eq:original-formulation:objective_a}\\ \text{subject to} ~~~~~ & \bm{1}_N^\top \bm{\alpha} = 1 \label{eq:original-formulation:equality_a}\\ & \bm{\alpha} \geqslant 0 \;\; \text{(entry-wise) \label{eq:original-formulation:inequality_a}} \end{align} \end{subequations} where $\bm{1}_{N}$ refers to a vector of ones with dimensionality $N$. The quantity being optimized can be reformulated as follows: \begin{subequations} \label{prob1_ref} \begin{alignat}{2} &\! \text{MMD}^2_{k_\mathbb{X}}(p_{x,N}^{\bm{\alpha}},\,p_{x,N}) & =& \|p_{x,N}^{\bm{\alpha}}(x)-p_{x,N}(x)\|_{\mathcal{H}}^2\\ & & =& \left\|\sum_{n=1}^{N}\alpha_n \delta_{\bm{x}_n} - \frac{1}{N} \sum_{n=1}^{N} \delta_{\bm{x}_n} \right\|_{\mathcal{H}}^2\\ & & =& \sum_{n,n'=1}^{N}\alpha_n\alpha_{n'}\langle\delta_{\bm{x}_n},\delta_{\bm{x}_{n'}}\rangle - \frac{2}{N}\sum_{n,n'=1}^{N}\alpha_n\langle\delta_{\bm{x}_n},\delta_{\bm{x}_{n'}}\rangle +\frac{1}{N^2}\sum_{n,n'=1}^{N}\langle\delta_{\bm{x}_n},\delta_{\bm{x}_{n'}}\rangle\\ & & =& \bm{\alpha}^\top \mathbf{K}_{xx} \bm{\alpha} - \frac{2}{N} \bm{\alpha}^\top \mathbf{K}_{xx} \bm{1}_N + \frac{1}{N^2} \bm{1}_N^\top \mathbf{K}_{xx} \bm{1}_N \end{alignat} \end{subequations} where $\mathbf{K}_{xx}=[k(x_i,x_j)]_{i,j=1}^N$ is the Gram matrix of the kernel function $k_\mathbb{X}: \mathbb{X}\times\mathbb{X}\to\mathbb{R}^+$ on the sample set $\mathcal{D}_{x}$, with which the optimization problem becomes: \begin{subequations} \begin{alignat}{2} ~~\underset{\bm{\alpha}}{\text{maximize}} ~~~~~ & \bm{\alpha}^\top \mathbf{K}_{xx} \bm{\alpha} - \frac{2}{N} \bm{\alpha}^\top \mathbf{K}_{xx} \bm{1}_N + \frac{1}{N^2} \bm{1}_N^\top \mathbf{K}_{xx} \bm{1}_N \label{19a} \\ \text{subject to} ~~~~~ & \bm{1}_N^\top \bm{\alpha} = 1 \\ & \bm{\alpha} \geqslant 0 \;\; \text{(entry-wise)} \end{alignat} \end{subequations} This is not a convex optimization problem, since it is a \emph{maximization} of a convex function. 
Noting that the closed-form estimator of the squared MMD has a quadratic form in the optimization variable $\bm{\alpha}$, \citet{park2017general} address this problem in a two-step procedure referred to as \underline{s}emi\underline{d}efinite \underline{r}elaxation (SDR). They first \emph{lift} the problem to a higher dimensional space by defining $\mathbf{A}=\bm{\alpha}\bm{\alpha}^\top$ in which the objective function becomes linear, then apply a convex \emph{relaxation} to the intractable constraints. Without affecting the solution to the problem and using the properties of the \textbf{trace} of a matrix, each term of the objective \cref{19a} can be reformulated as: \begin{subequations} \begin{alignat}{2} \bm{\alpha}^\top \mathbf{K}_{xx} \bm{\alpha} &= \text{\textbf{trace}}(\bm{\alpha}^\top \mathbf{K}_{xx} \bm{\alpha})\\ &= \text{\textbf{trace}}(\bm{\alpha} \bm{\alpha}^\top \mathbf{K}_{xx} )\\ &= \text{\textbf{trace}}(\mathbf{A} \mathbf{K}_{xx} )\\ &= \mathbf{A} \bullet \mathbf{K}_{xx} \end{alignat} \end{subequations} and similarly for the second term: \begin{subequations} \begin{alignat}{2} \bm{\alpha}^\top \mathbf{K}_{xx} \bm{1}_N &= \text{\textbf{trace}}(\bm{\alpha}^\top \mathbf{K}_{xx} \bm{1}_N) \\ &= \text{\textbf{trace}}(\bm{\alpha} \bm{\alpha}^\top \mathbf{K}_{xx} \bm{1}_N\bm{1}_N^\top) \\ &= \mathbf{A} \bullet \mathbf{K}_{xx} \bm{1}_N\bm{1}_N^\top \end{alignat} \end{subequations} where $\bullet$ denotes the dot-product in matrix space defined as $\mathbf{A}\bullet\mathbf{K}_{xx} = \text{\textbf{trace}}(\mathbf{A}\mathbf{K}_{xx})$. They then extract all convex constraints from the condition $\mathbf{A}=\bm{\alpha}\bm{\alpha}^\top = [a_{ij}]_{i,j=1}^{N,N}$. The first is the entry-wise non-negativity $a_{ij} = \alpha_i \alpha_j \geqslant 0$ due to the entry-wise non-negativity of $\bm{\alpha}\in[0,1]^N$. The second is a consequence of the normalization $\bm{1}_N^\top\bm{\alpha}=1$, which can be expressed in $\mathbf{A}$ as $\bm{1}_N^\top\mathbf{A}\bm{1}_N=\bm{1}_N^\top\bm{\alpha}(\bm{1}_N^\top\bm{\alpha})^\top = 1$. The last is the symmetry $\mathbf{A}=\mathbf{A}^\top$, which holds by definition. Finally, the equality condition above is relaxed to $\mathbf{A}\succeq \bm{\alpha}\bm{\alpha}^\top$ and written in its Schur-complement form. As a result, the following formulation is a relaxation of \ref{eq:original-formulation:objective_a}--\ref{eq:original-formulation:inequality_a}, which is a \underline{q}uadratically \underline{c}onstrained \underline{q}uadratic \underline{p}rogram (QCQP): \begin{align} ~~\underset{\mathbf{A}}{\text{maximize}} ~~~~~ & \mathbf{A} \bullet \left(\mathbf{K}_{xx}-\frac{2}{N}\mathbf{K}_{xx}\bm{1}_N\bm{1}_N^\top \right) + \frac{1}{N^2} \bm{1}_N^\top\mathbf{K}_{xx}\bm{1}_N \label{eq:sdr-formulatoin:objective_a} \\ \text{subject to} ~~~~~ & \begin{bmatrix} \mathbf{A} & \mathbf{A}\bm{1}_N \\ \bm{1}_N^\top\mathbf{A} & 1 \\ \end{bmatrix} \;\succeq \;0 \quad \text{(positive semidefiniteness)} \label{eq:sdr-formulation:inequality:psd_a} \\ & \mathbf{A} \geqslant 0 ~\qquad\qquad\qquad \text{(entry-wise)} \label{eq:sdr-formulation:inequality:entry_a} \\ & \bm{1}_N^\top\mathbf{A}\bm{1}_N = 1 \label{eq:sdr-formulation:equality:normalization_a} \\ & \mathbf{A} = \mathbf{A}^\top \label{eq:sdr-formulation:equality:symmetry_a} \end{align} This problem has a convex (linear) objective with convex constraints and can be solved using existing packages such as \texttt{cvxpy} \cite{diamond2016cvxpy}. 
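For concreteness, the following is a minimal \texttt{cvxpy} sketch of the relaxed program above (our illustration, not the authors' released code; the Gram matrix \texttt{K} is assumed precomputed, and the optional cap \texttt{b\_alpha} corresponds to the supremum-norm regularization introduced in \cref{sub:practical-considerations}):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_sdr_problem1(K, b_alpha=None):
    """SDR of Problem 1: maximize A . (K - (2/N) K 1 1^T) subject to
    the Schur-complement PSD relaxation, entry-wise non-negativity,
    and 1^T A 1 = 1. Returns alpha ~= A_sdr @ 1."""
    N = K.shape[0]
    ones = np.ones((N, 1))
    A = cp.Variable((N, N), symmetric=True)  # lifted variable A = alpha alpha^T
    C = K - (2.0 / N) * K @ ones @ ones.T    # linear objective coefficient
    obj = cp.Maximize(cp.trace(A @ C))       # constant (1/N^2) 1^T K 1 dropped
    cons = [
        cp.bmat([[A, A @ ones],
                 [ones.T @ A, np.ones((1, 1))]]) >> 0,  # relaxed A >= alpha alpha^T
        A >= 0,                                         # entry-wise
        cp.sum(A) == 1,                                 # 1^T A 1 = 1
    ]
    if b_alpha is not None:
        # since A >= 0, row 1-norms equal row sums, so the matrix
        # supremum norm ||A||_inf reduces to the inf-norm of row sums
        cons.append(cp.norm(cp.sum(A, axis=1), "inf") <= b_alpha)
    cp.Problem(obj, cons).solve()
    return np.asarray(A.value @ ones).ravel()  # recovered weight vector
\end{verbatim}
Declaring $\mathbf{A}$ symmetric enforces constraint \ref{eq:sdr-formulation:equality:symmetry_a} implicitly; when the returned $\mathbf{A}^{\text{SDR}}$ is rank one, the recovered $\bm{\alpha}$ is optimal for the original problem.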
\paragraph{Problem 2:}\emph{Given two sets of samples $\{\bm{x}_n\}_{n=1}^N$ and $\{\bm{\tilde{x}}_m\}_{m=1}^M$ from the two distributions $p_{x,N}$ and $p_{\tilde{x},M}$, respectively, with the corresponding random variables $x,\tilde{x}\in\mathbb{X}$, find the weight vector $\tilde{\bm{\alpha}}\in[0,1]^M$ that renders the mixture distribution $p_{\tilde{x},M}^{\tilde{\bm{\alpha}}}$ maximally distinct from $p_{x,N}$} w.r.t the discrepancy measure $\text{MMD}_{k_\mathbb{X}}$. This problem can be formalized as \begin{subequations} \begin{align} ~~\underset{\tilde{\bm{\alpha}}}{\text{maximize}} ~~~~~ & \text{MMD}^2_{k_\mathbb{X}}(p_{\tilde{x},M}^{\tilde{\bm{\alpha}}},\,p_{x,N}) \label{eq:original-formulation:objective_a2}\\ \text{subject to} ~~~~~ & \bm{1}_M^\top \tilde{\bm{\alpha}} = 1 \label{eq:original-formulation:equality_a2}\\ & \tilde{\bm{\alpha}} \geqslant 0 \;\; \text{(entry-wise) \label{eq:original-formulation:inequality_a2}} \end{align} \end{subequations} As in \ref{prob1_ref}, the objective can be reformulated as follows: \begin{subequations} \begin{align} \text{MMD}^2_{k_\mathbb{X}}(p_{\tilde{x},M}^{\tilde{\bm{\alpha}}},\,p_{x,N}) & = \left\|p_{\tilde{x},M}^{\tilde{\bm{\alpha}}}(\tilde{x})-p_{x,N}(x)\right\|^2_\mathcal{H}\\ & = \tilde{\bm{\alpha}}^\top \mathbf{K}_{\tilde{x}\tilde{x}} \tilde{\bm{\alpha}} - \frac{2}{N} \tilde{\bm{\alpha}}^\top \mathbf{K}_{\tilde{x}x} \bm{1}_N + \frac{1}{N^2} \bm{1}_N^\top \mathbf{K}_{xx} \bm{1}_N \end{align} \end{subequations} Similar to Problem 1, the objective terms can be rewritten as: \begin{subequations} \begin{align} \tilde{\bm{\alpha}}^\top \mathbf{K}_{\tilde{x}\tilde{x}} \tilde{\bm{\alpha}} &= \tilde{\mathbf{A}} \bullet \mathbf{K}_{\tilde{x}\tilde{x}} \end{align} \end{subequations} and similarly for the second term (note the dimensionality: with $\tilde{\mathbf{A}}\in\mathbb{R}^{M\times M}$, the lifted cross term must be an $M\times M$ matrix, hence $\bm{1}_N\bm{1}_M^\top$): \begin{subequations} \begin{align} \tilde{\bm{\alpha}}^\top \mathbf{K}_{\tilde{x}x} \bm{1}_N &= \tilde{\mathbf{A}} \bullet \mathbf{K}_{\tilde{x}x} \bm{1}_N\bm{1}_M^\top \end{align} \end{subequations} The constraints can be modified as in Problem 1. Hence, a relaxation of \ref{eq:original-formulation:objective_a2}--\ref{eq:original-formulation:inequality_a2} is formulated as: \begin{align} ~~\underset{\tilde{\mathbf{A}}}{\text{maximize}} ~~~~~ & \tilde{\mathbf{A}} \bullet \left(\mathbf{K}_{\tilde{x}\tilde{x}}-\frac{2}{N}\mathbf{K}_{\tilde{x}x}\bm{1}_N\bm{1}_M^\top \right) + \frac{1}{N^2} \bm{1}_N^\top\mathbf{K}_{xx}\bm{1}_N \label{eq:sdr-formulatoin:objective_a2} \\ \text{subject to} ~~~~~ & \begin{bmatrix} \tilde{\mathbf{A}} & \tilde{\mathbf{A}}\bm{1}_M \\ \bm{1}_M^\top\tilde{\mathbf{A}} & 1 \\ \end{bmatrix} \;\succeq \;0 \quad \text{(positive semidefiniteness)} \label{eq:sdr-formulation:inequality:psd_a2} \\ & \tilde{\mathbf{A}} \geqslant 0 ~\qquad\qquad\qquad \text{(entry-wise)} \label{eq:sdr-formulation:inequality:entry_a2} \\ & \bm{1}_M^\top\tilde{\mathbf{A}}\bm{1}_M = 1 \label{eq:sdr-formulation:equality:normalization_a2} \\ & \tilde{\mathbf{A}} = \tilde{\mathbf{A}}^\top \label{eq:sdr-formulation:equality:symmetry_a2} \end{align} which is a QCQP on the $M^2$ optimization variables in $\tilde{\mathbf{A}}=[\tilde{a}_{ij}]_{i,j=1}^{M,M}$. 
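The same solver sketch carries over to Problem 2; assuming the hypothetical helper above, only the objective coefficient matrix changes (our illustration):
\begin{verbatim}
import numpy as np

def problem2_coefficient(K_tt, K_tx):
    """Objective coefficient for Problem 2: the quadratic term uses
    the M x M Gram matrix on the weighted set, while the cross term
    uses the M x N Gram matrix against the reference set."""
    M, N = K_tx.shape
    ones_N, ones_M = np.ones((N, 1)), np.ones((M, 1))
    return K_tt - (2.0 / N) * K_tx @ ones_N @ ones_M.T  # M x M
\end{verbatim}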
\subsection{Practical Considerations} \label{sub:practical-considerations} While Problem 1 tends to construct setups with maximal disparity from the given empirical distribution $p_{x,N}$, we are not necessarily interested in such extreme scenarios as long as the variations are non-negligible, so that they reveal dependencies between the marginal and the conditional distributions in the acausal direction. Therefore, for practical considerations, we would oftentimes prefer a sub-optimal, yet more appealing, solution over the optimal solution of Problem 1. Such practical aspects are discussed in the sequel. \textbf{Scalability:} one directly notes that the SDR formulation \ref{eq:sdr-formulatoin:objective}--\ref{eq:sdr-formulation:equality:symmetry} hardly scales to larger datasets, since the dimensionality of the optimization space is quadratic in the number of data points $N$ (as a result of the lifting step). Therefore, we rather restrict the weighted distribution $p_{\cdot,N}^{\bm{\alpha}}$ to a reasonable number of samples $M<N$ drawn randomly from the original dataset. This is denoted henceforth by $p_{\cdot,M}$ for the $M$-sample subset and $p_{\cdot,M}^{\tilde{\bm{\alpha}}}$ for the weighted version thereof. The size of the reference empirical distribution $p_{\cdot,N}$ (2\textsuperscript{nd} argument of \cref{eq:mmd:weighted:estimate}) does not affect the dimensionality of the optimization problem and, thus, can grow as needed within the Gram matrix computational limits. \begin{wrapfigure}[21]{r}{0.45\textwidth} \centering \vspace{-0.55cm} \includegraphics[width=6.2cm]{outliers} \vspace{-0.1cm} \caption{An illustrative example of solving Problem 1 on a 2D Gaussian dataset. The true distribution is $p_x=\mathcal{N}(\bm{0}, \bm{1})$ from which $N=100$ samples are depicted in grey. Purple markers represent the weights $\bm{\alpha}$ of the weighted distribution $p_{x,100}^{\bm{\alpha}}$. } \label{fig:outliers} \end{wrapfigure} \textbf{Dirac Distributions:} an artifact of the choice of the discrepancy measure (and the formulation of Problem 1) is that attainable solutions to \ref{eq:sdr-formulatoin:objective}--\ref{eq:sdr-formulation:equality:symmetry} are in practice Dirac-like probability measures, in the sense that $\left\|\bm{\alpha}\right\|_\infty\sim1$, where $\left\|\cdot\right\|_\infty$ is the supremum norm. One can avoid such extreme scenarios by augmenting the optimization problem with regularizing constraints such as \begin{align} \left\| \mathbf{A} \right\|_\infty \leqslant b_{\alpha} \label{eq:augment:alpha-max} \end{align} with the supremum norm of a matrix given by $\left\| \mathbf{A} \right\|_\infty \coloneqq \max_i \left\| \mathbf{a}_{i\cdot}\right\|_1$, which directly constrains the maximum probability mass allowed on a single data point; $b_\alpha\in[1/M,\,1.0]$ becomes a hyper-parameter in our framework. \Cref{fig:outliers} illustrates the effect of this regularization constraint on a 2D sample set drawn from a standard Normal distribution. Likewise, one can constrain the maximum deviation from the uniform mixture as in \begin{equation} \text{MMD}_k^2\left(p_{\cdot,M}^{\tilde{\bm{\alpha}}}, p_{\cdot, M}\right) \;\leqslant\; \text{MMD}^2\left(p_{\cdot,M}, p_{\cdot,N}\right) + b_D \label{eq:augment:mmd-uniform-max} \end{equation} where $b_D$ is a slack variable, and the l.h.s. is a linear function of the optimization variable $\mathbf{A}$, similar to Eq. \ref{eq:sdr-formulatoin:objective} but with a different Gram matrix. 
Given the convexity of both regularization constraints above, \cref{eq:augment:alpha-max,eq:augment:mmd-uniform-max}, the SDR formulation \ref{eq:sdr-formulatoin:objective}--\ref{eq:sdr-formulation:equality:symmetry} remains a convex optimization problem if augmented with either of these constraints. \textbf{SDR Relaxation:} a solution $d_\mathbb{X}^\text{sdr}$ obtained from the SDR formulation is a lower bound on the optimal value of the original formulation \ref{eq:original-formulation:objective}--\ref{eq:original-formulation:inequality} that is tight only if the rank one condition $\mathbf{A}=\bm{\alpha}\bm{\alpha}^\top$ is satisfied \cite{park2017general}. Yet, the rank-one condition is not guaranteed, and is even unlikely to be satisfied as additional constraints (e.g., \cref{eq:augment:alpha-max,eq:augment:mmd-uniform-max}) are included in the optimization problem. Practically, however, $\bm{\alpha} \simeq \mathbf{A}^{\text{SDR}}\bm{1}$ remains a reasonable estimate of the weights of the weighted empirical, and notably outperforms naive baselines (e.g. drawing random subsets). \textbf{Disagreement Bias:} in the second step of our identification framework, we quantify the disparity between two models (e.g., $\hat{f}_{y|x}$ and $\hat{f}_{y|x}^{\bm{\alpha}}$) via their MMD-based disagreement on a common input distribution. However, for some model classes (e.g., neural networks) such an approach is likely to be biased. In fact, it was observed recently that two identical neural network classifiers would disagree even when trained on identical data, as long as a randomization factor plays a role (i.e. different initial weights, batching, data shuffling, or different random seeds in general) \cite{nakkiran2020distributional,jiang2021assessing}. It was further conjectured that this sort of disagreement correlates with the generalization performance of the classifier. Our empirical observations extend the claims of \cite{nakkiran2020distributional,jiang2021assessing} to regression problems with MMD as a disagreement metric. Since all our models are trained on limited data, they are likely to disagree (i.e. generalize poorly) even if the training distributions were identical. This \emph{disagreement bias} is not accounted for in our work, and is left as an open question for future contribution. \Cref{fig:trend} depicts an example of such a bias in the non-zero disagreement score $S_{y\to x}$ even though the genuine causal direction is indeed $y\to x$. \begin{wrapfigure}[21]{r}{0.45\textwidth} \centering \vspace{-0.55cm} \includegraphics[width=6.2cm]{trend} \vspace{-0.2cm} \caption{An illustration of the behaviour of the disagreement scores $S_{x\to y}$ (upper) and $S_{y\to x}$ (lower) for different values of the hyper-parameter $b_\alpha$ where the true causal structure is $y\to x$. Example from the 1\textsuperscript{st} pair of the \texttt{SIM} dataset \cite{cause_effect_moiij}.} \label{fig:trend} \end{wrapfigure} \textbf{Trend as a Score:} the final decision criterion, that is, comparing the MMD-based disagreement scores, implicitly imposes a strong assumption on the data spaces, $\mathbb{X}\equiv\mathbb{Y}$, and similarly on the kernels, $k_{\mathbb{X}}\equiv k_{\mathbb{Y}}$ (admittedly, this has been an implicit assumption in numerous previous works, e.g., roughly all approaches relying on regression performance). At the expense of additional computational demands, one can circumvent this limitation with the following observation. 
It is observed, and also intuitive, that the attainable solution to \ref{eq:sdr-formulatoin:objective}--\ref{eq:sdr-formulation:equality:symmetry} augmented with \ref{eq:augment:alpha-max} is monotonic in the hyper-parameter $b_\alpha$ (refer to \cref{appendix:experiment-setup} for an illustrative example). According to ICM, repeating the optimization problem with increasing values of $b_\alpha$ is likely to be reflected in an increasing trend of the disagreement score in the acausal direction. In the causal direction, however, the disagreement score is expected to remain roughly constant. Treating these disagreement scores as functions of the regularization hyper-parameter (e.g., linearly regressing $S_{.}$ on $b_\alpha$ for different solutions of the optimization problem) gives an alternative decision criterion (e.g., the trend of these regression lines) that is independent of the data spaces, kernels, and kernel hyper-parameters. This is briefly illustrated in \cref{fig:trend} (and a similar effect can be observed w.r.t the number of samples $M$), but is not thoroughly investigated in this work, and is rather left as another open point for future contribution. Interestingly, and also left open for future work, using this decision mechanism may also mitigate the causal sufficiency assumption, leading to broader identifiability. \subsection{Identifiability} \label{sub:vcei-identifiability} The proposed VCEI framework is viewed as a practical realization of the ICM principle, and thus inherits all identifiability limitations of that postulate. When viewed through, e.g., Kolmogorov complexities, $K(p_x) + K(p_{y|x}) \leq K(p_y) + K(p_{x|y})$ if $x\to y$, as formulated by \citet{janzing2010causal}, one directly notes a limitation of ICM-based frameworks, namely when equality occurs and thus the ICM-based asymmetry vanishes. The asymmetry vanishes if the underlying system can be described with the same functional form and distributional families in either direction \citep{mitrovic2018causal}. Very common examples thereof are linear models with additive Gaussian noise \citep{hoyer2008nonlinear}. Loosely speaking, the identifiability of ICM-based frameworks increases with increasing non-linearity of the functional form, smaller noise effects \citep{mooij2016distinguishing}, and less (or no) confounding bias. In addition, and as stated earlier, we assume the existence of a causal link and causal sufficiency. The former, however, can be mitigated with an independence test. The latter can also be mitigated with the use of disagreement trends rather than single scores (as discussed in the preceding subsection), where confounding may lead to a positive trend in either direction, but the trend is expected to be more observable (i.e. steeper) in the acausal direction. \subsection{Artificially Generated Experimental Setups} \label{sub:artificial-setups} In this step, we propose an approach to introduce variations to the marginal distributions. For simplicity, though, we will describe our approach for the first random variable $x$, but it should be clear that this step takes place once for each covariate. It should also be noted that such variations are intended to reveal potential dependencies between the marginal and the corresponding conditional, and do not necessarily retain dynamics similar to an \emph{intervention}. 
Given $\mathcal{D}_x$ with their unknown marginal $p_x$, we define the \emph{empirical distribution} on these samples to be the uniform mixture of the Dirac delta distributions $\delta_{x_n}$ defined on each sample individually: \begin{equation} p_{x,N}({x}) = \frac{1}{N} \sum_{n=1}^N \delta({x}-{x}_n) = \frac{1}{N}\sum_{n=1}^N \delta_{{x}_n}(x) \end{equation} which is a probability density function with the corresponding empirical cumulative distribution function $F_{x,N}(x)$ (eCDF) defined on the sample set as $ F_{x,N}({x}) = \frac{1}{N} \sum_{n=1}^{N} \mathds{1}_{{x}_n \,\leq\, {x}} $ where $\mathds{1}_{(\cdot)}$ is the indicator function and the inequality is to be understood entry-wise \citep{scott1992multivariate}. A generalization of the empirical distribution is a weighted mixture of the constituent Dirac distributions $\delta_{x_n}$, which we will denote by $p_{x,N}^{\bm{\alpha}}$ and define as (see \cref{appendix:empirical-distribution} for a brief discussion on this modelling choice): \begin{equation} p_{x,N}^{\bm{\alpha}}({x}) = \sum_{n=1}^N \alpha_n\delta_{{x}_n} \end{equation} where $\bm{\alpha} = [\alpha_n]_{n=1}^N \in [0,1]^{N\times1}$ is a non-negative weight vector satisfying $\bm{1}^\top \bm{\alpha} = 1$ where $\bm{1}$ is the all-ones vector. From \cref{eq:mmd:empirical:biased}, the MMD between the empirical distribution $p_{x,N}$ and the weighted version thereof $p_{x,N}^{\bm{\alpha}}$ becomes: \begin{equation} \text{MMD}_k^2(p_{x,N}^{\bm{\alpha}},p_{x,N}) \simeq \bm{\alpha}^\top \mathbf{K}_{xx} \bm{\alpha} - \frac{2}{N} \bm{\alpha}^\top \mathbf{K}_{xx} \bm{1} + \frac{1}{N^2} \bm{1}^\top \mathbf{K}_{xx} \bm{1} \label{eq:mmd:weighted:estimate} \end{equation} where $\mathbf{K}_{xx}=[k(x_i,x_j)]_{i,j=1}^N$ is the Gram matrix of the kernel $k$ on the sample set $\mathcal{D}_x$. With this defined, and with the objective of introducing a non-negligible variation to the marginal of ${x}$, we are interested in solving the following problem: \paragraph{Problem 1}\label{prg:problem_1}\emph{Given a set of samples $\{{x}_n\}_{n=1}^N$, find the weight vector $\bm{\alpha}$ that renders the mixture distribution $p_{x,N}^{\bm{\alpha}}$ maximally distinct from $p_{x,N}$ in some discrepancy measure $D(\cdot, \cdot)$.} For analytical tractability, we will mainly consider the (MMD) metric\footnote{While we introduce our framework based on the MMD metric, similar relaxations or heuristics \cite{park2017general} can be applied to render Problem 1 a convex optimization problem for other discrepancy measures. This is, however, outside the scope of this contribution.} w.r.t a positive definite kernel function $k_{\mathbb{X}}:\mathbb{X}^2\rightarrow\mathbb{R}$. By adopting a kernel-based approach, we mask the data space (in the sense that the data space, along with its type and dimensionality, is subsumed in the kernel design/function) with an appropriately chosen kernel function $k_\mathbb{X}$, rendering our VCEI framework widely applicable to various data types\footnote{For instance, in inferring summary graphs of temporal data using a timeseries kernel, or an embedding+kernel design for e.g. natural languages.} as opposed to e.g., regression-based identification frameworks. 
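A minimal \texttt{numpy} sketch of the estimator in \cref{eq:mmd:weighted:estimate} follows (our illustration; a squared exponential kernel merely stands in for $k$):
\begin{verbatim}
import numpy as np

def rbf_gram(X, Z, ls=1.0):
    """Squared exponential Gram matrix k(x,z) = exp(-||x-z||^2 / (2 ls^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ls ** 2))

def weighted_mmd_sq(K, alpha):
    """Estimate of MMD^2(p^alpha_{x,N}, p_{x,N}) given the Gram matrix K
    on D_x and a weight vector alpha (non-negative, summing to one)."""
    N = K.shape[0]
    u = np.full(N, 1.0 / N)  # uniform empirical weights
    return alpha @ K @ alpha - 2.0 * alpha @ K @ u + u @ K @ u

# sanity check: the uniform weight vector yields MMD^2 = 0
X = np.random.default_rng(0).normal(size=(50, 2))
K = rbf_gram(X, X)
assert abs(weighted_mmd_sq(K, np.full(50, 0.02))) < 1e-12
\end{verbatim}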
Based on the squared MMD as a discrepancy measure, Problem 1 can be formally stated as: \begin{align} ~~\underset{\bm{\alpha}}{\text{maximize}} ~~~~~ & \text{MMD}^2_{k_\mathbb{X}}(p_{x,N}^{\bm{\alpha}},\,p_{x,N}) \label{eq:original-formulation:objective}\\ \text{subject to} ~~~~~ & \bm{1}^\top \bm{\alpha} = 1 \label{eq:original-formulation:equality}\\ & \bm{\alpha} \geqslant 0 \;\; \text{(entry-wise) \label{eq:original-formulation:inequality}} \end{align} In spite of the convexity of the objective (since MMD is jointly convex in both arguments, as can be deduced from \cref{eq:mmd:norm}) and the linearity of both constraints, the optimization problem remains non-convex. This is due to the fact that the convex objective is being maximized rather than minimized, which renders the objective a concave function in the standard form of a convex optimization problem. Noting that the closed-form estimator of the squared MMD is also quadratic in the optimization variable $\bm{\alpha}$ (see \cref{eq:mmd:weighted:estimate}), \citet{park2017general} address this problem in a two-step procedure referred to as \underline{s}emi\underline{d}efinite \underline{r}elaxation (SDR). They first \emph{lift} the problem to a higher dimensional space by defining $\mathbf{A}=\bm{\alpha}\bm{\alpha}^\top$ in which the objective function becomes linear, then apply a convex \emph{relaxation} to the intractable constraints. As a result, the following formulation is a relaxation of \ref{eq:original-formulation:objective}--\ref{eq:original-formulation:inequality} (see \cref{appendix:derivation} for a derivation), which is a \underline{q}uadratically \underline{c}onstrained \underline{q}uadratic \underline{p}rogram (QCQP) that can make use of off-the-shelf convex optimization tools\footnote{For instance, we used the open-source library \texttt{cvxpy} \cite{diamond2016cvxpy} for all experiments.}: \begin{align} ~~\underset{\mathbf{A}}{\text{maximize}} ~~~~~ & \mathbf{A} \bullet \left(\mathbf{K}_{xx}-\frac{2}{N}\mathbf{K}_{xx}\bm{1}\bm{1}^\top \right) + \frac{1}{N^2} \bm{1}^\top\mathbf{K}_{xx}\bm{1} \label{eq:sdr-formulatoin:objective} \\ \text{subject to} ~~~~~ & \begin{bmatrix} \mathbf{A} & \mathbf{A}\bm{1} \\ \bm{1}^\top\mathbf{A} & 1 \\ \end{bmatrix} \;\succeq \;0 \quad \text{(positive semidefiniteness)} \label{eq:sdr-formulation:inequality:psd} \\ & \mathbf{A} \geqslant 0 ~\qquad\qquad\qquad \text{(entry-wise)} \label{eq:sdr-formulation:inequality:entry} \\ & \bm{1}^\top\mathbf{A}\bm{1} = 1 \label{eq:sdr-formulation:equality:normalization} \\ & \mathbf{A} = \mathbf{A}^\top \label{eq:sdr-formulation:equality:symmetry} \end{align} where $\mathbf{K}_{xx}=\left[k_{\mathbb{X}}({x}, \tilde{{x}})\right]_{x,\tilde{x}\in\mathcal{D}_x}$ is the Gram matrix, and $\bullet$ denotes the dot-product in matrix space defined as $\mathbf{A}\bullet\mathbf{K}_{xx} = \text{\textbf{trace}}(\mathbf{A}\mathbf{K}_{xx})$. The solution $\mathbf{A}^{\text{SDR}}$ to \ref{eq:sdr-formulatoin:objective}--\ref{eq:sdr-formulation:equality:symmetry} is an optimal solution $\mathbf{A}^\star$ to the original formulation \ref{eq:original-formulation:objective}--\ref{eq:original-formulation:inequality} (i.e. $\mathbf{A}^{\text{SDR}} \equiv \mathbf{A}^\star$) if the condition $\mathbf{A}^\star=\bm{\alpha}^\star\bm{\alpha}^{\star\top}$ is satisfied (i.e. 
if $\mathbf{A}^\text{SDR}$ is rank one, which will be the case if $\mathbf{A}^{\text{SDR}}$ is a feasible solution to \ref{eq:original-formulation:objective}--\ref{eq:original-formulation:inequality} \cite{park2017general})\footnote{In \cref{sub:practical-considerations}, we discuss situations in which $\mathbf{A}^{\text{SDR}}$ is not a rank one matrix.}. In this case, the distribution weights can be recovered as $\bm{\alpha}^\star=\mathbf{A}^\star\bm{1}$. With the solution to Problem 1, we obtain a new marginal $p_{x,N}^{\bm{\alpha}^\star}$ that is constructed from the passively obtained observational data $\mathcal{D}_x$ and is maximally distinct from the original marginal $p_x$. Finally, this optimization is performed on the second covariate $y$ to obtain a weighted marginal $p_{y,N}^{\bm{\beta}}$ with weight vector $\bm{\beta}\in[0,1]^{N\times1}$ that is maximally distinct from $p_{y,N}$. \subsection{Quantifying the Impact of Distributional Variations} \label{sub:quantifying-impact} In the second step, we quantify the impact of the artificially generated variations (i.e. between the marginals $p_{x,N}$ and $p_{x, N}^{\bm{\alpha}}$ and similarly between $p_{y,N}$ and $p_{y, N}^{\bm{\beta}}$) on the conditionals $p_{y|x}$ and $p_{x|y}$, respectively. This can be achieved by fitting predictive models to each of these settings, leading to the two models $\hat{f}_{y|x}$ and $\hat{f}_{y|x}^{\bm{\alpha}}$ in the $x\to y$ direction, and $\hat{g}_{x|y}$ and $\hat{g}_{x|y}^{\bm{\beta}}$ in the opposite direction. Each model is attainable from a model class $\mathcal{M}_{x\to y}$ or $\mathcal{M}_{y\to x}$ with their corresponding training paradigms $\text{Train}_{\mathcal{M}_{x\to y}}[\cdot]$ and $\text{Train}_{\mathcal{M}_{y\to x}}[\cdot]$. In order to fit a predictive model on a weighted empirical distribution, e.g. $p_{x,N}^{\bm{\alpha}}$, the corresponding weights can be treated as sample weights, provided the training paradigm $\text{Train}_{\mathcal{M}_\cdot}[\cdot]$ supports sample importance\footnote{Alternatively, model fitting can be preceded by a re-sampling step.} (see, for example, \cite{wen2018weighted} for a weighted Gaussian Process (GP) model or \cite{steininger2021density} for neural networks). ICM postulates that, if $x\to y$ is the true causal direction of the data generation process, then the impact of the introduced variations on the $\hat{g}$ models is likely to be more apparent. We quantify this impact via model disagreement on a (potentially unlabeled) set \cite{nakkiran2020distributional}, which is in turn quantified as the MMD discrepancy between the models' predictions on a common set: \begin{equation} S_{x\to y} = \text{MMD}_{k_\mathbb{Y}}^2 \left( \hat{f}_{y|x}(x), \hat{f}_{y|x}^{\bm{\alpha}}(x) \right) \end{equation} where $x\sim p_x(x)$ (which empirically could simply be all samples in $\mathcal{D}_x$ or a random subset thereof), and similarly for $S_{y\to x}$. Finally, the lower of the two scores\footnote{Similarly, see \cref{sub:practical-considerations} for a discussion of the implicit assumptions this decision criterion entails.} $S_{x\to y}$ and $S_{y\to x}$ is an indicator of a lesser impact on the conditionals, and in turn of the genuine causal direction. An overview of the VCEI framework for identical data spaces is presented in \cref{alg:VCEI}. 
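To make this scoring step concrete, the following is a minimal sketch in one direction (our illustration; kernel ridge regression merely stands in for a model class whose training paradigm supports sample weights, and an RBF kernel for $k_\mathbb{Y}$):
\begin{verbatim}
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def score_x_to_y(X, y, alpha_w, ls=1.0):
    """S_{x->y}: squared-MMD disagreement between a model trained on the
    uniform empirical p_{xy,N} and one trained with sample weights alpha.
    X: (N, d) inputs, y: (N,) targets, alpha_w: (N,) weight vector."""
    f = KernelRidge(kernel="rbf").fit(X, y)
    f_a = KernelRidge(kernel="rbf").fit(X, y, sample_weight=alpha_w)
    p, q = f.predict(X), f_a.predict(X)  # predictions on a common set
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ls ** 2))
    return k(p, p).mean() - 2 * k(p, q).mean() + k(q, q).mean()

# decision rule: the direction with the smaller score is reported causal,
# e.g. "x -> y" if score_x_to_y(...) < the analogous score_y_to_x(...)
\end{verbatim}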
\begin{algorithm}[!t] \caption{Variation-based cause-effect identification (VCEI) on identical data spaces $\mathbb{X}\equiv\mathbb{Y}$} \label{alg:VCEI} \begin{algorithmic} \REQUIRE $\mathcal{D}=\{(x_n, y_n)\}_{n=1}^N$, a kernel function $k$, model classes $\mathcal{M}_{x \to y}$, $\mathcal{M}_{y \to x}$, corresponding training paradigms $\text{Train}_{\mathcal{M}_{x\to y}}[\cdot]$ and $\text{Train}_{\mathcal{M}_{y\to x}}[\cdot]$, and a regularization parameter $b_{\alpha}$. \ENSURE $\mathbb{X} \equiv \mathbb{Y}$ (where $x\in\mathbb{X}$ and $y\in\mathbb{Y}$) \STATE \textbf{Estimate }$S_{x\to y}$\textbf{:} Solve SDR of Problem 1 (\Cref{eq:sdr-formulatoin:objective}--\ref{eq:sdr-formulation:equality:symmetry} and \ref{eq:augment:alpha-max}) in $\mathcal{D}_x$ to estimate $\bm{\alpha}$ \STATE $\hat{f}_{y|x} \leftarrow \text{Train}_{\mathcal{M}_{x \to y}}\left[p_{xy,N}\right]$ \STATE $\hat{f}_{y|x}^{\bm{\alpha}} \leftarrow \text{Train}_{\mathcal{M}_{x \to y}}\left[p_{xy,N}^{\bm{\alpha}}\right]$ \STATE $S_{x\to y} \leftarrow \text{MMD}_k^2\left(\hat{f}_{y|x}(p_{x,N}), \hat{f}_{y|x}^{\bm{\alpha}}(p_{x,N})\right)$ \STATE \textbf{Estimate }$S_{y\to x}$\textbf{:} Solve SDR of Problem 1 (\Cref{eq:sdr-formulatoin:objective}--\ref{eq:sdr-formulation:equality:symmetry} and \ref{eq:augment:alpha-max}) in $\mathcal{D}_y$ to estimate $\bm{\beta}$ \STATE $\hat{g}_{x|y} \leftarrow \text{Train}_{\mathcal{M}_{y\to x}}\left[p_{xy,N}\right]$ \STATE $\hat{g}_{x|y}^{\bm{\beta}} \leftarrow \text{Train}_{\mathcal{M}_{y\to x}}\left[p_{xy,N}^{\bm{\beta}}\right]$ \STATE $S_{y\to x} \leftarrow \text{MMD}_k^2\left(\hat{g}_{x|y}(p_{y,N}), \hat{g}_{x|y}^{\bm{\beta}}(p_{y,N})\right)$ \STATE $~~~$\textbf{Return:} $``x\to y``$ \textbf{if} $S_{x\to y} < S_{y \to x}$ \textbf{otherwise} $``y\to x``$ \end{algorithmic} \end{algorithm} \section{Variation-based Cause Effect Identification} \label{sec:method} In this section, we introduce our \underline{v}ariation-based \underline{c}ause \underline{e}ffect \underline{i}dentification (VCEI) framework, a two-step procedure performed at least once in each direction of a bivariate system to infer the genuine causal structure from a single observational setting. Hypothesizing that the underlying causal structure is $x\to y$, the first step of VCEI is to introduce artificial variations to the marginal distribution $p_x$ (see \cref{sub:artificial-setups}). In the second step, we quantify the impact of these variations on the conditional $p_{y|x}$ (see \cref{sub:quantifying-impact}). According to the ICM postulate, variations on $p_x$ are expected to have minimal impact on the conditional $p_{y|x}$ in the genuine causal direction. \paragraph{Notation:} let $\mathcal{D}=\{({x}_n, {y}_n)\}_{n=1}^N$ denote a set of $N$ i.i.d. samples passively obtained, i.e. in an observational setting $p_{xy}$, from a bivariate system, where $x\in\mathbb{X}$ and $y\in\mathbb{Y}$ are two random variables following the marginals $p_x$ and $p_y$, respectively. Let further $\mathcal{D}_{{x}}=\{{x}_n\,|\,({x}_n, {y}_n) \in \mathcal{D}\}$ denote the $x$-covariate view of the dataset, and likewise for $\mathcal{D}_y$. \input{method-optimizatoin.tex} \input{method-quanitification.tex} \input{method-considerations.tex} \input{method-identifiability.tex} \section{Preliminaries} \label{sec:preliminaries} \paragraph{Assumptions:} \label{pg:assumptions} We will consider a bivariate system $(x,y)$ for cause-effect inference from an observational setting. 
In such a system, we assume acyclicity and the existence of a causal link (i.e. either $x\to y$ or $y\to x$). We additionally assume \emph{causal sufficiency} in the sense that all relevant covariates are observed. \paragraph{Independence of Causal Mechanisms (ICM):} \label{pg:icm} Our identification framework relies principally on the ICM concept \cite{sgouritsa2016inference, peters2017elements}, which postulates that the genuine data generation process decomposes into \emph{independent} modules that neither inform nor influence each other. Such independence will not necessarily hold (and in practice is unlikely to hold) in acausal decompositions. In a bivariate causal graph $x\to y$ with a joint distribution $p_{xy}$, ICM implies \emph{independence} between the marginal $p_x$ and the conditional $p_{y|x}$, which shall henceforth be denoted by $p_{y|x} \perp p_x$. ICM induces an asymmetry in bivariate systems that has been leveraged in several causal inference approaches \cite{mooij2009regression,janzing2010causal,stegle2010probabilistic,janzing2012information,daniusis2012inferring,scholkopf2012causal,sgouritsa2016inference,kocaoglu2017entropic,marx2017telling,tagasovska2018distinguishing,blobaum2018cause,budhathoki2018origo,marx2021formally}. \citet{janzing2010causal} formulated this notion of \emph{independence} in terms of Kolmogorov complexities \cite{kolmogorov1968three} of the constituent distributions. Many works thereafter relied on the \underline{m}inimum \underline{d}escription \underline{l}ength (MDL) \cite{rissanen1978modeling} as a proxy for the intractable Kolmogorov complexity \cite{budhathoki2017mdl,budhathoki2018origo,marx2018causal,mitrovic2018causal,tagasovska2018distinguishing,kalainathan2019generative,marx2019identifiability}. \paragraph{Maximum Mean Discrepancy (MMD):} \label{pg:mmd} For analytical tractability, we will mainly consider the kernel-based MMD as a metric of disparity between distributions \cite{gretton2008kernel,gretton2012kernel}. Given a kernel $k$, the MMD can be expressed as the norm in a \underline{r}eproducing \underline{k}ernel \underline{H}ilbert \underline{s}pace (RKHS) $\mathcal{H}$ between the kernel embeddings of the distributions $p$ and $q$: \begin{align} \text{MMD}^2_k(p, q) &= \left \| \mu_p - \mu_q \right \|^2_{\mathcal{H}} \label{eq:mmd:norm} \end{align} where $\mu_p$ and $\mu_q$ are the mean embeddings of $p$ and $q$, respectively, in the Hilbert space $\mathcal{H}$ through the feature mapping $k(x, \cdot)$. From a practical perspective, the squared MMD has an analytically tractable empirical estimator of a quadratic form given by: \begin{equation} \text{MMD}_k^2(p, q) \simeq \frac{1}{N^2} \sum_{i,j=1}^{N} k(x_i, x_j) - \frac{2}{NM} \sum_{i,j=1}^{N,M} k(x_i, y_j) + \frac{1}{M^2} \sum_{i,j=1}^{M} k(y_i, y_j) \label{eq:mmd:empirical:biased} \end{equation} with $\{x_i\}_{i=1}^N$ and $\{y_i\}_{i=1}^M$ being finite sample sets drawn from $p$ and $q$, respectively \cite{sriperumbudur2009integral,gretton2012kernel}. This efficient estimator renders MMD practically appealing for various applications, amongst which is causal discovery \cite{goudet2017learning,baumann2020identifying,gao2021dag}. \section{Related work} \label{sec:related-work} In this section, we briefly review relevant work on causal discovery in bivariate systems. The intent is not to provide an extensive review (for which the interested reader is referred to e.g.
\cite{mooij2016distinguishing} specifically for cause-effect identification or \cite{vowels2021d} for a more recent review on causal discovery). Rather, we review works that notably share similarities and analogies with our proposed framework, in order to highlight and emphasize our contributions. Work on causal discovery started with conditional independence tests \cite{spirtes2000causation,sun2007distinguishing,pearl2009causality}, which fell short in bivariate cause-effect identification scenarios due to the lack of conditioning covariates. Lines of work that addressed this problem postulated a sort of inherent asymmetry in the cause-effect relationship. Examples are the functional and distributional asymmetries proposed by the early works in this direction \cite{shimizu2006linear,hoyer2008nonlinear,mooij2009regression,zhang2012identifiability}. Contrary to these frameworks, our proposed approach does not impose functional or distributional constraints on the causal relationship. A different aspect of asymmetry is the ICM postulate, on which numerous cause-effect identification frameworks have relied \cite{sgouritsa2016inference,mooij2009regression,janzing2010causal,stegle2010probabilistic,janzing2012information,daniusis2012inferring,scholkopf2012causal,kocaoglu2017entropic,marx2017telling,tagasovska2018distinguishing,blobaum2018cause,budhathoki2018origo,marx2021formally,budhathoki2017mdl,kalainathan2019generative,marx2018causal,marx2019identifiability,mitrovic2018causal}, mainly utilizing the MDL as a proxy in place of the intractable Kolmogorov complexities. Yet, most of these works are limited to specific data spaces, e.g. numeric data for regression-based frameworks \cite{sgouritsa2016inference,mooij2009regression,tagasovska2018distinguishing,marx2019identifiability}. Notable exceptions are works relying on kernel embeddings \cite{mitrovic2018causal,lopez2015towards}. Likewise, our contribution lifts all constraints on the data spaces via the adopted kernel-based MMD metric, except for a mild assumption on the choice of a characteristic kernel (discussed in \cref{sub:practical-considerations}). Kernel-based MMD was utilized as a loss function in \cite{goudet2017learning} for learning bivariate causal structures. Their approach relies on the simplicity of the functional relationship in the causal direction, which can thus be identified with a model class of limited capacity. The higher the model capacity, the less identifiable a causal structure would be to their model. In contrast, our framework is more robust to the model choice in the sense that it only requires a model class with sufficient capacity to learn the functional relationship in either direction equally well. Finally, and aside from bivariate systems, \citet{peters2016causal} proposed a causal discovery framework for scenarios of multiple experimental setups (including an observational one) with random, unknown interventions. Yet, in the case of a single observational setup, they introduce splitting of the dataset under predefined conditions to emulate an artificial scenario of multiple experimental setups. In spite of the distinction, their contribution was an inspiration for our proposed framework.
\section*{Introduction} \label{s:intro} In this paper we introduce generalized invariants of links, $H[R]$, $K[Q]$ and $D[T]$, based on the regular isotopy version of the Homflypt polynomial, the Kauffman polynomial and the Dubrovnik polynomial, respectively. The invariant $H[R]$ uses the Homflypt skein relation for crossings between different components and the Homflypt polynomial for evaluating on oriented knots. More precisely, we abstract the skein relation of the regular isotopy version of the Homflypt polynomial, $R$, and use it as the basis of a new skein algorithm comprising two computational levels: on the first level we only apply the skein relation between {\it mixed crossings}, that is, crossings of different components, so as to produce {\it unions of unlinked knots}. On the second level we evaluate the invariant $H[R]$ on unions of unlinked knots, by applying a new rule, which introduces a new variable and which is based on the evaluation of $R$ on unions of unlinked knots. The invariant $R$ is evaluated on unions of unlinked knots through its evaluation on individual knots. Thus the invariant $H[R]$ generalizes the Homflypt polynomial \cite{jo2,LM,HOMFLY,PT}. It can also be viewed as a generalization of the linking number. {\it This method generalizes the skein invariants Homflypt and Kauffman (Dubrovnik) to new invariants of links.} More precisely, let $\mathcal{L}$ denote the set of classical oriented link diagrams. Let also $L_+$ be an oriented diagram with a positive crossing specified and let $L_-$ be the same diagram but with that crossing switched. Let also $L_0$ indicate the same diagram but with the smoothing which is compatible with the orientations of the emanating arcs in place of the crossing. See (\ref{triple}). The diagrams $L_+, L_-, L_0$ comprise a so-called {\it oriented Conway triple}. \begin{equation}\label{triple} \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-1,-1)-- (-0.22,-0.22); \draw [line width=0.35mm](-1,1)--(0,0); \draw [line width=0.35mm] (0.22,0.22) -- (1,1)[->]; \draw [line width=0.35mm] (0,0) -- +(1,-1)[->]; \end{tikzpicture}} \qquad \qquad \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-1,-1)-- (0,0) ; \draw [line width=0.35mm] (-1,1)--(-0.22,0.22); \draw [line width=0.35mm] (0,0) -- (1,1)[->]; \draw [line width=0.35mm] (0.22,-0.22) -- +(.8,-.8)[->]; \end{tikzpicture}} \qquad \qquad \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2, mydeco/.style = {decoration = {markings, mark = at position #1 with {\arrow{>}}} }] \draw [line width=0.35mm, postaction = {mydeco=.6 ,decorate}] plot [smooth, tension=2] coordinates { (-1,.8) (0, 0.5) (1,.8)}; \draw [line width=0.35mm, postaction = {mydeco=.6 ,decorate}] plot [smooth, tension=2] coordinates { (-1,-.8) (0, -0.5) (1,-.8)}; \end{tikzpicture}} \end{equation} \[ L_+ \qquad \qquad L_- \qquad \qquad L_0 \] \noindent We then prove the following: \begin{thm} \label{hofr} Let $H (z,a)$ denote the regular isotopy version of the Homflypt polynomial and let $R (w,a)$ denote the same invariant but with a different indeterminate $w$ in place of $z$.
Then there exists a unique regular isotopy invariant of classical oriented links $H[R]: \mathcal{L} \rightarrow {\mathbb Z}[z, w, a^{\pm 1}, E^{\pm 1}]$, where $z, \, w , \, a$ and $E$ are indeterminates, defined by the following rules: \begin{enumerate} \item On crossings involving different components the following mixed skein relation holds: $$ H[R](L_+) - H[R](L_-) = z \, H[R](L_0), $$ where $L_+$, $L_-$, $L_0$ is an oriented Conway triple, \item For a union of $r$ unlinked knots, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$, with $r \geqslant 1$, it holds that: $$ H[R]({\mathcal K}^r) = E^{1-r} \, R({\mathcal K}^r). $$ \end{enumerate} \end{thm} We recall that the invariant $R(w,a)$ is determined by the following rules: \begin{enumerate} \item[(R1)] For $L_+$, $L_-$, $L_0$ an oriented Conway triple, the following skein relation holds: $$ R(L_+) - R(L_-) = w \, R(L_0), $$ \item[(R2)] The indeterminate $a$ is the positive curl value for $R$: $$ R ( \raisebox{-.1cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (-0.22,-0.22); \draw [line width=0.35mm ](-.7,.7)--(0,0); \draw [line width=0.35mm] (0.22,0.22) -- (.7,.7)[->]; \draw [line width=0.35mm] (0,0) -- +(.7,-.7)[->]; \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} \ ) = a \, R ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2, mydeco/.style = {decoration = {markings, mark = at position #1 with {\arrow{>}}} }] \draw [line width=0.35mm, postaction = {mydeco=.6 ,decorate}] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ) \quad \mbox{and} \quad R ( \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (0,0) ; \draw [line width=0.35mm] (-.7,.7)--(-0.22,0.22); \draw [line width=0.35mm] (0,0) -- (.7,.7)[->]; \draw [line width=0.35mm] (0.22,-0.22) -- +(.6,-.6)[->]; \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} ) = a^{-1} \, R (\raisebox{.06cm}{ \begin{tikzpicture}[scale=.2, mydeco/.style = {decoration = {markings, mark = at position #1 with {\arrow{>}}} }] \draw [line width=0.35mm, postaction = {mydeco=.6 ,decorate}] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ), $$ \item[(R3)] On the standard unknot: $$ R(\bigcirc) = 1. $$ We also recall that the above defining rules imply the following: \item[(R4)] For a diagram of the unknot, $U$, $R$ is evaluated by taking: $$ R(U) = a^{wr(U)}, $$ where $wr(U)$ denotes the writhe of $U$, instead of 1, which is the case in the ambient isotopy category. \item[(R5)] $R$, being the Homflypt polynomial, is multiplicative on a union of unlinked knots, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$. Namely, for $\eta := \frac{a - a^{-1}}{w}$ we have: $$ R({\mathcal K}^r) = \eta^{r-1} \Pi_{i=1}^r R(K_i). $$ \end{enumerate} Consequently, the evaluation of $H[R]$ on the standard unknot is $H[R](\bigcirc) = R(\bigcirc) = 1$. \smallbreak Assuming Theorem~\ref{hofr}, one can compute $H[R]$ on any given oriented link diagram $L$ by applying the following procedure: the skein rule (1) of Theorem~\ref{hofr} can be used to give an evaluation of $H[R](L_+)$ in terms of $H[R](L_-)$ and $H[R](L_0)$, or of $H[R](L_-)$ in terms of $H[R](L_+)$ and $H[R](L_0)$. We choose to switch mixed crossings so that the switched diagram is more unlinked than before. Applying this principle recursively we obtain a sum with polynomial coefficients and evaluations of $H[R]$ on unions of unlinked knots.
These knots are formed by the mergings of components caused by the smoothings in the skein relation (1). To evaluate $H[R]$ on a given union of unlinked knots we then use the invariant $R$ according to rule (2) of Theorem~\ref{hofr}. Note that the appearance of the indeterminate $E$ in rule (2) for $H[R]$ is the critical difference between $H[R]$ and $R$. Finally, formula (R5) above allows evaluations of the invariant $R$ on individual knotted components, and knowledge of $R$ provides the basis for this. For proving Theorem~\ref{hofr} one must prove that the resulting evaluation is independent of the choices made and invariant under regular isotopy moves. A good guide for this is the skein-theoretic proof by Lickorish--Millett of the well-definedness of the Homflypt polynomial \cite{LM}, so we will in principle be following \cite{LM}, with the necessary adaptations and modifications, taking for granted the well-definedness of $R$. The difference here lies in modifying the original skein method, which bottoms out on unlinks, since self-crossings are not distinguished from mixed crossings, to the present context, where the evaluations bottom out on evaluations by $R$ on unions of unlinked knots. This difference causes the need for particularly elaborate arguments in proving invariance of the resulting evaluation under the sequence of mixed crossing switches and the order of components (Propositions~\ref{orderxings} and~\ref{ordercpts}) in comparison with~\cite{LM}. Our motivation for the above generalization $H[R]$ of the regular isotopy version of the Homflypt polynomial is the following: In \cite{chjukala} the ambient isotopy invariants $\Theta_d(q,\lambda_d)$ for classical links, which were originally derived from a Markov trace on the Yokonuma--Hecke algebras \cite{jula2}, are recovered via the skein relation of the Homflypt polynomial, $P$, applied only to mixed crossings of a link. The invariants $\Theta_d$ are then compared to $P$ and are shown to be {\it distinct} from $P$ {\it on links}. The invariants $\Theta_d$ are also distinct from the Kauffman polynomial, since they are {\it topologically equivalent} to $P$ {\it on knots} \cite{chmjakala, chjukala}. Moreover, in \cite{chjukala} the family of invariants $\{\Theta_d\}_{d\in{\mathbb N}}$, which includes $P$ for $d=1$, is generalized to a new 3-variable skein link invariant $\Theta(q,\lambda,E)$, which specializes to each $\Theta_d$ for $E=1/d$ and which is stronger than $P$, as we detail below and in Section~\ref{sectheta}. Furthermore, in \cite[Appendix B]{chjukala} W.B.R. Lickorish provides a closed combinatorial formula for the definition of the invariant $\Theta$, showing that it is a mixture of Homflypt polynomials and linking numbers of sublinks of a given link (see Eq.~\ref{lickorish}). Theorem~\ref{hofr} provides a new self-contained skein-theoretic proof of the existence of the invariant $\Theta$ of \cite{chjukala}. The above constructions opened the way to new research directions. Cf. \cite{jula1,jula2,jula3,jula4,jula5,gou,gojukola,ChPou1,gojukolaf,ChPou2,chla,chpa1,chmjakala,japa,chjukala,goula1,ju2,goula2,pawa,AJ1,AJ2,AJ3,chpa2,fljula}. \smallbreak These constructions alter the whole philosophy of classical skein-theoretic techniques, whereby mixed as well as self-crossings in a link diagram would get indiscriminately switched.
{\it The new logic is that, using a known skein invariant, one first unlinks all components using the skein relation and then one evaluates on unions of unlinked knots using that skein invariant, at the same time introducing a new variable.} \smallbreak Theorem~\ref{hofr} implies that we may specialize $z$, $w$, $a$ and $E$ in any way we wish. For example, if $a=1$ then $R$ specializes to the Alexander--Conway polynomial \cite{al,co}. If $w= \sqrt{a} - 1/\sqrt{a}$ then $R$ becomes the unnormalized Jones polynomial \cite{jo1}. In each case $H[R]$ can be regarded as a generalization of that polynomial. Furthermore, we denote by $H[H]$ the invariant $H[R]$ in the case where $w=z$. The invariant $H[H]$ still generalizes $H$ to a new 3-variable invariant for {\it links}. Indeed, in this case $H[H]$ coincides with the regular isotopy version of the new 3-variable link invariant $\Theta (q, \lambda, E)$ \cite{chjukala}. Our 4-variable invariant $H[R]$ is in fact topologically equivalent to the 3-variable invariant $H[H]$, which is the regular isotopy version of the 3-variable invariant $\Theta$. This is proved via our generalization of the Lickorish combinatorial formula given in Section~\ref{secphi}. Namely, for an oriented link $L$ with $n$ components, we have: \[ H[R](L) = \left( \frac{z}{w} \right)^{n-1} \sum_{k=1}^n \eta^{k-1} \widehat{E}_k \sum_\pi R(\pi L) \] where the second summation is over all partitions $\pi$ of the components of $L$ into $k$ (unordered) subsets and $R(\pi L)$ denotes the product of the Homflypt polynomials of the $k$ sublinks of $L$ defined by $\pi$. Also, $\widehat{E}_k = (\widehat{E}^{-1} - 1)(\widehat{E}^{-1} - 2) \cdots (\widehat{E}^{-1} - k + 1)$, with $\widehat{E} = E \frac{z}{w}$ and $\widehat{E}_1 =1$. The fact that $H[R]$ is topologically equivalent to $H[H]$ was observed by K.~Karvounis \cite{ka2} by modifying our previous version of the above formula \cite{kaula}. {\it Thus we now know that all the invariants in this paper are equivalent to 3-variable invariants, even though we have formulated them using four variables.} The 4-variable formulation is, nevertheless, useful for separating the two types of skein operations, namely switching crossings between different components and evaluating on knots. The reader should note that the formula above (the right-hand side) is, by its very definition, a regular isotopy invariant of the link $L$. This follows from the regular isotopy invariance of $R$ and the well-definedness of summing over all partitions of the link $L$ into $k$ parts. In fact the summations $I_{k}(L) = \sum_\pi R(\pi L)$, where $\pi$ runs over all partitions of $L$ into $k$ parts, are each regular isotopy invariants of $L$. What is remarkable here is that these all assemble into the new invariant $H[R](L)$ with its striking two-level skein relation. We see from this combinatorial formula that the extra strength of $H[R](L)$ comes from its ability to detect linking numbers and non-triviality of certain sublinks of the link $L$. In the regular isotopy formulation, even the linking numbers are not needed.
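For readers who wish to experiment, the following SymPy sketch of ours assembles the right-hand side of the above formula from externally supplied values of $R$ on the sublinks; encoding sublinks as tuples of component labels is an illustrative convention of this sketch and not part of the paper.
\begin{verbatim}
import sympy as sp
from sympy.utilities.iterables import multiset_partitions

z, w, a, E = sp.symbols('z w a E')
eta = (a - 1/a) / w       # eta = (a - a^{-1})/w
E_hat = E * z / w         # E^ = E z/w

def Ehat_k(k):
    # E^_k = (E^{-1} - 1)(E^{-1} - 2) ... (E^{-1} - k + 1), with E^_1 = 1.
    out = sp.Integer(1)
    for j in range(1, k):
        out *= (1 / E_hat - j)
    return out

def HR(R_of, components):
    # Assemble H[R](L) from the R-values R_of(block) of the sublinks
    # determined by all partitions of the components into k subsets.
    n = len(components)
    total = sp.Integer(0)
    for k in range(1, n + 1):
        for part in multiset_partitions(list(components), k):
            term = sp.Integer(1)
            for block in part:
                term *= R_of(tuple(sorted(block)))
            total += eta**(k - 1) * Ehat_k(k) * term
    return sp.simplify((z / w)**(n - 1) * total)

# Toy check on the positive Hopf link, with R-values supplied by hand:
R_vals = {(1,): 1, (2,): 1, (1, 2): eta + w * a}
print(HR(lambda b: R_vals[b], [1, 2]))  # z*a + eta/E, up to rearrangement
\end{verbatim}
The printed value, $za + E^{-1}\eta$, agrees with a direct two-level skein computation on the Hopf link.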
\begin{rem} \rm Since the Lickorish combinatorial formula is itself a link invariant and we prove by induction that it satisfies the two-tiered skein relations of $H[R]$, this combinatorial formula can be used as a mathematical basis for $H[R]$. We have chosen to work out the skein theory of $H[R]$ from first principles, but a reader of this paper may wish to first read the proof of the Lickorish formula and understand the skein relations on that basis. The same remarks apply to the combinatorial formulae for the other two invariants in Sections~\ref{secxi} and~\ref{secpsi}. \end{rem} We now consider the class $\mathcal{L}^u$ of unoriented link diagrams. For any crossing of a diagram of a link in $\mathcal{L}^u$, if we swing the overcrossing arc counterclockwise it sweeps two regions out of the four. If we join these two regions, this is the $A$-smoothing of the crossing, while joining the other two regions gives rise to the $B$-smoothing. We shall say that a crossing is of {\it positive type} if it produces a horizontal $A$-smoothing and that it is of {\it negative type} if it produces a vertical $A$-smoothing. Let now $L_+$ be an unoriented diagram with a positive type crossing specified and let $L_-$ be the same diagram but with that crossing switched. Let also $L_0$ and $L_{\infty}$ indicate the same diagram but with the $A$-smoothing and the $B$-smoothing in place of the crossing, respectively. See (\ref{quadruple}). The diagrams $L_+, L_-, L_0, L_{\infty}$ comprise a so-called {\it unoriented Conway quadruple}. \begin{equation}\label{quadruple} \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-1,-1)-- (-0.22,-0.22); \draw [line width=0.35mm ](-1,1)--(0,0); \draw [line width=0.35mm] (0.22,0.22) -- (1,1); \draw [line width=0.35mm] (0,0) -- +(1,-1); \end{tikzpicture}} \qquad \qquad \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-1,-1)-- (0,0) ; \draw [line width=0.35mm] (-1,1)--(-0.22,0.22); \draw [line width=0.35mm] (0,0) -- (1,1); \draw [line width=0.35mm] (0.22,-0.22) -- +(.8,-.8); \end{tikzpicture}} \qquad \qquad \raisebox{-.07cm}{\begin{tikzpicture}[scale=.2, mydeco/.style = {decoration = {markings, mark = at position #1 with {\arrow{>}}} }] \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-1,.8) (0, 0.5) (1,.8)}; \draw [ line width=0.35mm] plot [smooth, tension=2] coordinates { (-1,-.8) (0, -0.5) (1,-.8)}; \end{tikzpicture}} \qquad \qquad \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [ line width=0.35mm] plot [smooth, tension=2] coordinates { (-1,-1) (-0.3, 0) (-1,1)}; \draw [ line width=0.35mm] plot [smooth, tension=2] coordinates { (1,-1) (0.3, 0) (1,1)}; \end{tikzpicture}} \end{equation} \[ L_+ \qquad \qquad L_- \qquad \qquad L_0 \qquad \qquad L_{\infty} \] By similar arguments as for Theorem~\ref{hofr} we also prove in this paper the existence of 4-variable generalizations of the regular isotopy versions of the Dubrovnik and the Kauffman polynomials \cite{kau4}: \begin{thm} \label{doft} Let $D (z,a)$ denote the regular isotopy version of the Dubrovnik polynomial and let $T (w,a)$ denote the same invariant but with a different parameter $w$ in place of $z$.
Then there exists a unique regular isotopy invariant of classical unoriented links $D[T]: \mathcal{L}^u \rightarrow {\mathbb Z}[z, w, a^{\pm 1}, E^{\pm 1}]$, where $z, \, w , \, a$ and $E$ are indeterminates, defined by the following rules: \begin{enumerate} \item On crossings involving different components the following skein relation holds: $$ D[T] (L_+) - D[T] (L_-) = z \, \big( D[T] (L_0) - D[T] (L_{\infty}) \big), $$ where $L_+$, $L_-$, $L_0$, $L_{\infty}$ is an unoriented Conway quadruple, \item For a union of $r$ unlinked knots in $\mathcal{L}^u$, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$, with $r \geqslant 1$, it holds that: $$ D[T] ({\mathcal K}^r) = E^{1-r} \, T ({\mathcal K}^r). $$ \end{enumerate} \end{thm} We recall that the invariant $T(w,a)$ is determined by the following rules: \begin{enumerate} \item[(T1)] For $L_+$, $L_-$, $L_0$, $L_{\infty}$ an unoriented Conway quadruple, the following skein relation holds: $$ T (L_+) - T (L_-) = w \, \big( T (L_0) - T (L_{\infty}) \big), $$ \item[(T2)] The indeterminate $a$ is the positive type curl value for $T$: $$ T ( \raisebox{-.1cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (-0.22,-0.22); \draw [line width=0.35mm ](-.7,.7)--(0,0); \draw [line width=0.35mm] (0.22,0.22) -- (.7,.7); \draw [line width=0.35mm] (0,0) -- +(.7,-.7); \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} \ ) = a \, T ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ) \quad \mbox{and} \quad T ( \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (0,0) ; \draw [line width=0.35mm] (-.7,.7)--(-0.22,0.22); \draw [line width=0.35mm] (0,0) -- (.7,.7); \draw [line width=0.35mm] (0.22,-0.22) -- +(.6,-.6); \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} ) = a^{-1} \, T ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ), $$ \item[(T3)] On the standard unknot: $$ T(\bigcirc) = 1. $$ We also recall that the above defining rules imply the following: \item[(T4)] For a diagram of the unknot, $U$, $T$ is evaluated by taking $$ T(U) = a^{wr(U)}, $$ \item[(T5)] $T$, being the Dubrovnik polynomial, is multiplicative on a union of unlinked knots, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$. Namely, for $\delta := \frac{a - a^{-1}}{w} + 1$ we have: $$ T ({\mathcal K}^r) = \delta^{r-1} \Pi_{i=1}^r T (K_i). $$ \end{enumerate} Consequently, on the standard unknot we evaluate $D[T](\bigcirc) = T(\bigcirc) = 1$. \smallbreak The Dubrovnik polynomial, $D$, is related to the Kauffman polynomial, $K$, via the following translation formula, observed by W.B.R.~Lickorish~\cite{kau4}: \begin{equation} \label{ktod} D(L)(a,z) = (-1)^{c(L)+1} \, i^{-wr(L)}K(L)(ia,-iz). \end{equation} Here, $c(L)$ denotes the number of components of $L$, $i^2 = -1$, and $wr(L)$ is the {\it writhe} of $L$ for some choice of orientation of $L$, defined as the algebraic sum of the signs of all crossings of $L$. The translation formula is independent of the particular choice of orientation for $L$.
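As a small computational aid (a sketch of ours, with illustrative names), the translation (\ref{ktod}) can be applied symbolically; the sanity check below uses a positive curl, for which $K = a$, $c(L) = 1$ and $wr(L) = 1$, and indeed returns $a$, in agreement with the positive type curl value for $D$.
\begin{verbatim}
import sympy as sp

a, z = sp.symbols('a z')

def dubrovnik_from_kauffman(K_poly, c, wr):
    # D(L)(a, z) = (-1)^(c(L)+1) * i^(-wr(L)) * K(L)(ia, -iz)
    return sp.simplify((-1)**(c + 1) * sp.I**(-wr)
                       * K_poly.subs({a: sp.I * a, z: -sp.I * z},
                                     simultaneous=True))

# Positive curl: K = a, one component, writhe 1  ==>  prints a.
print(dubrovnik_from_kauffman(a, 1, 1))
\end{verbatim}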
Our theory also generalizes the regular isotopy version of the Kauffman polynomial \cite{kau4} through the following: \begin{thm} \label{kofq} Let $K (z,a)$ denote the regular isotopy version of the Kauffman polynomial and let $Q (w,a)$ denote the same invariant but with a different parameter $w$ in place of $z$. Then there exists a unique regular isotopy invariant of classical unoriented links $K[Q]: \mathcal{L}^u \rightarrow {\mathbb Z}[z, w, a^{\pm 1}, E^{\pm 1}]$, where $z, \, w , \, a$ and $E$ are indeterminates, defined by the following rules: \begin{enumerate} \item On crossings involving different components the following skein relation holds: $$ K[Q] (L_+) + K[Q] (L_-) = z \, \big( K[Q] (L_0) + K[Q] (L_{\infty}) \big), $$ where $L_+$, $L_-$, $L_0$, $L_{\infty}$ is an unoriented Conway quadruple, \item For a union of $r$ unlinked knots in $\mathcal{L}^u$, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$, with $r \geqslant 1$, it holds that: $$ K[Q]({\mathcal K}^r) = E^{1-r} \, Q({\mathcal K}^r). $$ \end{enumerate} \end{thm} We recall that the invariant $Q(w,a)$ is determined by the following rules: \begin{enumerate} \item[(Q1)] For $L_+$, $L_-$, $L_0$, $L_{\infty}$ an unoriented Conway quadruple, the following skein relation holds: $$ Q (L_+) + Q (L_-) = w \, \big( Q (L_0) + Q (L_{\infty}) \big), $$ \item[(Q2)] The indeterminate $a$ is the positive type curl value for $Q$: $$ Q ( \raisebox{-.1cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (-0.22,-0.22); \draw [line width=0.35mm ](-.7,.7)--(0,0); \draw [line width=0.35mm] (0.22,0.22) -- (.7,.7); \draw [line width=0.35mm] (0,0) -- +(.7,-.7); \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} \ ) = a \, Q ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ) \quad \mbox{and} \quad Q ( \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (0,0) ; \draw [line width=0.35mm] (-.7,.7)--(-0.22,0.22); \draw [line width=0.35mm] (0,0) -- (.7,.7); \draw [line width=0.35mm] (0.22,-0.22) -- +(.6,-.6); \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} ) = a^{-1} \, Q ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ), $$ \item[(Q3)] On the standard unknot: $$ Q(\bigcirc) = 1. $$ We also recall that the above defining rules imply the following: \item[(Q4)] For a diagram of the unknot, $U$, $Q$ is evaluated by taking $$ Q(U) = a^{wr(U)}, $$ \item[(Q5)] $Q$, being the Kauffman polynomial, is multiplicative on a union of unlinked knots, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$. Namely, for $\gamma := \frac{a + a^{-1}}{w} - 1$ we have: $$ Q({\mathcal K}^r) = \gamma^{r-1} \Pi_{i=1}^r Q(K_i). $$ \end{enumerate} Consequently, on the standard unknot we evaluate $K[Q](\bigcirc) = Q(\bigcirc) = 1$. \smallbreak In Theorems~\ref{doft} and~\ref{kofq} the basic invariants $T(w,a)$ and $Q(w,a)$ could be replaced by specializations of the Dubrovnik and the Kauffman polynomial, respectively, and then the invariants $D[T]$ and $K[Q]$ can be regarded as generalizations of these specialized polynomials.
For example, if $a=1$ then $Q(w,1)$ is the Brandt--Lickorish--Millett--Ho polynomial \cite{BLM} and if $w= A+A^{-1}$ and $a= -A^3$ then $Q$ becomes the Kauffman bracket polynomial \cite{kau2}. In both cases the invariant $K[Q]$ generalizes these polynomials. Furthermore, a formula analogous to (\ref{ktod}) relates the generalized invariants $D[T]$ and $K[Q]$, see (\ref{psitoxi}). In Theorems~\ref{doft} and~\ref{kofq} we have formulated the invariants $D[T]$ and $K[Q]$ as 4-variable invariants. Like $H[R]$, the invariants $D[T]$ and $K[Q]$ are topologically equivalent to the 3-variable invariants $D[D]$ and $K[K]$, via the combinatorial formulae of the invariants given in Section~\ref{generalregkd}. The 4-variable formulation is, nevertheless, useful for keeping track of the two types of skein operations. \smallbreak We note that there are few known skein link invariants in the literature. By a {\it skein invariant} we mean an invariant that can be computed on each link solely by the use of skein relations and a set of initial conditions. Skein invariants include: the Alexander--Conway polynomial \cite{al,co}, the Jones polynomial \cite{jo1}, and the Homflypt polynomial \cite{jo2, LM,HOMFLY,PT}, which specializes to both the Alexander--Conway and the Jones polynomial; there is also the bracket polynomial \cite{kau2}, the Brandt--Lickorish--Millett--Ho polynomial \cite{BLM}, the Dubrovnik polynomial and the Kauffman polynomial \cite{kau4}, which specializes to both the bracket and the Brandt--Lickorish--Millett--Ho polynomial. Finally, we have the Juyumaya--Lambropoulou family of invariants $\Delta_{d,D}$ \cite{jula2}, and the analogous Chlouveraki--Juyumaya--Karvounis--Lambropoulou family of invariants $\Theta_d(q, \lambda_d)$ and their generalization $\Theta(q, \lambda, E)$ \cite{chjukala}, which specializes to the Homflypt polynomial. \smallbreak The paper is organized as follows: In Section~\ref{sectheta} we detail the algebraic construction of the new invariants $\Theta_d$ and $\Theta$. In Section~\ref{generalregh} of this paper we place the regular isotopy counterparts of the invariants $\Theta_d$ and $\Theta$ in a more general skein-theoretic context and we produce a full skein theory and a 4-variable invariant generalizing the regular isotopy version of the Homflypt polynomial. We proceed with constructing in Section~\ref{generalregkd} analogous skein-theoretic generalizations for the Dubrovnik and the Kauffman polynomials. Moreover, in Sections~\ref{secphi}, \ref{secxi} and \ref{secpsi} we adapt the combinatorial formula of Lickorish (\ref{lickorish}) to our more general regular isotopy setting for the generalizations of the Homflypt, the Dubrovnik and the Kauffman polynomials. In these sections we also show, by using the combinatorial formulae, that the 4-variable polynomials $H[R]$, $D[T]$ and $K[Q]$ are in fact topologically equivalent to the 3-variable polynomials $H[H]$, $D[D]$ and $K[K]$ respectively. Furthermore, in Sections~\ref{generalambp}, \ref{generalamby} and \ref{generalambf} we give the ambient isotopy reformulations of all new invariants. In Section~\ref{secstatesums} we define associated state sum models for the new invariants. These state sums are based on the skein template algorithm for the Homflypt, Kauffman and Dubrovnik polynomials as explained in \cite{kau5,kau6}. Our state sums use the skein calculation process for the invariants, but have a new property in the present context.
They have a double level due to the combination in our invariants of a skein calculation combined with the evaluation of a specific invariant on the knots that are at the bottom of the skein process. If we choose a state sum evaluation of a different kind for this specific invariant, then we obtain a double-level state sum of our new invariant. This is articulated in Section~\ref{secdoublesums} and we speculate in Sections~\ref{statmech} and~\ref{applications} about possible applications for these ideas. In Section~\ref{statmech} we discuss the context of statistical mechanics models and partition functions in relation to multiple level state summations. In Section~\ref{directions} we discuss further mathematical directions for the research in this paper. Finally, in Section~\ref{applications} we discuss possible relationships with reconnection in vortices in fluids, strand switching and replication of DNA, particularly the possible relations with the replication of Kinetoplast DNA, and we discuss the possibility of multiple levels in the quantum Hall effect where one considers the braiding of quasi-particles that are themselves physical subsystems composed of multiple electron vortices centered about magnetic field lines. \section{Previous work} \label{sectheta} In \cite{jula2} 2-variable framed link invariants $\Gamma_{d,D}$ were constructed for each $d\in {\mathbb N}$ via the Yokonuma--Hecke algebras ${\rm Y}_{d,n}(u)$, the Juyumaya trace and specializations imposed on the framing parameters of the trace, where $D$ is any non-empty subset of ${\mathbb Z}/d{\mathbb Z}$. When restricted to classical links, seen as links with zero framings on all components, these invariants give rise to ambient isotopy invariants for classical links $\Delta_{d,D}$. We note that for $d=1$ the algebra ${\rm Y}_{1,n}$ coincides with the Iwahori--Hecke algebra of type $A$, the trace coincides with the Ocneanu trace and the invariant $\Delta_{1,\{1\}}$ coincides with the Homflypt polynomial, $P$. The invariants $\Delta_{d,D}$ were studied in \cite{jula3,chla}, especially their relation to $P$, but topological comparison had not been possible due to algebraic and diagrammatic difficulties. Eventually, in \cite{chmjakala,chjukala} another presentation using a different quadratic relation for the Yokonuma--Hecke algebra was adopted from \cite{chpa1} and the classical link invariants related to the new presentation of the Yokonuma--Hecke algebras were now denoted $\Theta_{d,D}$. For $d=1$, $\Theta_{1,\{1\}}$ also coincides with $P$ with variables related to the corresponding different presentation of the Iwahori--Hecke algebra. Consequently, in \cite{chjukala} a series of results were proved, which led to the topological identification of the invariants $\Theta_{d,D}$ and to their generalization to a new 3-variable ambient isotopy invariant $\Theta$. Firstly, it was shown that the invariants $\Theta_{d,D}$ can be enumerated only by $d$ and so they were denoted as $\Theta_d$. It was also shown that on {\it knots} the invariants $\Theta_d$ are topologically equivalent to the Homflypt polynomial. Namely, if $K$ is a \textit{knot}, then \begin{center} $\Theta_d (q,z)(K) = P(q, d z)(K)$. \end{center} The above result was generalized to unions of unlinked knots. Namely, if ${\mathcal K}^r := \sqcup_{i=1}^r K_i$ is a union of $r$ unlinked knots, we have \begin{center} $\Theta_d (q,z)({\mathcal K}^r) = 1/d^{1-r} P(q,d z)({\mathcal K}^r)$. 
\end{center} It was further shown in \cite{chjukala} that the invariants $\Theta_d$ satisfy on any oriented link diagram $L$ a {\it mixed skein relation} on crossings between different components of $L$: \begin{center} $ \frac{1}{\sqrt{\lambda_d}} \, \Theta_d(L_+) - \sqrt{\lambda_d} \, \Theta_d(L_-) = (q-q^{-1}) \, \Theta_d(L_0), $ \end{center} where $L_+$, $L_-$, $L_0$ is an oriented Conway triple and $\lambda_d :=\frac{d z - (q-q^{-1})}{d z}$. The above skein relation is identical to the skein relation of the Homflypt polynomial $P$ considered at variables $(q, \lambda_d)$. As a consequence, the invariants $\Theta_d$ can be computed directly from the diagram $L$ by applying the mixed skein relation between pairs of different components and gradually decomposing $L$ into unions of unlinked knots that result as mergings of components of $L$ via the smoothings in the mixed skein relation. Then, one has to evaluate the Homflypt polynomials of the unions of unlinked knots. Namely: \begin{center} $ \Theta_d(L) = \sum_{k=1}^{c} \frac{1}{d^{1-k}} \sum_{\ell \in \mathcal{K}^k} p(\ell) \,P(\ell), $ \end{center} where $c$ is the number of components of the link $L$, $\mathcal{K}^k$ denotes the set of all split links $\ell$ with $k$ split components, obtained from $L$ by applying the mixed skein relation for $k=1,\ldots, c$ and $p(\ell)$ are the coefficients coming from the application of the mixed skein relation. Finally, the above results enabled in \cite{chjukala} the topological distinction of the invariants $\Theta_d$ from the Homflypt polynomial on Homflypt-equivalent pairs of {\it links}. To summarize, the family of invariants $\{\Theta_d(q,\lambda_d)\}_{d\in{\mathbb N}}$ is a family of new skein invariants for links that includes the Homflypt polynomial $P$ for $d=1$ and are distinct from $P$ for each $d > 1$. \smallbreak In \cite{chjukala} it is further demonstrated that the family of invariants $\{\Theta_d(q,\lambda_d)\}_{d\in{\mathbb N}}$ generalizes to {\it a new 3-variable skein link invariant} $\Theta(q,\lambda,E)$, which is defined skein-theoretically on link diagrams by the following inductive rules: \begin{enumerate} \item On crossings involving different components the following skein relation holds: $$ \frac{1}{\sqrt{\lambda}} \Theta( \includegraphics[scale=0.5]{crossing_pos.pdf} ) - \sqrt{\lambda} \Theta(\includegraphics[scale=0.5]{crossing_neg.pdf} ) = (q-q^{-1})\, \Theta(\includegraphics[scale=0.5]{crossing_zero.pdf}), $$ \item For $\mathcal{K}^r := \sqcup_{i=1}^r K_i$, a union of $r$ unlinked knots, with $r \geqslant 1$, it holds that: $$ \Theta(\mathcal{K}^r) = E^{1-r} \,P(\mathcal{K}^r). $$ \end{enumerate} The invariant $\Theta$ specializes to $P$ for $E = 1$ and to $\Theta_d$ for $E = 1/d$, and is stronger than $P$. Further, $\Theta$ satisfies the same properties as the invariants $\Theta_d$ and $P$, namely: multiplicative behaviour on connected sums, inversion of certain variables on mirror images, non-distinction of mutants. For details see \cite{chmjakala}. The well-definedness of $\Theta$ is proved in \cite{chjukala} by comparing it to an invariant $\overline{\Theta}$ for tied links, constructed from the algebra of braids and ties \cite{AJ1}, but using now the new quadratic relation for it. The invariant $\overline{\Theta}$ is analogous but, as computational evidence indicates, not the same as the invariant $\overline{\Delta}$ for tied links of F.~Aicardi and J.~Juyumaya \cite{AJ2}. 
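As a quick illustration of how the two defining rules interact (a worked sketch of ours, not reproduced from \cite{chjukala}), consider the positive Hopf link: switching either of its two mixed crossings yields the 2-component unlink, while the corresponding smoothing merges the two components into an unknot. Rules (1) and (2) then give:
\[
\Theta(\mathrm{Hopf}_+) \,=\, \lambda \, \Theta(\bigcirc \sqcup \bigcirc) + \sqrt{\lambda}\,(q-q^{-1})\, \Theta(\bigcirc) \,=\, \lambda \, E^{-1} \, \frac{\lambda^{-1/2}-\lambda^{1/2}}{q-q^{-1}} + \sqrt{\lambda}\,(q-q^{-1}),
\]
since $\Theta(\bigcirc \sqcup \bigcirc) = E^{-1}P(\bigcirc \sqcup \bigcirc)$ and $\Theta(\bigcirc) = 1$. For $E=1$ this collapses to the Homflypt value $P(\mathrm{Hopf}_+)$, as it must.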
In the next section of this paper we will derive an independent, purely skein-theoretic proof of the well-definedness of $\Theta$. Moreover, in \cite[Appendix B]{chjukala} W.B.R. Lickorish proved the following closed combinatorial formula for the invariant $\Theta$ on an oriented link $L$ with $n$ components (proved also in \cite{pawa} with different methods): \begin{equation} \label{lickorish} \Theta (L) = \sum_{k=1}^n \mu^{k-1}E_k \sum_\pi \lambda^{\nu(\pi)}P(\pi L), \end{equation} where the second summation is over all partitions $\pi$ of the components of $L$ into $k$ (unordered) subsets and $P(\pi L)$ denotes the product of the Homflypt polynomials of the $k$ sublinks of $L$ defined by $\pi$. Furthermore, $\nu(\pi)$ is the sum of all linking numbers of pairs of components of $L$ that are in distinct sets of $\pi$, $E_k = (E^{-1} - 1)(E^{-1} - 2) \cdots (E^{-1} - k + 1)$, with $E_1 =1$, and $\mu = \frac{\lambda^{-{1/2}} - \lambda^{{1/ 2}}}{q - q^{-1}}$. \section{Generalization of the regular isotopy Homflypt polynomial} \label{generalregh} In this section we define the general regular isotopy invariant for links, $H[R]$, by means of proving Theorem~\ref{hofr}. The 4-variable invariant $H[R]$ in its ambient isotopy version generalizes the skein-theoretic concept of the 3-variable invariant $\Theta(q,\lambda,E)$. We then use inductive methods based on the methods of Lickorish--Millett and Kauffman, and our own variations, to prove that $H[R]$ is well-defined. Once this theorem is in place, normalizing $H[R]$ to obtain its ambient isotopy counterpart, $P[R]$, yields a skein-theoretic proof of the well-definedness of the invariant $\Theta$. \subsection{Computing algorithm for $H[R]$} \label{algorithm} Assuming Theorem~\ref{hofr}, the invariant $H[R]$ on any oriented link diagram $L$ can be easily computed by applying the following algorithm: \begin{enumerate}[{\bf Step 1.}] \item {\it (Diagrammatic level)} Order the components of $L$ and choose a basepoint and a direction on each component (this could be the direction indicated by the orientation of the component). Start from the chosen point of the first component and go along it in the chosen direction. When arriving at a {\it mixed} crossing for the first time along an under-arc, we switch it by the mixed skein relation, so that we pass by the mixed crossing along the over-arc. At the same time we smooth the mixed crossing, obtaining a new diagram in which the two components of the crossing merge into one. We repeat for all mixed crossings of the first component. In the end, among all resulting diagrams there is only one with the same number of crossings as the initial diagram, and in it the first component is unlinked from the rest and lies above all of them. The other resulting diagrams have one less crossing and have the first component fused together with some other component. We proceed similarly with the second component, switching all its mixed crossings except for crossings involving the first component. In the end the second component gets unlinked from all the rest and lies below the first one and above all others in the maximal crossing diagram, while we also obtain diagrams containing mergings of the second component with others (except component one). We continue in the same manner with all components in order, and we also apply this procedure to all product diagrams coming from smoothings of mixed crossings.
In the end we obtain the unlinked version of $L$ plus a linear sum of links $\ell$ with unlinked components resulting from the mergings of different components. \smallbreak \item {\it (Computational level)} On the level of the invariant $H[R]$, Rule (1) of Theorem~\ref{hofr} tells us how the switching of mixed crossings is controlled: \smallbreak \noindent $ H[R](L_+) - H[R](L_-) = z \, H[R](L_0) \hfill \textit{Rule (1)} $ \smallbreak \noindent where $L_+$, $L_-$, $L_0$ is an oriented Conway triple. After all applications of the mixed skein relation we have a linear sum of links $\ell$ with unlinked components. The evaluation of the invariant $H[R]$ on each $\ell$ reduces to the evaluation $R(\ell)$ by Rule (2) of Theorem~\ref{hofr}: \smallbreak \noindent $ H[R]({\ell}) = E^{1-r} \, R(\ell) \hfill \textit{Rule (2)} $ \smallbreak \noindent where $r$ is the number of knotted components of $\ell$. \smallbreak \noindent In the end we obtain a linear sum of the values of the invariant $H[R]$ on all the resulting links $\ell$ with unlinked components: \smallbreak \noindent $ H[R](L) = \sum_{k=1}^{c} E^{1-k} \sum_{\ell \in \mathcal{K}^k} p(\ell) \,R(\ell), \hfill \textit{Rule (3)} $ \smallbreak \noindent where $p(\ell)$ are the coefficients coming from the applications of the mixed skein relation. Then, on each $R(\ell)$ Rule (R5) applies, followed by Rules (R1)--(R4) on the individual knots. \end{enumerate} \subsection{Our terminology and notations}\label{notations} As usual, an {\it oriented link} is a link with an orientation specified for each component. Also, a {\it link diagram} is a projection of a link on the plane with only double points, the crossings, of which there are finitely many and which are endowed with `under/over' information. We shall be using the same notation for a link and a diagram of it as long as there is no risk of confusion. Two oriented link diagrams are considered {\it equivalent} if they differ by oriented regular isotopy moves on the plane, namely, by planar isotopy and by Reidemeister moves~II and~III with all variations of orientations. A crossing between different components is a {\it mixed crossing}. We label the mixed crossings by distinct natural numbers. An (oriented) link diagram is called {\it generic} if it is {\it ordered}, that is, an order $c_1, \ldots, c_r$ is given to its components, {\it directed}, that is, a direction is specified on each component, and {\it based}, that is, a basepoint is specified on each component, distinct from the double points of the crossings. The set of all generic diagrams with at most $n$ crossings is denoted $\mathcal{L}_n$ and the set of all generic diagrams is denoted $\mathcal{L} = \cup_n \mathcal{L}_n$. In particular, for a union of $r$ unlinked knots in $\mathcal{L}$, with $r \geqslant 1$, we will be using the notation ${\mathcal K}^r := \sqcup_{i=1}^r K_i$. A diagram ${\mathcal K}^r$ is said to be a {\it descending stack} if, when walking along the components of ${\mathcal K}^r$ in their given order, following the orientations and starting from their basepoints, every mixed crossing is first traversed along its over-arc. Clearly, the structure of a descending stack no longer depends on the choice of basepoints; it is entirely determined by the order of its components. Note also that a descending stack is equivalent to the corresponding split link comprising the $r$ knotted components, $K_i$, where the order of components is no longer relevant. The descending stack of knots associated to a given link diagram $L$ is denoted as $d L$.
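To illustrate the algorithm of Subsection~\ref{algorithm} in this notation (a worked sketch of ours), let $L$ be the positive Hopf link with its two components ordered. Switching the first mixed crossing met along an under-arc produces $d L$, which is regularly isotopic to the split union of two zero-writhe unknots, while the corresponding smoothing merges the two components into an unknot diagram $U$ carrying a single positive curl, so that $wr(U) = 1$. Rules (1) and (2), combined with (R4) and (R5), give:
\[
H[R](\mathrm{Hopf}_+) \,=\, H[R](d L) + z \, H[R](U) \,=\, E^{-1}\eta + z\,a, \qquad \eta = \frac{a - a^{-1}}{w}.
\]
The same value is returned by the combinatorial formula of Section~\ref{secphi}, since $R(\mathrm{Hopf}_+) = \eta + w\,a$ and each individual component contributes $R = 1$.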
We now let $\mathcal{Z} := {\mathbb Z}[z, w, a^{\pm 1}, E^{\pm 1}]$ denote the ring of finite Laurent polynomials in the four variables $z, w, a, E$. In the proofs, the evaluation $H[R](L) \in \mathcal{Z}$ on a generic link diagram $L$ will be shortened to $(L)$. Moreover, let $\varepsilon$ denote the sign of a mixed crossing in $L$. Then Rule~(1) of Theorem~\ref{hofr} can be re-written as: \begin{equation}\label{mixedskein} (L_{\varepsilon}) = (L_{-\varepsilon}) + \varepsilon z \, (L_0). \end{equation} \subsection{Proof of Theorem~\ref{hofr}} \label{proofregh} For proving Theorem~\ref{hofr} one has to show that the computation of $H[R] (L)$, via the algorithmic steps described above, does not depend on the sequence of mixed crossing changes, the ordering of components, the choice of basepoints, or the performance of Reidemeister moves II and III. Our proof follows the logic of \cite{LM}, adapted to our setting. Namely, we assume that the statement is valid for all link diagrams of up to $n-1$ crossings, independently of choices made during the evaluation process and of Reidemeister III moves and Reidemeister II moves that do not increase the number of crossings above $n-1$. Our aim is to prove that the statement is valid for all generic link diagrams in the set $\mathcal{L}_n$, independently of choices, Reidemeister III moves and Reidemeister II moves not increasing the number of crossings above $n$. We do this by double induction on the total number of crossings of a generic link diagram and on the number of mixed crossing switches needed for bringing the diagram to the form of a descending stack of knots. On knots we assume the well-definedness of $R$. The fact that the process treats self-crossings and mixed crossings differently is the main difference in comparison with~\cite{LM}, and it causes the need for particularly elaborate arguments in proving invariance of the resulting evaluation under the sequence of mixed crossing switches and the order of components (Propositions~\ref{orderxings} and~\ref{ordercpts}). \vspace{.2cm} \noindent {\it The inductive hypothesis} $(n-1)$: Assume that to any link diagram $L \in \mathcal{L}_{n-1}$ a unique polynomial $H[R](L) \in \mathcal{Z}$ is associated, which is independent of the choices made during its evaluation, is invariant under Reidemeister III moves and under Reidemeister II moves that do not increase the number of crossings beyond $n-1$, and which satisfies formula (1) of Theorem~\ref{hofr} and also formula (2) if it is a union of unlinked knots. The basis of the induction is a generic link diagram with zero crossings, for which there is nothing to prove. \vspace{.2cm} \noindent {\it The recursive definition} $(n)$: Let $L\in \mathcal{L}_n$ be a generic link diagram on $r$ components. If $L = \mathcal{K}^r$, a descending stack of knots, define $H[R](\mathcal{K}^r) = E^{1-r} \, R(\mathcal{K}^r)$. Otherwise, apply the steps of the computing algorithm employing formulae (1) and (2) of Theorem~\ref{hofr}. One of the terminal diagrams is $d L$, and this is the only terminal diagram with $n$ crossings. All others have at most $n-1$ crossings and they result from mergings of the components of $L$. For each one of these terminal diagrams $\ell$ the inductive hypothesis $(n-1)$ applies for evaluating $H[R](\ell)$. For $d L$ we have $H[R](d L) = E^{1-r} \, R(d L)$.
The resulting polynomial in $\mathcal{Z}$ may depend, however, on the order of the components and on the choice of basepoints, which both specify the sequence of mixed crossing switches, or on Reidemeister III moves or non-increasing Reidemeister II moves applied on $L$. Also, it may not satisfy the skein relation (1). So, we will prove a series of propositions to ensure that these possibilities do not occur, and hence we will have proved the inductive hypothesis $(n)$. \begin{prop}[$n$] \label{orderxings} Suppose $L\in \mathcal{L}_n$. If the mixed crossings of $L$ that differ from those of $d L$ are switched in any sequence to achieve $d L$, then the corresponding polynomial $H[R](L)$ does not change. \end{prop} \begin{proof} We use induction on the number of mixed crossing switches needed for obtaining $d L$ from $L$. For zero or one mixed crossing switch there is nothing to show. Suppose now that we have more mixed crossing switches to perform, and let $L^\prime$ be the same diagram as $L$ whose only difference is the sequence of the mixed crossing switches. By the fact that any permutation is a product of elementary transpositions, it suffices to show invariance of $(L)$ under the exchange of the first two mixed crossing switches in the sequence indicated by the algorithm, say $i$ and $j$. We shall denote by $L^\prime$ the same generic diagram as $L$ but with the reverse order in switching $i$ and $j$. Denote also by $c_i$, $c_i^\prime$ the components of $L$ forming the $i$th crossing and by $c_j$, $c_j^\prime$ the components forming the $j$th crossing. Let further $\sigma_i L$ and $s_i L$ denote the same diagrams as $L$, except that in $\sigma_i L$ the $i$th crossing is switched and in $s_i L$ the $i$th crossing is smoothed; similarly for the crossing $j$. Let now $\varepsilon_i , \varepsilon_j$ be the signs of the $i$th and $j$th crossings in $L$, respectively. We take first the sequence $\sigma_i$ before $\sigma_j$ and we compute using~(\ref{mixedskein}): \begin{equation}\label{ij} (L) = (\sigma_i L) + \varepsilon_i z (s_i L) = (\sigma_j \sigma_i L) + \varepsilon_j z (s_j \sigma_i L) + \varepsilon_i z (s_i L). \end{equation} Computing with the reverse order we obtain the expression: \begin{equation}\label{ji} (L^\prime) = (\sigma_j L^\prime) + \varepsilon_j z (s_j L^\prime) = (\sigma_i \sigma_j L^\prime) + \varepsilon_i z (s_i \sigma_j L^\prime) + \varepsilon_j z (s_j L^\prime). \end{equation} By the inductive hypothesis, the choice of basepoints, the ordering of components and the sequence of mixed crossing switches in the diagrams $s_j \sigma_i L$, $s_i L$, $s_i \sigma_j L^\prime$ and $s_j L^\prime$ are irrelevant in the evaluation of their polynomials. Also, rule~(1) of Theorem~\ref{hofr} can be applied on these diagrams independently of any choices involved. Furthermore, comparing (\ref{ij}) and (\ref{ji}) we observe that $\sigma_j \sigma_i L$ and $\sigma_i \sigma_j L^\prime$ represent the same generic diagram of $n$ crossings with the same order of mixed crossing switches, and this diagram is strictly closer to $d L$ than the diagram $L$. So, by the induction hypothesis, $ (\sigma_j \sigma_i L) = (\sigma_i \sigma_j L^\prime) $. Hence, subtracting (\ref{ji}) from (\ref{ij}) we obtain: \begin{equation}\label{ijminusji} (L) - (L^\prime) = \varepsilon_j z \big[ (s_j \sigma_i L) - (s_j L^\prime) \big] + \varepsilon_i z \big[ (s_i L) - (s_i \sigma_j L^\prime) \big].
\end{equation} Note that the diagrams $s_j \sigma_i L$ and $s_j L^\prime$ differ only by the switching of the $i$th crossing; they have $n-1$ crossings and the same order of mixed crossing switches, since the components $c_j$, $c_j^\prime$ are fused together. Analogous observations hold for the pair of diagrams $s_i L$ and $s_i \sigma_j L^\prime$. \smallbreak If now the mixed crossings $i$ and $j$ {\it do not belong to the same pair of components} we use formula (\ref{mixedskein}) on the links $s_i L$ and $s_j L^\prime$ and we obtain: $$ (s_i L) = (\sigma_j s_i L) + \varepsilon_j z (s_j s_i L) \quad \text{and} \quad (s_j L^\prime) = (\sigma_i s_j L^\prime) + \varepsilon_i z (s_i s_j L^\prime). $$ Substituting the above expressions in (\ref{ijminusji}) and regrouping the terms we obtain: \begin{equation}\label{ijji} (L) - (L^\prime) = \varepsilon_j z \big[ (s_j \sigma_i L) - (\sigma_i s_j L^\prime) \big] + \varepsilon_i z \big[ (\sigma_j s_i L) - (s_i \sigma_j L^\prime) \big] + \varepsilon_i \varepsilon_j z^2 \big[ (s_j s_i L) - (s_i s_j L^\prime) \big]. \end{equation} We finally observe that the links $s_j \sigma_i L$ and $\sigma_i s_j L^\prime$ represent the same generic diagram of $n-1$ crossings, so by the inductive hypothesis $(n-1)$ we have $(s_j \sigma_i L) = (\sigma_i s_j L^\prime)$. Moreover, the terms $(s_j \sigma_i L)$ and $(\sigma_i s_j L^\prime)$ have the same coefficient $\varepsilon_j z$ in the two equations. Analogous observations are valid for the pairs of diagrams $\sigma_j s_i L, s_i \sigma_j L^\prime$ and $s_j s_i L, s_i s_j L^\prime$, each pair representing the same generic diagram with fewer than $n$ crossings. So, the right-hand side of the equation is zero and the two expressions $(L)$ and $(L^\prime)$ are equal. \smallbreak Suppose now that the mixed crossings $i$ and $j$ {\it belong to the same pair of components} $c_i$ and $c_j$. In this case we are not allowed to apply (\ref{mixedskein}) on the links $s_i L$ and $s_j L^\prime$, because the two crossings are now self-crossings. So we proceed as follows. We first prove the proposition for the case where in the initial generic diagrams $L$ and $L^\prime$ the switching of only the two mixed crossings $i$ and $j$ is needed. In this case the diagrams $\sigma_j \sigma_i L$ and $\sigma_i \sigma_j L^\prime$ both coincide with $d L$, and all diagrams in (\ref{ijminusji}) are descending stacks of $n-1$ crossings and of one component less than $L$, say $c-1$ in number, since $c_i$ and $c_j$ are merged together. So, applying on all of them the inductive hypothesis and rule~(2) of Theorem~\ref{hofr}, (\ref{ijminusji}) becomes equivalently: \begin{equation}\label{ijminusji2} (L) - (L^\prime) = \varepsilon_j z E^{2-c} \big[ R(s_j \sigma_i L) - R(s_j L^\prime) \big] + \varepsilon_i z E^{2-c} \big[ R(s_i L) - R(s_i \sigma_j L^\prime) \big].
\end{equation} Applying now the skein relation of $R$ on the self-crossing $i$ of $s_j L^\prime$ and on the self-crossing $j$ of $s_i L$ we have: \begin{equation}\label{rsjLprime} R(s_j L^\prime) = R(\sigma_i s_j L^\prime) + \varepsilon_i w R(s_i s_j L^\prime) \quad \mbox{and} \quad R(s_i L) = R(\sigma_j s_i L) + \varepsilon_j w R(s_j s_i L) \end{equation} and substituting in (\ref{ijminusji2}) we obtain: \begin{equation}\label{ijminusji2r} \begin{array}{lcl} (L) - (L^\prime) & = & \varepsilon_j z E^{2-c} \big[ R(s_j \sigma_i L) - R(\sigma_i s_j L^\prime) \big] + \varepsilon_i z E^{2-c} \big[ R(\sigma_j s_i L)- R(s_i \sigma_j L^\prime) \big] \\ & + & \varepsilon_i \varepsilon_j z w E^{2-c} \big[ R(s_j s_i L) - R(s_i s_j L^\prime) \big]. \end{array} \end{equation} The right-hand side expression in (\ref{ijminusji2r}) is zero, by the well-definedness of $R$, so $(L) = (L^\prime)$. Suppose now that in the diagrams $L$ and $L^\prime$ the switching of three mixed crossings $i$, $j$ and $k$ is needed and that the order of $i$ and $j$ is the only difference between $L$ and $L^\prime$. The switching of $k$ applies initially on the diagrams $\sigma_j \sigma_i L$ and $\sigma_i \sigma_j L^\prime$ in (\ref{ij}) and (\ref{ji}), to obtain: \begin{equation}\label{ijk} (L) = (\sigma_k \sigma_j \sigma_i L) + \varepsilon_k z (s_k \sigma_j \sigma_i L) + \varepsilon_j z (s_j \sigma_i L) + \varepsilon_i z (s_i L) \end{equation} and: \begin{equation}\label{jik} (L^\prime) = (\sigma_k \sigma_i \sigma_j L^\prime) + \varepsilon_k z (s_k \sigma_i \sigma_j L^\prime) + \varepsilon_i z (s_i \sigma_j L^\prime) + \varepsilon_j z (s_j L^\prime). \end{equation} The diagrams $\sigma_k \sigma_j \sigma_i L$ and $\sigma_k \sigma_i \sigma_j L^\prime$ both coincide with the descending stack $d L$, so $(\sigma_k \sigma_j \sigma_i L) = (\sigma_k \sigma_i \sigma_j L^\prime)$. Similarly $(s_k \sigma_j \sigma_i L) = (s_k \sigma_i \sigma_j L^\prime)$, since they represent the same generic diagram, and they have in (\ref{ijk}) and (\ref{jik}) the same coefficient. So, by subtracting and grouping terms with the same coefficients we get the equation: \begin{equation}\label{ijkminusjik} (L) - (L^\prime) = \varepsilon_j z \big[ (s_j \sigma_i L) - (s_j L^\prime) \big] + \varepsilon_i z \big[ (s_i L) - (s_i \sigma_j L^\prime) \big], \end{equation} which is of the same type as (\ref{ijminusji}). Now, in all four diagrams involved in (\ref{ijkminusjik}) the components $c_i$ and $c_j$ are merged together. So, the crossing $k$ is either a self-crossing in all of them or a mixed crossing in all of them. If $k$ is a self-crossing, then the four diagrams are already descending stacks, and we proceed with applying rule~(2) of Theorem~\ref{hofr} and the skein relation of $R$ for the $i$ and $j$ crossing respectively, as in (\ref{ijminusji2}), (\ref{rsjLprime}) and (\ref{ijminusji2r}) above. If $k$ is a mixed crossing we proceed with switching it in all four diagrams so as to obtain descending stacks. More precisely, grouping in pairs of polynomials with the same coefficients, we obtain from (\ref{ijkminusjik}): \begin{equation}\label{ijkjikk} \begin{array}{lclcl} (L) - (L^\prime) & = & \varepsilon_j z \big[ (\sigma_k s_j \sigma_i L) - (\sigma_k s_j L^\prime) \big] & + & \varepsilon_j \varepsilon_k z^2 \big[ (s_k s_j \sigma_i L) - (s_k s_j L^\prime) \big] \\ & + & \varepsilon_i z \big[ (\sigma_k s_i L) - (\sigma_k s_i \sigma_j L^\prime) \big] & + & \varepsilon_i \varepsilon_k z^2 \big[ (s_k s_i L) - (s_k s_i \sigma_j L^\prime) \big].
\end{array} \end{equation} Now, all diagrams in the above expression are descending stacks and each grouped pair differs by one self-crossing switch. So, applying rule~(2) of Theorem~\ref{hofr} and the skein relation of $R$ for the $i$ and $j$ crossing, (\ref{ijkjikk}) yields equivalently: \begin{equation}\label{ijkjikkr} \begin{array}{lclcl} (L) - (L^\prime) & = & \varepsilon_j z \, E^{2-c} \big[ R(\sigma_k s_j L) - \varepsilon_i w R(\sigma_k s_j s_i L) - R(\sigma_k s_j L^\prime) \big] \\ & + & \varepsilon_j \varepsilon_k z^2 \, E^{3-c} \big[ R(s_k s_j L) - \varepsilon_i w R(s_k s_j s_i L) - R(s_k s_j L^\prime) \big] \\ & + & \varepsilon_i z \, E^{2-c} \big[ R(\sigma_k s_i L) - R(\sigma_k s_i L^\prime) + \varepsilon_j w R(\sigma_k s_i s_j L^\prime) \big] \\ & + & \varepsilon_i \varepsilon_k z^2 \, E^{3-c} \big[ R(s_k s_i L) - R(s_k s_i L^\prime) + \varepsilon_j w R(s_k s_i s_j L^\prime) \big]. \end{array} \end{equation} So, by the well-definedness of $R$ and cancellations of terms we finally obtain from (\ref{ijkjikkr}) that $(L) = (L^\prime)$. The proof of this case for any number of mixed crossing switches proceeds in the same manner as for three. Hence, the proof of the proposition is concluded. \end{proof} The following is now an immediate consequence of Proposition~\ref{orderxings}. \begin{cor} \label{direction} The change of direction (recall Section~\ref{notations}) on any component is irrelevant in the computation of the polynomial $H[R](L)$. \end{cor} We will now make sure that the evaluation of the polynomial $(L)$ is independent of the choice of basepoints. \begin{prop}[$n$] \label{basepoints} The polynomial $H[R]$ does not depend on the choice of basepoints. \end{prop} \begin{proof} By an inductive argument, we only need to show that if a basepoint of a component moves past a crossing to an adjacent segment of the diagram, then the polynomial does not change. Indeed, suppose that the basepoint of the component $c_i$ of a generic link diagram $L_1 \in \mathcal{L}_n$ is moved from position $b_1$ to position $b_2$ past a crossing $k$ and let $L_2$ be the corresponding new generic link diagram in $\mathcal{L}_n$. If the crossing $k$ is a self-crossing of the component $c_i$ then the computation algorithm does not see the change of basepoint and, thus, $(L_2) = (L_1)$. If the crossing $k$ is a mixed crossing between components $c_i$ and $c_j$, assuming with no harm that $i<j$, we have the following possibilities. If the mixed crossing is traversed from `over' when walking along $c_i$ in the direction of its orientation, then it will be traversed either last instead of first or first instead of last, according to the orientation of $c_i$. View Figure~\ref{basepoint}(a). In either case it will not be switched, since $i<j$, and the sequence of the mixed crossings that will be switched remains the same. \smallbreak \begin{figure}[!ht] \begin{center} \includegraphics[width=4.7cm]{basepoints.pdf} \caption{(a) Moving basepoints on an over-arc, and (b) moving basepoints on an under-arc} \label{basepoint} \end{center} \end{figure} Finally, if the mixed crossing $k$ is traversed from `under' when walking along $c_i$ in the direction of its orientation, then it will be switched either last instead of first or first instead of last, according to the orientation of $c_i$. View Figure~\ref{basepoint}(b). In either case the corresponding descending stacks $d L_1$ and $d L_2$ are identical, except for the positions of the basepoints $b_1$ and $b_2$, which play no role as explained in Subsection~\ref{notations}.
Also, by Proposition~\ref{orderxings} the value of the polynomial $(L_1)$ is independent of the sequence of the mixed crossing switches for achieving the end diagrams $d L_1$ and $d L_2$ (here we interchange the order of switching the first and the last crossing). Therefore $(L_1) = (L_2)$. \end{proof} \begin{prop}[$n$] \label{skeinrule} The polynomial $H[R]$ satisfies the skein relation (1) of Theorem~\ref{hofr} on mixed crossings. \end{prop} \begin{proof} Let $L \in \mathcal{L}_n$. Rule~(1) of Theorem~\ref{hofr} is the first step (using Proposition~\ref{orderxings}) in the computation of $(L)$ from $(d L)$. The equation of Rule~(1) is then well-defined, since the polynomials of the other two diagrams are well-defined. Indeed, in the first switch from $d L$ we have $d L$ as one of $L_+$ or $L_-$, in the symbols of Rule~(1), and $L_0$ is a descending stack with $n-1$ crossings. So, the recursive definition $(n)$ and the inductive hypothesis $(n-1)$ apply. Starting from $L$ as one of $L_+$ or $L_-$, we have that $(L_0)$ is well-defined by the inductive hypothesis $(n-1)$, and the other one of $(L_+)$ or $(L_-)$ is well-defined by applying induction on the closeness of a diagram to $d L$. \end{proof} \begin{prop}[$n$] \label{reidem} The polynomial $H[R]$ is invariant under Reidemeister III moves and Reidemeister II moves that do not increase the number of crossings beyond $n$. \end{prop} \begin{proof} {\it Type II.} \ View Figure~\ref{reidemII}. The case $i=j$ means that the move takes place on the same component $c_i$, so the move is visible only on the level of $R$, which is known to be a link invariant. In the case $j<i$, in the left-hand illustration of Figure~\ref{reidemII} no mixed crossing gets switched by the algorithm. So, applying this move on all diagrams but $d L$ created in the skein tree, we have by the inductive hypothesis $(n-1)$ that $H[R]$ remains invariant on these diagrams and their equivalent counterparts. Applying now the move on $d L$, the new diagram $d L^{\prime}$, say, remains a descending stack of the same knots involved, so $H[R](d L) = H[R](d L^{\prime})$ since $R(d L) = R(d L^{\prime})$. Hence, the end polynomials computed before and after the move are the same. The same arguments are valid for the case $i<j$ in the right-hand instance of the figure. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=5.8cm]{reidII.pdf} \caption{The Reidemeister II moves} \label{reidemII} \end{center} \end{figure} Let now $i<j$ in the left-hand illustration of Figure~\ref{reidemII}. This means that, by the algorithm, the two mixed crossings in the figure will be switched. Note that the two crossings have opposite signs independently of the choices of orientations. By Proposition~\ref{orderxings}, the two mixed crossings can be switched first and their order is irrelevant, so we label them 1 and 2. Let now $\varepsilon$ be the sign of crossing 1. Then we distinguish two cases and we compute applying Proposition~\ref{skeinrule}: If $\varepsilon = +1$ we have: $ (L) = (\sigma_1 L) + z (s_1 L) = (\sigma_2 \sigma_1 L) - z (s_2 \sigma_1 L) + z (s_1 L). $ Examining Figure~\ref{reidemIIproof}(a) we see that the diagrams of $n-1$ crossings $s_2 \sigma_1 L$ and $s_1 L$ differ by planar isotopy. So, by the inductive hypothesis we have $(s_2 \sigma_1 L) = (s_1 L)$. Note that the components $c_i$ and $c_j$ have merged into one, so the isotopy is only visible on the level of $R$, which is known to be a link invariant. Hence, $(L) = (\sigma_2 \sigma_1 L)$ and the situation is reduced to the case $j<i$.
\smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=7cm]{reidIIproof.pdf} \caption{Invariance under the Reidemeister II moves} \label{reidemIIproof} \end{center} \end{figure} If $\varepsilon = -1$ we have from Figure~\ref{reidemIIproof}(b): $ (L) = (\sigma_1 L) - z (s_1 L) = (\sigma_2 \sigma_1 L) + z (s_2 \sigma_1 L) - z (s_1 L). $ Now, the diagrams $s_2 \sigma_1 L$ and $s_1 L$ both contain a positive kink which is on the same component, since the two components $c_i$ and $c_j$ are merged into one. So, they are regular isotopic. Hence, by the inductive hypothesis $(n-1)$ and the invariance of $R$ under this isotopy (the algorithm does not ``see'' it) we obtain $(s_2 \sigma_1 L) = (s_1 L)$. So, $(L) = (\sigma_2 \sigma_1 L)$ and, again, the situation is reduced to the case $j<i$. Hence, the end polynomials computed before and after the move are the same also in this case. The same arguments hold for the case $j<i$ in the right-hand instance of Figure~\ref{reidemII}. \smallbreak \noindent {\it Type III.} \ Suppose that no mixed crossing switch is needed, so the algorithm does not see the move. This can happen, for example, in the case $k = j = i$, where the invariance of $H[R]$ under the move rests on the invariance of $R$. It can also happen in the case $k \leqslant j \leqslant i$ but with not all arcs in the same component, and then again there is nothing to do. View Figure~\ref{reidemIII1}, where the crossing marked with 1 and a shaded disc should be ignored for the time being. Suppose now that one mixed crossing switch will be needed. This means that not all arcs belong to the same component, that $j < k$ and $j < i$, and that the mixed crossing marked with a shaded disc in Figure~\ref{reidemIII1} has to be switched. By Proposition~\ref{orderxings} we can label this crossing by 1 and after the move is performed we can also label by 1 the corresponding crossing. The two diagrams with crossing 1 switched that differ by one Reidemeister III move fall now in the previous case. We now follow what happens to the two diagrams with the corresponding smoothings of the marked crossing. Note that the marked crossing retains its sign after the move is performed. Hence the computations are the same throughout both skein trees deriving from the diagrams with the mixed crossing switched. In Figure~\ref{reidemIIIpf1} we see the two possibilities according to the compatibility of orientations. In both cases the components $c_j$ and $c_k$ merge into one. In the one case (top row of the figure) the resulting diagrams are planar isotopic. In the other case (bottom row of the figure) they differ by two Reidemeister II moves in which the total number of crossings does not exceed $n-1$. Hence, by the inductive hypothesis $(n-1)$, both configurations will be assigned the same polynomials. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=5.6cm]{reidIII1.pdf} \caption{The Reidemeister III moves: case 1} \label{reidemIII1} \end{center} \end{figure} \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=5.6cm]{reidIIIproof1.pdf} \caption{Invariance under the Reidemeister III moves - case 1} \label{reidemIIIpf1} \end{center} \end{figure} Suppose now that two mixed crossing switches will be needed. This means that not all arcs belong to the same component and that $i < j \leqslant k$. Then the crossings marked with shaded discs in Figure~\ref{reidemIII2} have to be switched.
By Proposition~\ref{orderxings} we can label these crossings by 1 and 2 and after the move is performed we can again label the corresponding crossings by 1 and 2 respectively. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=5.6cm]{reidIII2.pdf} \caption{The Reidemeister III moves: case 2} \label{reidemIII2} \end{center} \end{figure} We first apply skein rule (1) on crossing 1 on both sides of the move. Note that the two crossings retain their signs after the move is performed. The analysis of the two resulting diagrams with the crossing switched reduces to the case where only one mixed crossing has to be switched (crossing 2). Let us now follow what happens to the two diagrams with the smoothings of crossing 1. Figure~\ref{reidemIIIpf2} illustrates the two possibilities according to the compatibility of orientations. In the one case the resulting diagrams are planar isotopic. In the other case they differ by two Reidemeister II moves in which the total number of crossings does not exceed $n-1$. In both cases the components $c_i$ and $c_j$ merge into one, so the isotopy is only `seen' on the level of $R$. So, by the inductive hypothesis $(n-1)$ both configurations will be assigned the same polynomials. Hence, the proof is concluded. It is worth noting here that the logic we followed for the proof of this last case with two crossing switches would not have worked if the order of the crossings in question were reversed. Indeed, if crossing 2 were to be switched first, no Reidemeister III move would be available on the initial diagram. \end{proof} \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=5.6cm]{reidIIIproof2.pdf} \caption{Invariance under the Reidemeister III moves - case 2 (a)} \label{reidemIIIpf2} \end{center} \end{figure} \begin{prop}[$n$] \label{ordercpts} The polynomial $H[R]$ is independent of the choice of order of the components. \end{prop} \begin{proof} Let $L \in \mathcal{L}_n$ with a specified order of components assigned to it. Suppose also that a different order of components is assigned to $L$ and denote this link by $L^\prime \in \mathcal{L}_n$. It suffices to prove the proposition in the case where the relative positions of only one pair of adjacent components $A$ and $B$, with respect to the given order, are switched. By Proposition~\ref{orderxings} we may assume that, using Proposition~\ref{skeinrule}, we have performed the unlinking of all pairs of components of $L$ and $L^\prime$ except for the pair $A, B$. In both resolution trees the coefficients in the computations toward $(L)$ and $(L^\prime)$ are identical up to the point where we have to start exchanging $A$ and $B$. For the resulting diagrams of less than $n$ crossings in both skein trees so far we apply the inductive hypothesis $(n-1)$, whereby the order of the components $A$ and $B$ does not affect the values of their polynomials. Consequently, we may assume, without loss of generality, that the pair $A, B$ is the first pair of components in the ordering assigned to $L$ and that these two are the only components still linked together. Let us denote the two diagrams of $n$ crossings in the above two resolution trees by $AB$ and $BA$ respectively. In $AB$ component $A$ is prior to $B$ in the order induced by the order of $L$, while in $BA$ component $B$ is prior to $A$ in the order induced by that of $L^\prime$. By the above reasoning we may assume that our analysis begins with the diagrams $AB$ and $BA$.
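\smallbreak Before the general bookkeeping, the following minimal warm-up instance (a warm-up illustration of ours, not needed for the proof) may help fix ideas. Let $AB$ and $BA$ be the standard diagram of the positive Hopf link formed by $A$ and $B$, with its two positive mixed crossings, taken with the two possible orderings of the components. Unlinking $AB$ requires switching one of the two crossings and unlinking $BA$ the other, so one application of the skein relation on each side gives $$ (AB) = (d_1) + z \, (U_1) \quad \mbox{and} \quad (BA) = (d_2) + z \, (U_2), $$ where $d_1$, $d_2$ denote the descending stacks with $A$ above $B$ and with $B$ above $A$ respectively, and $U_1$, $U_2$ the unknot diagrams obtained by the two smoothings. By rule~(2) of Theorem~\ref{hofr}, $(d_1) = E^{-1} R(d_1)$ and $(d_2) = E^{-1} R(d_2)$, while $R(d_1) = R(d_2)$ by the well-definedness of $R$. Applying the skein relation of $R$ to the same two crossings gives $R(AB) = R(d_1) + w \, R(U_1)$ and $R(BA) = R(d_2) + w \, R(U_2)$, whence $R(U_1) = R(U_2)$; moreover $(U_1) = R(U_1)$ and $(U_2) = R(U_2)$ by rule~(2). Hence $(AB) = (BA)$. The general argument below follows exactly this pattern.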
\smallbreak Let $r+r^\prime$ be the total number of mixed crossings between components $A$ and $B$ and let $r$ be the number of mixed crossings between $A$ and $B$ that need to be switched in $AB$ so that the two components get unlinked and $A$ lies above $B$. By Proposition~\ref{orderxings} we may assign to these crossings the numbers $1, \ldots, r$ and let $\varepsilon_1, \ldots, \varepsilon_r$ be their signs. After switching them all (using Proposition~\ref{skeinrule}) we end up with the final descending stack of $n$ crossings, for which we will use the notation $\frac{A}{B}$, and in which $A$ comes before $B$ in the given order and lies above $B$. Let also $r^\prime$ be the number of mixed crossings between $B$ and $A$ that need to be switched in $BA$ so that the two components get unlinked and $B$ lies above $A$. By Proposition~\ref{orderxings} we may assign to these crossings the numbers $r+1, \ldots, r+r^\prime$ and let $\varepsilon_{r+1}, \ldots, \varepsilon_{r+r^\prime}$ be their signs. After switching them all (using again Proposition~\ref{skeinrule}) we end up with the descending stack of $n$ crossings, for which we will use the notation $\frac{B}{A}$, and in which $B$ comes before $A$ in the induced order and lies above $A$. \smallbreak More precisely, applying Propositions~\ref{orderxings} and~\ref{skeinrule}, we start by selecting in $AB$ the crossing numbered 1 and we rename $AB$ to $L_{\varepsilon_1}$. After switching crossing~1 we select the crossing numbered 2 and we proceed similarly until we reach crossing~$r$. After switching crossing $r$ we obtain the diagram $L_{-\varepsilon_r}$, which is in fact $\frac{A}{B}$. Namely, we have the sequence of generic diagrams with component $A$ coming before $B$: \begin{equation} \label{1tor} \begin{array}{lclcl} (AB) & := & (L_{\varepsilon_1}) & = & (L_{-\varepsilon_1}) + \varepsilon_1\, z \, (L_{0,1}), \\ (L_{-\varepsilon_1}) & := & (L_{\varepsilon_2}) & = & (L_{-\varepsilon_2}) + \varepsilon_2\, z \, (L_{0,2}), \\ & \vdots & & & \\ (L_{-\varepsilon_{r-1}}) & := & (L_{\varepsilon_{r}}) & = & (L_{-\varepsilon_{r}}) + \varepsilon_{r}\, z \, (L_{0,r}) \\ & & & = & (\frac{A}{B}) + \varepsilon_{r}\, z \, (L_{0,r}). \end{array} \end{equation} At the same time we select in $BA$ the crossing $r+1$ for switching, so we rename $BA$ to $L^\prime_{\varepsilon_{r+1}}$. Then, we rename $L^\prime_{-\varepsilon_{r+1}}$ to $L^\prime_{\varepsilon_{r+2}}$ and we select the crossing $r+2$. Proceeding in this manner we arrive at the final step of the process that yields the descending stack of $n$ crossings $\frac{B}{A}$, indicated below as $L^\prime_{-\varepsilon_{r+ r^\prime}}$, after switching the crossing numbered $r+ r^\prime$ in the diagram $L^\prime_{-\varepsilon_{r+ r^\prime - 1}}$ (renamed to $L^\prime_{\varepsilon_{r+ r^\prime}}$).
Namely, we have the sequence of generic diagrams with component $B$ coming before $A$: \begin{equation} \label{rplus1torprime} \begin{array}{lclcl} (BA) & := & (L^\prime_{\varepsilon_{r+1}}) & = & (L^\prime_{-\varepsilon_{r+1}}) + \varepsilon_{r+1} \, z \, (L^\prime_{0, r+1}), \\ (L^\prime_{-\varepsilon_{r+1}}) & := & (L^\prime_{\varepsilon_{r+2}}) & = & (L^\prime_{-\varepsilon_{r+2}}) + \varepsilon_{r+2} \, z \, (L^\prime_{0, r+2}), \\ & \vdots & & & \\ (L^\prime_{-\varepsilon_{r+ r^\prime - 1}}) & := & (L^\prime_{\varepsilon_{r+ r^\prime}}) & = & (L^\prime_{-\varepsilon_{r+ r^\prime}}) + \varepsilon_{r+ r^\prime}\, z \, (L^\prime_{0, r+ r^\prime}) \\ & & & = & (\frac{B}{A}) + \varepsilon_{r+ r^\prime}\, z \, (L^\prime_{0, r+ r^\prime}). \end{array} \end{equation} Substituting now the expressions in (\ref{1tor}) consecutively, starting from the last equation, we obtain: \begin{equation} \label{ABtoAoverB} (AB) = (\frac{A}{B}) + \, z \, \left[ \varepsilon_1 \, (L_{0,1}) + \cdots + \varepsilon_{r}\, (L_{0,r}) \right]. \end{equation} Analogously, from (\ref{rplus1torprime}) we obtain: \begin{equation} \label{BAtoBoverA} (BA) = (\frac{B}{A}) + \, z \, \left[ \varepsilon_{r+1} \, (L^\prime_{0,r+1}) + \cdots + \varepsilon_{r+ r^\prime}\, (L^\prime_{0,r+ r^\prime}) \right]. \end{equation} Denoting now: \begin{equation} \label{XandY} (X) := \varepsilon_1 \, (L_{0,1}) + \cdots + \varepsilon_{r}\, (L_{0,r}) \quad {\rm and } \quad (Y) := \varepsilon_{r+1} \, (L^\prime_{0,r+1}) + \cdots + \varepsilon_{r+ r^\prime}\, (L^\prime_{0,r+ r^\prime}), \end{equation} Eqs.~\ref{ABtoAoverB} and~\ref{BAtoBoverA} are shortened to the following: \begin{equation} \label{ABX} (AB) = (\frac{A}{B}) + \, z \, (X), \end{equation} \begin{equation} \label{BAY} (BA) = (\frac{B}{A}) + \, z \, (Y). \end{equation} Subtracting equations (\ref{ABX}) and (\ref{BAY}) by parts we obtain: \begin{equation} \label{ABvsBAXY} (AB) - (BA) = (\frac{A}{B}) - (\frac{B}{A}) + z \, [(X) - (Y)]. \end{equation} Further, we observe that the descending stacks $\frac{A}{B}$ and $\frac{B}{A}$ are both assigned the same value of $H[R]$ since, by the recursive definition $(n)$, we have: \begin{equation} \label{reductiontor} (\frac{A}{B}) = E^{1-c} \, R(\frac{A}{B}) \quad \mbox{and} \quad (\frac{B}{A}) = E^{1-c} \, R(\frac{B}{A}), \end{equation} where $c$ is the number of components in both descending stacks. But the well-definedness of the link invariant $R$ ensures that $R(\frac{A}{B}) = R(\frac{B}{A})$, hence $(\frac{B}{A}) = (\frac{A}{B})$. So, (\ref{ABvsBAXY}) becomes: \begin{equation} \label{ABBAonlyXY} (AB) - (BA) = z \, [(X) - (Y)]. \end{equation} In order to prove further that $(X) = (Y)$ we argue as follows: we follow the same procedure as above, switching and smoothing progressively all $r$ crossings starting from $AB$ and all $r^\prime$ crossings starting from $BA$, but this time applying the skein relation of the invariant $R$. We obtain equations of the same form as (\ref{1tor}) and (\ref{rplus1torprime}), but now $z$ is replaced by $w$ and the invariant $R$ is evaluated on all diagrams. Summing up we obtain: \begin{equation} \label{evalR} R(AB) - R(BA) = w \, [R(X) - R(Y)], \end{equation} where $R(X) := \varepsilon_1 \, R(L_{0,1}) + \cdots + \varepsilon_{r}\, R(L_{0,r})$ and $R(Y) := \varepsilon_{r+1} \, R(L^\prime_{0,r+1}) + \cdots + \varepsilon_{r+ r^\prime}\, R(L^\prime_{0,r+ r^\prime})$. Clearly, by the well-definedness of $R$ we have $R(AB) = R(BA)$.
Note, now, that all intermediate generic diagrams in (\ref{1tor}) and (\ref{rplus1torprime}) that come from smoothings have $n-1$ crossings, so the inductive hypothesis $(n-1)$ applies to all of them. Furthermore, these diagrams are descending stacks of $c-1$ components, since the components $A$ and $B$ have merged into one. So, by the inductive hypothesis $(n-1)$: \begin{equation} \label{RonLi} (L_{0,i}) = E^{2-c} \, R(L_{0,i}), \end{equation} for all $i=1,\ldots, r$, and similarly for the diagrams $L^\prime_{0,j}$, $j = r+1, \ldots, r+ r^\prime$. Then, multiplying Eq.~\ref{evalR} by $E^{2-c}$ and using (\ref{RonLi}) together with $R(AB) = R(BA)$, we obtain: \begin{equation} \label{equalXY} (X) - (Y) = 0. \end{equation} Substituting (\ref{equalXY}) in (\ref{ABBAonlyXY}) we finally obtain: \begin{equation} \label{BAequalAB} (AB) = (BA), \end{equation} and the proof of the Proposition is concluded. \end{proof} By Propositions \ref{orderxings}, \ref{basepoints}, \ref{skeinrule}, \ref{reidem} and \ref{ordercpts}, the inductive hypothesis $(n)$ is proved. Therefore, the proof of Theorem~\ref{hofr} is now completed. \hfill {\it Q.E.D.} \begin{rem} \rm The places in the proof of Theorem~\ref{hofr} where the actual properties of the invariant $R$ were intrinsically used, beyond rule (2) of the Theorem, were in the proofs of Propositions~\ref{orderxings} and~\ref{ordercpts}, where it was essential that $R$ satisfies the same form of skein relation as $H[R]$. However, nowhere in the proof was it forced that $R$ has the same indeterminate $z$ as $H[R]$. \end{rem} \subsection{Translation to Ambient Isotopy} \label{generalambp} In this subsection we define and discuss the ambient isotopy generalized invariant, counterpart of the regular isotopy generalized invariant $H[R]$ constructed above. Let $P$ denote the classical Homflypt polynomial. Then, as we know, one can obtain the ambient isotopy invariant $P$ from its regular isotopy counterpart $H$ via the formula: $$ P (L) := a^{-wr (L)} H (L), $$ where $wr (L)$ is the total writhe of the oriented diagram $L$. Analogously, and letting $G$ denote $P$ but with a different variable, from our generalized regular isotopy invariant $H[R]$ one can derive an ambient isotopy invariant $P[G]$ via: \begin{equation}\label{prfromhr} P[G] (L) := a^{-wr (L)} H[R] (L). \end{equation} Then for the invariant $P[G]$ we have the following: \begin{thm} \label{pofr} Let $P (z,a)$ denote the Homflypt polynomial and let $G (w,a)$ denote the same invariant but with a different parameter $w$ in place of $z$. Then there exists a unique ambient isotopy invariant of classical oriented links $P[G]: \mathcal{L} \rightarrow {\mathbb Z}[z, w, a^{\pm 1}, E^{\pm 1}]$ defined by the following rules: \begin{enumerate} \item On crossings involving different components the following skein relation holds: $$ a \, P[G](L_+) - {a}^{-1} \, P[G](L_-) = z \, P[G](L_0), $$ where $L_+$, $L_-$, $L_0$ is an oriented Conway triple. \item For ${\mathcal K}^r := \sqcup_{i=1}^r K_i$, a union of $r$ unlinked knots, with $r \geqslant 1$, it holds that: $$ P[G]({\mathcal K}^r) = E^{1-r} \, G({\mathcal K}^r). $$ \end{enumerate} \end{thm} Indeed, since $wr(L_\pm) = wr(L_0) \pm 1$ for an oriented Conway triple, rule~(1) above follows from rule~(1) of Theorem~\ref{hofr} via the normalization (\ref{prfromhr}). \begin{rem} \rm As pointed out in the Introduction, in Theorem~\ref{hofr} we could specialize the $z$, the $w$, the $a$ and the $E$ in any way we wish. For example, if $a=1$ then $R (w,1)$ becomes the Alexander--Conway polynomial, while if $w = \sqrt{a} - 1/\sqrt{a}$ then $R (\sqrt{a} - 1/\sqrt{a} , a)$ becomes the unnormalized Jones polynomial. In each case $H[R]$ can be regarded as a generalization of that polynomial.
Furthermore, in the case where $G(w,a) = P(z,a)$ (for $w=z$) the ambient isotopy invariant $P[P]$ coincides with the new 3-variable link invariant $\Theta(q, \lambda, E)$ \cite{chjukala}, while for $w=z$ and $E=1/d$, $P[P]$ coincides with the invariant $\Theta_d$ \cite{jula2} (for $E=1$ it coincides with $P$), recall Section~\ref{sectheta}. So, our invariant $P[G]$ is stronger than $P$ and it is a (seemingly) 4-variable generalization of the invariant $\Theta$. As we shall see below (Proposition~\ref{topequivh}) one variable is redundant. Hence, our proof of the existence of $H[R]$ provides a direct skein-theoretic proof of the existence of the invariant $\Theta$, without the need of algebraic tools or the theory of tied links. Finally, for $w=z = \sqrt{a} - 1/\sqrt{a}$ the invariant $P[P]$ coincides with the new 2-variable link invariant $\theta(a, E)$ \cite{goula2}, which generalizes and is stronger than the Jones polynomial. \end{rem} \section{A closed combinatorial formula for $H[R]$} \label{secphi} As we mentioned in the Introduction, in \cite[Appendix B]{chjukala} W.B.R. Lickorish provides a closed combinatorial formula for the definition of the invariant $\Theta = P[P]$, that uses the Homflypt polynomials and linking numbers of sublinks of a given link. We will give here an analogous formula for our regular isotopy extension $H[R]$. For this purpose we need to recall the basic skein formulas for $H[R]$ from Theorem~\ref{hofr}. \begin{enumerate} \item $H[R](L_+) - H[R](L_-) = z \, H[R](L_0)$, for any oriented Conway triple $L_+, L_-, L_0$ in which the arcs of $L_+$ are in {\it different} components of the link $L$; \item $H[R] (L) = E^{1-k}\, R({\mathcal K}^k)$ when $L$ is the union of $k$ {\it unlinked} knots. \end{enumerate} Moreover, $R(w,a)$, the regular isotopy Homflypt polynomial, is defined by the rules: \begin{enumerate} \item[(R1)] For $L_+$, $L_-$, $L_0$ an oriented Conway triple, the following skein relation holds: $$ R(L_+) - R(L_-) = w \, R(L_0), $$ \item[(R2)] The indeterminate $a$ is the positive curl value for $R$: $$ R ( \raisebox{-.1cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (-0.22,-0.22); \draw [line width=0.35mm ](-.7,.7)--(0,0); \draw [line width=0.35mm] (0.22,0.22) -- (.7,.7)[->]; \draw [line width=0.35mm] (0,0) -- +(.7,-.7)[->]; \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} \ ) = a \, R ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2, mydeco/.style = {decoration = {markings, mark = at position #1 with {\arrow{>}}} }] \draw [line width=0.35mm, postaction = {mydeco=.6 ,decorate}] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ) \quad \mbox{and} \quad R ( \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (0,0) ; \draw [line width=0.35mm] (-.7,.7)--(-0.22,0.22); \draw [line width=0.35mm] (0,0) -- (.7,.7)[->]; \draw [line width=0.35mm] (0.22,-0.22) -- +(.6,-.6)[->]; \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} ) = a^{-1} \, R (\raisebox{.06cm}{ \begin{tikzpicture}[scale=.2, mydeco/.style = {decoration = {markings, mark = at position #1 with {\arrow{>}}} }] \draw [line width=0.35mm, postaction = {mydeco=.6 ,decorate}] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ), $$ \item[(R3)] On the standard unknot: $$ R(\bigcirc) = 1. 
$$ We also recall that the above defining rules imply the following: \item[(R4)] For a diagram of the unknot, $U$, $R$ is evaluated by taking: $$ R(U) = a^{wr(U)}, $$ where $wr(U)$ denotes the writhe of $U$ -- instead of 1, which is the case in the ambient isotopy category. \item[(R5)] $R$ being the Homflypt polynomial, it is multiplicative on a union of unlinked knots, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$. Namely, for $\eta := \frac{a - a^{-1}}{w}$ we have: $$ R({\mathcal K}^r) = \eta^{r-1} \Pi_{i=1}^r R(K_i). $$ \end{enumerate} \begin{thm}\label{theta_linking_P} Let $L$ be an oriented link with $n$ components. Then \begin{equation}\label{hr} H[R](L) = \left( \frac{z}{w} \right)^{n-1} \sum_{k=1}^n \eta^{k-1} \widehat{E}_k \sum_\pi R(\pi L) \end{equation} where the second summation is over all partitions $\pi$ of the components of $L$ into $k$ (unordered) subsets and $R(\pi L)$ denotes the product of the Homflypt polynomials of the $k$ sublinks of $L$ defined by $\pi$. Furthermore, $\widehat{E}_k = (\widehat{E}^{-1} - 1)(\widehat{E}^{-1} - 2) \cdots (\widehat{E}^{-1} - k + 1)$, with $\widehat{E} = E \frac{z}{w}$, $\widehat{E}_1 =1$, and $\eta = \frac{a - a^{-1}}{w}$. \end{thm} \begin{proof} Before proving the result, note the following equalities: \begin{align*} R(L_1 \sqcup L_2) &= \eta \, R(L_1) \, R(L_2), \\ H[R](L_1 \sqcup L_2) &= \frac{\eta}{E} \, H[R](L_1) \, H[R](L_2). \end{align*} In the case where both $L_1$ and $L_2$ are knots the above formulae follow directly from rules (R5) and (2) above. If at least one of $L_1$ and $L_2$ is a true link, then the formulae follow by doing independent skein processes on $L_1$ and $L_2$ for bringing them down to unlinked components, and then using the defining rules above. Suppose now that a diagram of $L$ is given. The proof is by induction on $n$ and on the number, $u$, of crossing changes between distinct components required to change $L$ to $n$ unlinked knots. If $n=1$ there is nothing to prove. So assume the result true for $n-1$ components and $u-1$ crossing changes and prove it true for $n$ and $u$. The induction starts when $u = 0$. Then $L$ is the union of $n$ unlinked components $L_1, \dots, L_n$ and all linking numbers are zero. A classic elementary result concerning the Homflypt polynomial shows that $R(L) = \eta^{n-1}R(L_1) \cdots R(L_n)$. Furthermore, in this situation, for any $k$ and $\pi$, $R(\pi L) = \eta^{n-k}R(L_1) \cdots R(L_n)$. Note that $H[R](L) = E^{1-n} R(L) = \eta^{n-1} E^{1-n} R(L_1) \cdots R(L_n)$. So it is required to prove that \begin{equation} \eta^{n-1} E^{1-n} = \left( \frac{z}{w} \right)^{n-1} \eta^{n-1} \sum_{k=1}^n S(n,k)(\widehat{E}^{-1} - 1)(\widehat{E}^{-1} - 2) \cdots (\widehat{E}^{-1} - k + 1), \end{equation} where $S(n,k)$ is the number of partitions of a set of $n$ elements into $k$ subsets. Now it remains to prove that: \begin{equation}\label{hstirling} E^{1-n} \left( \frac{z}{w} \right)^{1-n} = \widehat{E}^{1-n} = \sum_{k=1}^n S(n,k)(\widehat{E}^{-1} - 1)(\widehat{E}^{-1} - 2) \cdots (\widehat{E}^{-1} - k + 1). \end{equation} However, in the theory of combinatorics, $S(n,k)$ is known as a Stirling number of the second kind and this required formula is a well known result about such numbers. For instance, for $n=3$ it reads: $S(3,1) + S(3,2)(\widehat{E}^{-1} - 1) + S(3,3)(\widehat{E}^{-1} - 1)(\widehat{E}^{-1} - 2) = 1 + 3(\widehat{E}^{-1} - 1) + (\widehat{E}^{-2} - 3\widehat{E}^{-1} + 2) = \widehat{E}^{-2}$, as required. Now suppose that $u > 0$. Suppose that in a sequence of $u$ crossing changes that changes $L$, as above, into unlinked knots, the first change is to a crossing $c$ of sign $\epsilon$ between components $L_1$ and $L_2$. Let $L^\prime$ be $L$ with the crossing changed and $L^0$ be $L$ with the crossing annulled.
Now, from the definition of $H[R]$, $$ H[R] (L) = H[R] (L^\prime) + \epsilon z \, H[R] (L^0). $$ \noindent The induction hypotheses imply that the result is already proved for $L^\prime$ and $L^0$, so \begin{equation}\label{hstar} H[R] (L) = \left( \frac{z}{w} \right)^{n-1} \sum_{k=1}^n \eta^{k-1}\widehat{E}_k \sum_{\pi^\prime} R(\pi^\prime L^\prime) + \epsilon z \left( \frac{z}{w} \right)^{n-2} \sum_{k=1}^{n-1} \eta^{k-1}\widehat{E}_k \sum_{\pi^0} R(\pi^0 L^0), \end{equation} where $\pi^\prime$ runs through the partitions of the components of $L^\prime$ and $\pi^0$ through those of $L^0$. A sublink $X^0$ of $L^0$ can be regarded as a sublink $X$ of $L$ containing $L_1$ and $L_2$ but with $L_1$ and $L_2$ fused together by annulling the crossing at $c$. Let $X^\prime$ be the sublink of $L^\prime$ obtained from $X$ by changing the crossing at $c$. Then $$ R (X) = R (X^\prime) + \epsilon w \, R(X^0). $$ This means that the second (big) term in (\ref{hstar}) is \begin{equation}\label{hbig_term} \frac{z} {w} \left( \frac{z}{w} \right)^{n-2} \sum_{k=1}^{n-1} \eta^{k-1} \widehat{E}_k \sum_{\rho} \bigl( R(\rho L) - R(\rho^\prime L^\prime ) \bigr), \end{equation} where the summation is over all partitions $\rho$ of the components of $L$ for which $L_1$ and $L_2$ are in the same subset and $\rho^\prime$ is the corresponding partition of the components of $L^\prime$. Note that, for any partition $\pi$ of the components of $L$ inducing the partition $\pi^\prime$ of $L^\prime$, if $L_1$ and $L_2$ are in the same subset then we can have a difference between $R(\pi L)$ and $R(\pi^\prime L^\prime)$, but when $L_1$ and $L_2$ are in different subsets then \begin{equation}\label{hdiff_subsets} R(\pi^\prime L^\prime) = R(\pi L). \end{equation} Thus, substituting (\ref{hbig_term}) in (\ref{hstar}) we obtain: \begin{equation}\label{halmostdone} H[R](L) = \left( \frac{z}{w} \right)^{n-1} \sum_{k=1}^n \eta^{k-1} \widehat{E}_k \biggl( \sum_{\pi^\prime} R(\pi^\prime L^\prime) + \sum_{\rho} \bigl( R(\rho L) - R(\rho^\prime L^\prime)\bigr) \biggr), \end{equation} where $\pi^\prime$ runs through all partitions of $L^\prime$ and $\rho$ through partitions of $L$ for which $L_1$ and $L_2$ are in the same subset. Note that, for $k=n$ the second sum is zero. Therefore: \begin{equation}\label{hdone} H[R] (L) = \left( \frac{z}{w} \right)^{n-1} \sum_{k=1}^n \eta^{k-1} \widehat{E}_k \biggl( \sum_{\pi^\prime} R(\pi^\prime L^\prime) + \sum_{\rho} R(\rho L) \biggr), \end{equation} where $\pi^\prime$ runs through only the partitions of $L^\prime$ for which $L_1$ and $L_2$ are in different subsets and $\rho$ through all partitions of $L$ for which $L_1$ and $L_2$ are in the same subset. Hence, using (\ref{hdone}) and also (\ref{hdiff_subsets}), we obtain: $$ H[R] (L) = \left( \frac{z}{w} \right)^{n-1} \sum_{k=1}^n \eta^{k-1} \widehat{E}_k \sum_\pi R(\pi L) $$ and the induction is complete. \end{proof} \begin{rem} \rm Note that the combinatorial formula~(\ref{hr}) can be regarded by itself as a definition of the invariant $H[R]$, since the right-hand side of the formula is a regular isotopy invariant, $R$ being a regular isotopy invariant. The proof of Theorem~\ref{theta_linking_P} then shows that this invariant is $H[R]$, by verifying the skein relation and axioms for $H[R]$. In the same way the original Lickorish formula (\ref{lickorish}) can be regarded as a definition for the invariant $\Theta = P[P]$. Clearly, the two formulae for $H[H]$ and $\Theta$ are interchangeable by writhe normalization, recall (\ref{prfromhr}).
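\smallbreak To make the reading of (\ref{hr}) as a definition concrete, the following short Python/SymPy sketch (ours, purely illustrative) evaluates the right-hand side of (\ref{hr}). The encoding of the input data -- a map assigning to each subset of the components the Homflypt value of the corresponding sublink -- is a hypothetical choice of ours; the values $R(\pi L)$ themselves must be supplied by a Homflypt computation.
\begin{verbatim}
# Sketch: evaluate the right-hand side of the combinatorial formula
# for H[R].  R_of[frozenset of component labels] = R(sublink) must
# be supplied externally (e.g. from a Homflypt implementation).
import sympy as sp

z, w, a, E = sp.symbols('z w a E')
eta  = (a - 1/a) / w          # as in rule (R5)
Ehat = E * z / w              # Ehat = E z/w

def set_partitions(elems):
    """Yield all partitions of a list into unordered blocks."""
    if not elems:
        yield []
        return
    head, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):   # head joins an existing block
            yield part[:i] + [[head] + part[i]] + part[i+1:]
        yield [[head]] + part        # or head opens a new block

def Ehat_k(k):
    """(Ehat^{-1}-1)(Ehat^{-1}-2)...(Ehat^{-1}-k+1); 1 for k = 1."""
    val = sp.Integer(1)
    for j in range(1, k):
        val *= (1/Ehat - j)
    return val

def HR(components, R_of):
    total = sp.Integer(0)
    for part in set_partitions(list(components)):
        term = eta**(len(part) - 1) * Ehat_k(len(part))
        for block in part:           # R(pi L): product over blocks
            term *= R_of[frozenset(block)]
        total += term
    return (z / w)**(len(components) - 1) * total
\end{verbatim}
For instance, for a 2-component link $L$ the sketch returns $\frac{z}{w}\left[ R(L) + \eta\,(\widehat{E}^{-1} - 1)\, R(L_1) R(L_2) \right]$, in agreement with (\ref{hr}).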
\end{rem} \begin{rem} \rm \label{fiandh} In the first version of this paper \cite{kaula} the formula (\ref{hr}) was originally proved for the specialization $H[H]$ of $H[R]$, where $w=z$. Namely, \begin{equation}\label{phi} H[H](L)(z,a,E) = \sum_{k=1}^n \phi^{k-1} E_k \sum_\pi H(\pi L), \end{equation} where $\phi = \frac{a - a^{-1}}{z}$ and $E_k = (E^{-1} - 1)(E^{-1} - 2) \cdots (E^{-1} - k + 1)$, with $E_1 =1$. Konstantinos Karvounis \cite{ka2} adapted our proof to the (seemingly more general, see Proposition~\ref{topequivh} below) invariant $H[R]$. Note that for $z=w$ \eqref{hr} reduces to \eqref{phi}. So, we present here the proof of Karvounis. \end{rem} Furthermore, Karvounis noticed that, using \eqref{hr} and \eqref{phi}, we can relate the invariants $H[R]$ and $H[H]$: \begin{prop}[K. Karvounis \cite{ka2}] \label{topequivh} The invariants $H[R]$ and $H[H]$ are topologically equivalent. Specifically, it holds that: \begin{equation} H[R](L)(z,w,a,E) = \left( \frac{z}{w} \right)^{n-1} H[H](L)(w,a,\widehat{E}). \end{equation} \end{prop} \begin{rem} \rm \label{depth} Proposition~\ref{topequivh} implies that the four variables in the original setting of the invariant $H[R]$ can be reduced to three, by setting $z=w$, without any influence on the topological strength of the invariant. However, by taking $z$ and $w$ as independent variables, we developed the theory in its full generality. We believe that this separation of variables clarifies the logic of the skein theoretic proofs of invariance. In particular, when we keep track of the powers of $z$ in the polynomial, we are looking at the depth of switching operations needed to unlink the link. It may be possible to use this information for further topological invariants of the link. \end{rem} \begin{rem} \rm \label{sublinks} The combinatorial formula~(\ref{hr}) shows that the strength of $H[R]$ against $H$ comes from its ability to distinguish certain sublinks of Homflypt-equivalent links. In \cite{chjukala} a list of six 3-component links is given, which are Homflypt equivalent but are distinguished by the invariant $\Theta$ and thus also by $H[R]$. \end{rem} \begin{exmp}[L. Kauffman and D. Goundaroulis] \rm \label{ekt} Here is an example showing how $H[R]$ and the combinatorial formula give extra information in the case of two link components. We will use the ambient isotopy version of the Jones polynomial $V_{K}(q)$ and so first work with a skein calculation of the Jones polynomial, and then with a calculation of the generalized invariant $V[V](L)(q)$. We use the link $ThLink$ first found by Morwen Thistlethwaite \cite{Th} and generalized by Eliahou, Kauffman and Thistlethwaite \cite{EKT}. This link of two components is not detectable by the Jones polynomial, but it is detectable by our extension of the Jones polynomial. In doing this calculation we (Louis Kauffman and Dimos Goundaroulis) use Dror Bar Natan's Knot Theory package for Mathematica. In this package, the Jones polynomial is a function of $q$ and satisfies the skein relation $$ q^{-1} V_{K_+}(q) - q V_{K_-}(q) = (q^{1/2} - q^{-1/2}) V_{K_0}(q), $$ where $K_+,K_-,K_0$ is the usual skein triple. Let $$ a = q^2, \quad z=(q^{1/2} - q^{-1/2}), \quad b = q z, \quad c = q^{-1} z. $$ Then we have the skein expansion formulas: $$ V_{K_+} = a V_{K_-} + b V_{K_0} \quad \mbox{and} \quad V_{K_-} = a^{-1} V_{K_+} - c V_{K_0}. $$ In Figure~\ref{tlink} we show the Thistlethwaite link that is invisible to the Jones polynomial.
In the same figure we show an unlink of two components obtained from the Thistlethwaite link by switching four crossings. In Figure~\ref{k1234} we show the links $K_1,K_2,K_3,K_4$ that are intermediate to the skein process for calculating the invariants of $L$ by first switching only crossings between components. From this it follows that the knots and links in the figures indicated here satisfy the formula $$V_{ThLink} = bV_{K_1} + abV_{K_2} -ca^2 V_{K_3} - ac V_{K_4} + V_{Unlinked}.$$ This can be easily verified by the specific values computed in Mathematica: $$V_{ThLink} = -q^{-1/2} - q^{1/2}$$ $$V_{K_1} = -1+\frac{1}{q^7}-\frac{2}{q^6}+\frac{3}{q^5}-\frac{4}{q^4}+\frac{4}{q^3}-\frac{4}{q^2}+\frac{3}{q}+q$$ $$V_{K_2} = 1-\frac{1}{q^9}+\frac{3}{q^8}-\frac{4}{q^7}+\frac{5}{q^6}-\frac{6}{q^5}+\frac{5}{q^4}-\frac{4}{q^3}+\frac{3}{q^2}-\frac{1}{q}$$ $$V_{K_3} = 1-\frac{1}{q^9}+\frac{2}{q^8}-\frac{3}{q^7}+\frac{4}{q^6}-\frac{4}{q^5}+\frac{4}{q^4}-\frac{3}{q^3}+\frac{2}{q^2}-\frac{1}{q}$$ $$V_{K_4} = -1-\frac{1}{q^6}+\frac{2}{q^5}-\frac{2}{q^4}+\frac{3}{q^3}-\frac{3}{q^2}+\frac{2}{q}+q$$ $$V_{Unlinked}= \frac{1}{q^{13/2}}-\frac{1}{q^{11/2}}-\frac{1}{q^{7/2}}+\frac{1}{q^{3/2}}-\frac{1}{\sqrt{q}}-q^{3/2}$$ This is a computational proof that the Thistlethwaite link is not detectable by the Jones polynomial. If we compute $V[V](ThLink)(q)$ then we modify the computation to $$ V[V](ThLink)(q) = bV_{K_1} + abV_{K_2} -ca^2 V_{K_3} - ac V_{K_4} + E^{-1}V_{Unlinked}, $$ and it is quite clear that this is non-trivial when the new variable $E$ is not equal to $1$. On the other hand, the Lickorish formula for this case tells us that, for the regular isotopy version of the Jones polynomial $V'[V'](ThLink)(q)$, $$ V'[V'](ThLink)(q) = \eta(E^{-1} -1)\,V'_{C_1}(q)\,V'_{C_2}(q) + V'_{ThLink}(q), $$ where $C_1, C_2$ denote the two components of $ThLink$; this is the shape the formula takes whenever we evaluate a 2-component link. Note that $\eta(E^{-1} -1)$ is non-zero whenever $E \ne 1$. Thus it is quite clear that the Lickorish formula detects the Thistlethwaite link, since the Jones polynomials of the components of that link are non-trivial. We have, in this example, given two ways to see how the extended invariant detects the link $ThLink$. The first way shows how the detection works in the extended skein theory. The second way shows how it works using the Lickorish formula. \end{exmp} \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=6.5cm]{ThLinkUnLink.pdf} \end{tabular} \caption{The Thistlethwaite Link and Unlink} \label{tlink} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=5.3cm]{K1234.pdf} \end{tabular} \caption{ The links $K_1,K_2,K_3,K_4$} \label{k1234} \end{center} \end{figure} \section{Generalization of the Dubrovnik and the Kauffman polynomials} \label{generalregkd} In this section we define the general regular isotopy invariants for links, $D[T]$ and $K[Q]$, which generalize the Dubrovnik polynomial, $D$, and the Kauffman polynomial, $K$, respectively. More precisely, we first prove Theorems~\ref{doft} and \ref{kofq}. We then prove a closed combinatorial formula for $D[T]$ and an analogous formula for $K[Q]$. We recall the notations $\mathcal{L}^u$ for the class of unoriented links and $\mathcal{Z} := {\mathbb Z}[z, w, a^{\pm 1}, E^{\pm 1}]$ for the ring of finite Laurent polynomials in four indeterminates $z, w, a, E$. For proving Theorems~\ref{doft} and \ref{kofq} we keep the same notations and we follow the same method and techniques as in Section~\ref{generalregh}.
So, we will avoid repetitions and we will only elaborate on the differences in the proofs. The computing algorithm for $D[T]$ and $K[Q]$ is analogous to the one in Subsection~\ref{algorithm} for $H[R]$, where in Step~2 the rules of Theorem~\ref{doft} and of Theorem~\ref{kofq} now apply for $D[T]$ and $K[Q]$ respectively. \subsection{Extending the Dubrovnik Polynomial - Proof of Theorem~\ref{doft}}\label{generalregd} In the proofs that follow, the evaluation $D[T](L) \in \mathcal{Z}$ on a generic link diagram $L \in \mathcal{L}^u$ will be shortened to $(L)$. Moreover, let $\varepsilon$ denote the type of a mixed crossing in $L$. Then Rule~(1) of Theorem~\ref{doft} can be re-written as: \begin{equation}\label{dmixedskein} (L_{\varepsilon}) = (L_{-\varepsilon}) + \varepsilon \, z \, \big[(L_0) - (L_{\infty})\big]. \end{equation} Adapting Proposition~\ref{orderxings} and Corollary~\ref{direction} to $D[T]$, for a given mixed crossing $i$ of a diagram $L \in \mathcal{L}^u$ we denote by $\sigma_i L$ the same diagram but with that crossing switched and by $$ s_i L := a_i L - b_i L, $$ the formal difference of the diagrams $a_i L$ and $b_i L$, which are the same as $L$ but with the $A$- and $B$-smoothing respectively replacing crossing $i$. In this notation we have the polynomial equation $(s_i L) = (a_i L) - (b_i L)$. Then the proof carries through with the same formal expressions as in Proposition~\ref{orderxings}, concluding with the equality of $(L)$ and $(L^\prime)$. Propositions~\ref{basepoints} and \ref{skeinrule} adapt directly for the case of $D[T]$. Concerning now Proposition~\ref{reidem}, we first check the Reidemeister~II move, adapting the proof of Proposition~\ref{reidem}. Looking at the left-hand instance of Figure~\ref{reidemII} for the case $i<j$, we note first that the two mixed crossings to be switched are of opposite type. Proceeding with the analysis, we obtain the final equation: \begin{equation}\label{dreidemII} (L) = (\sigma_2 \sigma_1 L) - z \, \big[ (a_2 \sigma_1 L) - (b_2 \sigma_1 L) \big] + z \, \big[ (a_1 L ) - (b_1 L ) \big], \end{equation} involving the diagrams in Figure~\ref{kreidIIproof}, which comprise a combination of the diagrams involved in the cases (a) and (b) illustrated in Figure~\ref{reidemIIproof}. Then, by the same arguments as in Proposition~\ref{reidem} we obtain invariance of $D[T]$ under the Reidemeister~II moves. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=5.6cm]{kreidIIproof.pdf} \caption{The diagrams in the analysis of Reidemeister II moves} \label{kreidIIproof} \end{center} \end{figure} For proving invariance of $D[T]$ under Reidemeister~III moves we also follow the same strategy as for $H[R]$ in Proposition~\ref{reidem}. In the cases where one (resp. two) of the crossings involved in the move has to be switched, all four configurations of Figure~\ref{reidemIIIpf1} (resp. Figure~\ref{reidemIIIpf2}) will enter the picture and the same arguments apply as for $H[R]$, see Figure~\ref{kreidIIIproof1}. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=5.6cm]{kreidIIIproof1.pdf} \caption{Analysis of the Reidemeister III moves} \label{kreidIIIproof1} \end{center} \end{figure} We will finally adapt Proposition~\ref{ordercpts} to the case of $D[T]$. We use the same notations as in the proof of Proposition~\ref{ordercpts}, but with different interpretations due to (\ref{dmixedskein}).
Then, with the conventions above and denoting $$ L_{0,i} := a_i L_{\varepsilon_i} - b_i L_{\varepsilon_i} \qquad \mbox{and} \qquad L^\prime_{0,j} := a_j L^\prime_{\varepsilon_j} - b_j L^\prime_{\varepsilon_j}, $$ for $i = 1, \ldots, r$ and $j = r+1, \ldots, r+ r^\prime$, we have the same formal expressions as in Proposition~\ref{ordercpts}, where further the invariant $R$ is replaced by $T$. Therefore, the proof of Theorem~\ref{doft} is concluded. \hfill \qed \subsection{Translation to Ambient Isotopy} \label{generalamby} In this subsection we define the ambient isotopy generalized invariant, counterpart of the regular isotopy generalized invariant $D[T]$ constructed above. Let $Y$ denote the classical ambient isotopy Dubrovnik polynomial. Then, one can obtain the ambient isotopy invariant $Y$ from its regular isotopy counterpart $D$ via the formula: $$ Y (L) := a^{-wr (L)} D(L), $$ where $wr (L)$ is the total writhe of the diagram $L$ for some choice of orientation of $L$. Analogously, and letting $Z$ denote $Y$ but with a different variable, from our generalized regular isotopy invariant $D[T]$ one can derive an ambient isotopy invariant $Y[Z]$ via: \begin{equation}\label{yzfromdt} Y[Z] (L) := a^{-wr (L)} D[T] (L). \end{equation} In order to have a skein relation at hand, one works with the invariant in its regular isotopy form. \subsection{A closed combinatorial formula for the Dubrovnik polynomial extension $D[T]$}\label{secxi} Recall the rules for the Dubrovnik polynomial and for our extension $D[T]$ of it. We let $D (z,a)$ denote the regular isotopy version of the Dubrovnik polynomial and $T (w,a)$ denote the same invariant but with a different parameter $w$ in place of $z$. Then there exists a unique regular isotopy invariant of classical unoriented links $D[T]: \mathcal{L}^u \rightarrow {\mathbb Z}[z, w, a^{\pm 1}, E^{\pm 1}]$, where $z, \, w , \, a$ and $E$ are indeterminates, defined by the following rules: \begin{enumerate} \item On crossings involving different components the following skein relation holds: $$ D[T] (L_+) - D[T] (L_-) = z \, \big( D[T] (L_0) - D[T] (L_{\infty}) \big), $$ where $L_+$, $L_-$, $L_0$, $L_{\infty}$ is an unoriented Conway quadruple, \item For a union of $r$ unlinked knots in $\mathcal{L}^u$, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$, with $r \geqslant 1$, it holds that: $$ D[T] ({\mathcal K}^r) = E^{1-r} \, T ({\mathcal K}^r).
$$ \end{enumerate} We recall that the invariant $T(w,a)$ is determined by the following rules: \begin{enumerate} \item[(T1)] For $L_+$, $L_-$, $L_0$, $L_{\infty}$ an unoriented Conway quadruple, the following skein relation holds: $$ T (L_+) - T (L_-) = w \, \big( T (L_0) - T (L_{\infty}) \big), $$ \item[(T2)] The indeterminate $a$ is the positive type curl value for $T$: $$ T ( \raisebox{-.1cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (-0.22,-0.22); \draw [line width=0.35mm ](-.7,.7)--(0,0); \draw [line width=0.35mm] (0.22,0.22) -- (.7,.7); \draw [line width=0.35mm] (0,0) -- +(.7,-.7); \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} \ ) = a \, T ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ) \quad \mbox{and} \quad T ( \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (0,0) ; \draw [line width=0.35mm] (-.7,.7)--(-0.22,0.22); \draw [line width=0.35mm] (0,0) -- (.7,.7); \draw [line width=0.35mm] (0.22,-0.22) -- +(.6,-.6); \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} ) = a^{-1} \, T ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ), $$ \item[(T3)] On the standard unknot: $$ T(\bigcirc) = 1. $$ We also recall that the above defining rules imply the following: \item[(T4)] For a diagram of the unknot, $U$, $T$ is evaluated by taking $$ T(U) = a^{wr(U)}, $$ \item[(T5)] $T$, being the Dubrovnik polynomial, is multiplicative on a union of unlinked knots, ${\mathcal K}^r := \sqcup_{i=1}^r K_i$. Namely, for $\delta := \frac{a - a^{-1}}{w} + 1$ we have: $$ T ({\mathcal K}^r) = \delta^{r-1} \Pi_{i=1}^r T (K_i). $$ \end{enumerate} Consequently, on the standard unknot we evaluate $D[T](\bigcirc) = T(\bigcirc) = 1$. \smallbreak \begin{thm}\label{thmxi} Let $L$ be an unoriented link with $n$ components. Then \begin{equation} \label{xi} D[T](L) = \left( \frac{z}{w} \right)^{n-1}\sum_{k=1}^n \delta^{k-1}\widehat{E}_k \sum_\pi T(\pi L) \end{equation} where the second summation is over all partitions $\pi$ of the components of $L$ into $k$ (unordered) subsets and $T(\pi L)$ denotes the product of the Dubrovnik polynomials of the $k$ sublinks of $L$ defined by $\pi$. Furthermore, $\widehat{E}_k = (\widehat{E}^{-1} - 1)(\widehat{E}^{-1} - 2) \cdots (\widehat{E}^{-1} - k + 1)$, with $\widehat{E} = E \frac{z}{w}$, $\widehat{E}_1 =1$, and $\delta = \frac{a - a^{-1}}{w} + 1$. \end{thm} \begin{proof} Before proving the result, note the following equalities: \begin{align*} T(L_1 \sqcup L_2) &= \delta \, T(L_1) T(L_2), \\ D[T](L_1 \sqcup L_2) &= \frac{\delta}{E} \, D[T](L_1) D[T](L_2). \end{align*} Suppose that a diagram of $L$ is given. The proof is by induction on $n$ and on the number, $u$, of crossing changes between distinct components required to change $L$ to $n$ unlinked knots. If $n=1$ there is nothing to prove. So assume the result true for $n-1$ components and $u-1$ crossing changes and prove it true for $n$ and $u$. The induction starts when $u = 0$. Then $L$ is the union of $n$ unlinked components $L_1, L_2, \dots, L_n$.
A classic elementary result concerning the Dubrovnik polynomial shows that $$ T(L) = \delta^{n-1}T(L_1)T(L_2)\cdots T(L_n). $$ Furthermore, in this situation, for any $k$ and $\pi$, $T(\pi L) = \delta^{n-k}T(L_1)T(L_2)\cdots T(L_n)$. So it is required to prove that \begin{equation} \delta^{n-1} E^{1-n} = \left( \frac{z}{w} \right)^{n-1} \delta^{n-1} \sum_{k=1}^n S(n,k)(\widehat{E}^{-1} - 1)(\widehat{E}^{-1} - 2) \cdots (\widehat{E}^{-1} - k + 1), \end{equation} where $S(n,k)$ is the number of partitions of a set of $n$ elements into $k$ subsets. Now it remains to prove that: \begin{equation}\label{dstirling} E^{1-n} \left( \frac{z}{w} \right)^{1-n} = \widehat{E}^{1-n} = \sum_{k=1}^n S(n,k)(\widehat{E}^{-1} - 1)(\widehat{E}^{-1} - 2) \cdots (\widehat{E}^{-1} - k + 1). \end{equation} However, in the theory of combinatorics, $S(n,k)$ is known as a Stirling number of the second kind and this required formula is a well known result about such numbers, as recalled in the proof of Theorem~\ref{theta_linking_P}. Now suppose that $u > 0$. Suppose that in a sequence of $u$ crossing changes that changes $L$, as above, into unlinked knots, the first change is to a crossing $c$ between components $L_1$ and $L_2$ with relative sign $\epsilon$. Let $L^\prime$ be $L$ with the crossing changed and $L^0$ and $L^{\infty}$ be $L$ with the crossing annulled in the two possible ways. Now, from the definition of $D[T]$, $$ D[T](L) = D[T](L^\prime) + \epsilon z\big(D[T](L^0) - D[T](L^{\infty})\big). $$ \noindent The induction hypotheses imply that the result is already proved for $L^\prime$, $L^0$ and $L^{\infty}$, so \begin{equation}\label{dstar} D[T](L)= \left( \frac{z}{w} \right)^{n-1} \sum_{k=1}^n \delta^{k-1}\widehat{E}_k \sum_{\pi^\prime} T(\pi^\prime L^\prime) + \epsilon z \left( \frac{z}{w} \right)^{n-2} \biggl(\sum_{k=1}^{n-1} \delta^{k-1}\widehat{E}_k \sum_{\pi^0} T(\pi^0 L^0) - \sum_{k=1}^{n-1} \delta^{k-1}\widehat{E}_k \sum_{\pi^{\infty}} T(\pi^{\infty} L^{\infty})\biggr), \end{equation} where $\pi^\prime$ runs through the partitions of the components of $L^\prime$, $\pi^0$ through those of $L^0$ and $\pi^{\infty}$ through those of $L^{\infty}$. A sublink $X^0$ of $L^0$ can be regarded as a sublink $X$ of $L$ containing $L_1$ and $L_2$ but with $L_1$ and $L_2$ fused together by annulling the crossing at $c$. Similarly, a sublink $X^{\infty}$ of $L^{\infty}$ can be regarded as a sublink $X$ of $L$ containing $L_1$ and $L_2$ but with $L_1$ and $L_2$ fused together by annulling the crossing at $c$ in the other way. Let $X^\prime$ be the sublink of $L^\prime$ obtained from $X$ by changing the crossing at $c$. Then $$ T(X) = T(X^\prime) +\epsilon w \big( T(X^0) - T(X^{\infty}) \big). $$ This means that the second (big) term in (\ref{dstar}) is \begin{equation}\label{dbig_term} \frac{z}{w} \left( \frac{z}{w} \right)^{n-2} \sum_{k=1}^{n-1} \delta^{k-1}\widehat{E}_k \sum_{\rho} \bigl( T(\rho L) - T(\rho^\prime L^\prime ) \bigr), \end{equation} where the summation is over all partitions $\rho$ of the components of $L$ for which $L_1$ and $L_2$ are in the same subset and $\rho^\prime$ is the corresponding partition of the components of $L^\prime$. \noindent Thus, substituting (\ref{dbig_term}) in (\ref{dstar}) we obtain: \begin{equation}\label{dalmostdone} D[T](L)= \left( \frac{z}{w} \right)^{n-1} \sum_{k=1}^n \delta^{k-1}\widehat{E}_k \biggl( \sum_{\pi^\prime} T(\pi^\prime L^\prime) + \sum_{\rho} \bigl( T(\rho L) - T(\rho^\prime L^\prime)\bigr) \biggr), \end{equation} where $\pi^\prime$ runs through all partitions of $L^\prime$ and $\rho$ through partitions of $L$ for which $L_1$ and $L_2$ are in the same subset. Note that, for $k=n$ the second sum is zero.
\noindent Therefore \begin{equation}\label{ddone} D[T](L) = \sum_{k=1}^n \delta^{k-1}\hat{E_k} \biggl( \sum_{\pi^\prime} T(\pi^\prime L^\prime) + \sum_{\rho} T(\rho L) \biggr), \end{equation} where $\pi^\prime$ runs through only partitions of $L^\prime$ for which $L_1$ and $L_2$ are in different subsets and $\rho$ through all partitions of $L$ for which $L_1$ and $L_2$ are in the same subset. Note that, for any partition $\pi$ of the components of $L$ inducing partition $\pi^\prime$ of $L^\prime$, if $L_1$ and $L_2$ are in the same subset then we can have a difference between $T(\pi L)$ and $T(\pi^\prime L^\prime)$, but when $L_1$ and $L_2$ are in different subsets then \begin{equation}\label{ddiff_subsets} T(\pi^\prime L^\prime) = T(\pi L). \end{equation} \noindent Hence, using (\ref{ddone}) and also (\ref{ddiff_subsets}), we obtain: $$ D[T](L) = \sum_{k=1}^n \delta^{k-1}\hat{E_k} \sum_\pi T(\pi L) $$ and the induction is complete. \end{proof} Note that the formula (\ref{xi}) can be regarded by itself as a definition of the invariant $D[T]$, since the right-hand side of the formula is an invariant of regular isotopy, as $T$ is a regular isotopy invariant. Furthermore, Remark~\ref{sublinks} applies also for the invariant $D[T]$. Finally, Remark~\ref{fiandh} applies here too, so, adapting Proposition~\ref{topequivh}, we have in analogy: \begin{prop}\label{topequivd} The invariants $D[T]$ and $D[D]$ are topologically equivalent. Specifically, it holds that: \begin{equation} D[T](L)(z,w,a,E) = \left( \frac{z}{w} \right)^{n-1} D[D](L)(w,a,\widehat{E}). \end{equation} \end{prop} Proposition~\ref{topequivd} implies that the four variables in the original setting of the invariant $D[T]$ can be reduced to three without any influence on the topological strength of the invariant. However, we opted for keeping $z$ and $w$ as independent variables, in order to develop the theory in its full generality. Recall also Remark~\ref{depth}, which applies for $D[T]$ as well. \subsection{Extending the Kauffman polynomial - Proof of Theorem~\ref{kofq}} \label{generalregk} We shall now describe the extension of the Kauffman polynomial. In the proofs that follow, the evaluation $K[Q](L) \in \mathcal{Z}$ on a generic link diagram $L \in \mathcal{L}^u$ will be shortened to $(L)$. Moreover, let $\varepsilon$ denote the type of a mixed crossing in $L$. Then Rule~(1) of Theorem~\ref{kofq} can be re-written as: \begin{equation}\label{kmixedskein} (L_{\varepsilon}) = - (L_{-\varepsilon}) + z \, \big[(L_0) + (L_{\infty})\big]. \end{equation} Note that the symmetry of (\ref{kmixedskein}) implies that we may suppress the indication $\varepsilon$ in the computations. For this reason and also due to the difference in signs from $D[T]$ we will record carefully some computations in the proofs, since they do not carry through directly from $H[R]$ and $D[T]$. Adapting Proposition~\ref{orderxings} to $K[Q]$, for a given mixed crossing $i$ of a diagram $L \in \mathcal{L}^u$ we denote by $\sigma_i L$ the same diagram but with that crossing switched and by $$ s_i L := a_i L + b_i L, $$ the formal sum of the diagrams $a_i L$ and $b_i L$, which are the same as $L$ but with the $A$- and $B$-smoothing respectively replacing crossing $i$. In this notation we have the polynomial equation $(s_i L) = (a_i L) + (b_i L)$.
Then, relation~(\ref{ij}) in the proof of Proposition~\ref{orderxings} is replaced by the relation: \begin{equation}\label{kij} \begin{array}{rcl} (L) & = & -(\sigma_i L) + z (s_i L) = (\sigma_j \sigma_i L) - z (s_j \sigma_i L) + z (s_i L) \end{array} \end{equation} and relation~(\ref{ji}) is replaced by the relation: \begin{equation}\label{kji} \begin{array}{rcl} (L^\prime) & := & - (\sigma_j L) + z (s_j L) = (\sigma_i \sigma_j L) - z (s_i \sigma_j L) + z (s_j L). \end{array} \end{equation} Applying now relation~(\ref{kmixedskein}) to the link diagrams $a_i L, b_i L, a_j L, b_j L$, namely: \begin{center} $ (a_i L) = -(\sigma_j a_i L) + z (s_j a_i L) $ \end{center} \begin{center} $ (b_i L) = -(\sigma_j b_i L) + z (s_j b_i L) $ \end{center} \begin{center} $ (a_j L) = -(\sigma_i a_j L) + z (s_i a_j L) $ \end{center} \begin{center} $ (b_j L) = -(\sigma_i b_j L) + z (s_i b_j L) $ \end{center} \noindent and replacing in (\ref{kij}) and (\ref{kji}) we obtain equality of $(L)$ and $(L^\prime)$, using the same arguments as in Proposition~\ref{orderxings}. Propositions~\ref{basepoints} and \ref{skeinrule} carry through directly for the case of $K[Q]$. Concerning now Proposition~\ref{reidem}, we first check the Reidemeister~II move, adapting the proof of Proposition~\ref{reidem}. Looking at the left-hand instance of Figure~\ref{reidemII} for the case $i<j$, we note first that the two mixed crossings to be switched are of opposite type. Proceeding with the analysis we obtain the final equation: \begin{equation}\label{kreidemII} (L) = (\sigma_2 \sigma_1 L) - z \, \big[ (a_2 \sigma_1 L) + (b_2 \sigma_1 L) \big] + z \, \big[ (a_1 L ) + (b_1 L ) \big], \end{equation} involving the diagrams in Figure~\ref{kreidIIproof}, which comprise a combination of the diagrams involved in the cases (a) and (b) illustrated in Figure~\ref{reidemIIproof}. Then, by the same arguments as in the proof of Proposition~\ref{reidem} we obtain invariance of $K[Q]$ under the Reidemeister~II moves. For proving invariance of $K[Q]$ under Reidemeister~III moves we follow the same strategy as for $H[R]$ in Proposition~\ref{reidem}. In the cases where one (resp. two) of the crossings involved in the move need to be switched, all four configurations of Figure~\ref{reidemIIIpf1} (resp. Figure~\ref{reidemIIIpf2}) will enter the picture and the same arguments apply as for $H[R]$, see Figure~\ref{kreidIIIproof1}. We will finally adapt Proposition~\ref{ordercpts} to the case of $K[Q]$. We use the same notations as in the proof of Proposition~\ref{ordercpts}, but with different interpretations due to (\ref{kmixedskein}). Then, with the conventions above and by denoting $$ L_{0,i} := a_i L_{\varepsilon_i} + b_i L_{\varepsilon_i} \qquad \mbox{and} \qquad L^\prime_{0,j} := a_j L^\prime_{\varepsilon_j} + b_j L^\prime_{\varepsilon_j}, $$ for $i = 1, \ldots, r$ and $j = r+1, \ldots, r+ r^\prime$, we have: \begin{equation} \label{k1tor} \begin{array}{lclcl} (AB) & := & (L_{\varepsilon_1}) & = & -(L_{-\varepsilon_1}) + z \, (L_{0,1}), \\ (L_{-\varepsilon_1}) & := & (L_{\varepsilon_2}) & = & -(L_{-\varepsilon_2}) + z \, (L_{0,2}), \\ & \vdots & & & \\ (L_{-\varepsilon_{r-1}}) & := & (L_{\varepsilon_{r}}) & = & -(L_{-\varepsilon_{r}}) + z \, (L_{0,r}) \\ & & & = & -(\frac{A}{B}) + z \, (L_{0,r}).
\end{array} \end{equation} At the same time and selecting in $BA$ the crossing $r+1$ we have: \begin{equation} \label{krplus1torprime} \begin{array}{lclcl} (BA) & := & (L^\prime_{\varepsilon_{r+1}}) & = & -(L^\prime_{-\varepsilon_{r+1}}) + z \, (L^\prime_{0, r+1}), \\ (L^\prime_{-\varepsilon_{r+1}}) & := & (L^\prime_{\varepsilon_{r+2}}) & = & - (L^\prime_{-\varepsilon_{r+2}}) + z \, (L^\prime_{0, r+2}), \\ & \vdots & & & \\ (L^\prime_{-\varepsilon_{r+ r^\prime - 1}}) & := & (L^\prime_{\varepsilon_{r+ r^\prime}}) & = & -(L^\prime_{-\varepsilon_{r+ r^\prime}}) + z \, (L^\prime_{0, r+ r^\prime}) \\ & & & = & - (\frac{B}{A}) + z \, (L^\prime_{0, r+ r^\prime}). \end{array} \end{equation} Substituting now the expressions in (\ref{k1tor}) consecutively, starting from the last equation, we obtain: \begin{equation} \label{kABtoAoverB} (AB) = -(\frac{A}{B}) + \, z \, \left[(L_{0,1}) + \cdots + (L_{0,r}) \right]. \end{equation} Analogously, from (\ref{krplus1torprime}) we obtain: \begin{equation} \label{kBAtoBoverA} (BA) = -(\frac{B}{A}) + \, z \, \left[(L^\prime_{0,r+1}) + \cdots + (L^\prime_{0,r+ r^\prime}) \right]. \end{equation} Denoting now: \begin{equation} \label{kXandY} (X) := (L_{0,1}) + \cdots + (L_{0,r}) \quad {\rm and } \quad (Y) := (L^\prime_{0,r+1}) + \cdots + (L^\prime_{0,r+ r^\prime}), \end{equation} Eqs.~\ref{kABtoAoverB} and~\ref{kBAtoBoverA} are shortened to the following: \begin{equation} \label{kABXY} (AB) = -(\frac{A}{B}) + \, z \, (X) \quad {\rm and } \quad (BA) = -(\frac{B}{A}) + \, z \, (Y). \end{equation} Subtracting the equations in (\ref{kABXY}) by parts we obtain: \begin{equation} \label{kABvsBAXY} (AB) - (BA) = (\frac{B}{A}) - (\frac{A}{B}) + z \, [(X) - (Y)]. \end{equation} Further, we observe that the descending stacks $\frac{A}{B}$ and $\frac{B}{A}$ are both assigned the same value of $K[Q]$ since, by the recursive definition $(n)$, we have: \begin{equation} \label{reductiontoq} (\frac{A}{B}) = E^{1-c} \, Q(\frac{A}{B}) \quad \& \quad (\frac{B}{A}) = E^{1-c} \, Q(\frac{B}{A}), \end{equation} where $c$ is the number of components in both descending stacks. But, by the well-definedness of the link invariant $Q$ it is ensured that $Q(\frac{A}{B}) = Q(\frac{B}{A})$. Hence $(\frac{B}{A}) = (\frac{A}{B})$, and (\ref{kABvsBAXY}) becomes: \begin{equation} \label{kABBAonlyXY} (AB) - (BA) = z \, [(X) - (Y)]. \end{equation} In order to prove further that $(X) = (Y)$ we apply the same procedure as above, switching and smoothing progressively all $r$ crossings starting from $AB$ and all $r^\prime$ crossings starting from $BA$, but this time working with the invariant $Q$. We obtain equations of the same form as (\ref{k1tor}) and (\ref{krplus1torprime}), but now $z$ is replaced by $w$ and the invariant $Q$ is evaluated on all diagrams. Summing up we obtain: \begin{equation} \label{kevalR} Q(AB) - Q(BA) = w \, [Q(X) - Q(Y)], \end{equation} where $Q(X) := Q(L_{0,1}) + \cdots + Q(L_{0,r})$ and $Q(Y) := Q(L^\prime_{0,r+1}) + \cdots + Q(L^\prime_{0,r+ r^\prime})$. Of course, by the well-definedness of $Q$ we have $Q(AB) = Q(BA)$. Now, by the fact that all intermediate generic diagrams in (\ref{k1tor}) and (\ref{krplus1torprime}) that come from smoothings have $(n-1)$ crossings, the inductive hypothesis $(n-1)$ applies to all of them. Furthermore, these diagrams are descending stacks of $c-1$ components, since the components $A$ and $B$ have merged into one.
So, by the inductive hypothesis $(n-1)$: \begin{equation} \label{kRonLi} (L_{0,i}) = E^{2-c} \, Q(L_{0,i}), \end{equation} for all $i=1,\ldots, r+ r^\prime$. Multiplying then Eq.~\ref{kevalR} by $E^{2-c}$ we obtain: \begin{equation} \label{kequalXY} (X) - (Y) = 0. \end{equation} Substituting, finally, (\ref{kequalXY}) in (\ref{kABBAonlyXY}) we obtain: \begin{equation} \label{kBAequalAB} (AB) = (BA) \end{equation} and the proof of the Proposition is concluded. Hence, the proof of Theorem~\ref{kofq} is also concluded. \hfill \qed \subsection{Translation to Ambient Isotopy} \label{generalambf} As for the Dubrovnik polynomial, in this subsection we define for the Kauffman polynomial the ambient isotopy generalized invariant, counterpart of the regular isotopy generalized invariant $K[Q]$ constructed above. Let $K$ denote the classical regular isotopy Kauffman polynomial. Then, one can obtain the ambient isotopy invariant $F$ from its regular isotopy counterpart $K$ via the formula: $$ F (L) := a^{-wr (L)} K(L), $$ where $wr (L)$ is the total writhe of the diagram $L$ for some choice of orientation of $L$. Analogously, and letting $S$ denote $F$ but with a different variable, from our generalized regular isotopy invariant $K[Q]$ one can derive an ambient isotopy invariant $F[S]$ via: \begin{equation}\label{fsfromkq} F[S] (L) := a^{-wr (L)} K[Q] (L). \end{equation} In order to have a skein relation, one stays with the regular isotopy form. \subsection{A closed combinatorial formula for $K[Q]$}\label{secpsi} As for the case of $H[R]$, we will give here an analogous formula for our regular isotopy generalization $K[Q]$. For this purpose we need to indicate the basic skein formulas for the regular isotopy invariant. Note that we use here $Q$ as a fully independent version of the Kauffman polynomial, and the loop value of $Q$ is denoted $\gamma = \frac{a + a^{-1}}{w} -1$.
So, $K[Q]$ is defined by the rules: \begin{enumerate} \item $K[Q](L) = E^{1-k}Q(L)$ when $L$ is the union of $k$ {\it unlinked} components; \item $K[Q](L_+) + K[Q](L_-) = z \left(K[Q](L_0) + K[Q](L_{\infty}) \right)$, for any unoriented Conway quadruple $L_+, L_-, L_0, L_{\infty}$ in which the arcs of $L_+$ are in {\it different} components of the link $L$; \item $Q(L)$, the regular isotopy Kauffman polynomial, is defined by the rules: \begin{center} $ Q(L_+) + Q(L_-) = w \big( Q(L_0) + Q(L_{\infty})\big), $ \end{center} \begin{center} $ Q(\bigcirc) = 1, $ \end{center} \begin{center} $ Q ( \raisebox{-.1cm}{ \begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (-0.22,-0.22); \draw [line width=0.35mm ](-.7,.7)--(0,0); \draw [line width=0.35mm] (0.22,0.22) -- (.7,.7); \draw [line width=0.35mm] (0,0) -- +(.7,-.7); \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} \ ) = a \, Q ( \raisebox{.06cm}{ \begin{tikzpicture}[scale=.2, mydeco/.style ] \draw [line width=0.35mm, postaction = {mydeco=.6 ,decorate}] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ) \quad \mbox{and} \quad Q ( \raisebox{-.1cm}{\begin{tikzpicture}[scale=.2] \draw [line width=0.35mm] (-.7,-.7)-- (0,0) ; \draw [line width=0.35mm] (-.7,.7)--(-0.22,0.22); \draw [line width=0.35mm] (0,0) -- (.7,.7); \draw [line width=0.35mm] (0.22,-0.22) -- +(.6,-.6); \draw [line width=0.35mm] plot [smooth, tension=2] coordinates { (-.7,.7) (0,1.3) (.7,.7)}; \end{tikzpicture}} ) = a^{-1} \, Q (\raisebox{.06cm}{ \begin{tikzpicture}[scale=.2, mydeco/.style ] \draw [line width=0.35mm, postaction = {mydeco=.6 ,decorate}] plot [smooth, tension=2] coordinates {(0,0) (1,.2) (2,0)}; \end{tikzpicture}}\ ). $ \end{center} \end{enumerate} With these preliminaries, here is the closed combinatorial formula for $K[Q](L)$: \begin{thm} \label{thmpsi} Let $L$ be an unoriented link with $n$ components. Then \begin{equation} \label{psi} K[Q] (L) = i^{wr(L)} (\frac{z}{w})^{n-1} \sum_{k=1}^n \gamma^{k-1}\hat{E_k} \sum_\pi i^{-wr(\pi L)}Q(\pi L) \end{equation} where the second summation is over all partitions $\pi$ of the components of $L$ into $k$ (unordered) subsets and $Q(\pi L)$ denotes the product of the Kauffman polynomials of the $k$ sublinks of $L$ defined by $\pi$. The term $wr(\pi L)$ denotes the sum of the writhes of the parts of the partitioned link $\pi L$. Furthermore, $\hat{E_k} = (\hat{E}^{-1} - 1)(\hat{E}^{-1} - 2) \cdots (\hat{E}^{-1} - k + 1)$, with $\hat{E_1} =1$, $\hat{E} = \frac{z}{w}E$ and $\gamma = \frac{a + a^{-1}}{w} -1$. \end{thm} \begin{proof} In order to prove this Theorem, we first discuss a translation between the Kauffman and Dubrovnik polynomials. We then use this translation to deduce a combinatorial formula for $K[Q]$ from the combinatorial formula we have already proved for the Dubrovnik polynomial extension $D[T]$. The following equation is the translation formula from the Dubrovnik to Kauffman polynomial, observed by W.B.R. Lickorish \cite{kau4}: $$ D(L)(a,z) = (-1)^{c(L)+1} \, i^{-wr(L)}K(L)(ia,-iz). $$ Here, $c(L)$ denotes the number of components of $L$, $i^2 = -1$, and $wr(L)$ is the writhe of $L$ for some choice of orientation of $L$. The translation formula is independent of the particular choice of orientation for $L$. By the same token, we have the following formula translating the Kauffman polynomial to the Dubrovnik polynomial: $$ K(L)(a,z) = (-1)^{c(L)+1} \, i^{wr(L)} D(L)(-ia, iz).
$$ These formulas are proved by checking them on basic loop values and then using induction via the skein formulas for the two polynomials. This same method of proof shows that the same translation occurs between our generalizations of the Kauffman polynomial $K[Q]$ and the Dubrovnik polynomial $D[T]$. In particular, we have \begin{equation} \label{psitoxi} D[T](L)(a,z,w) = (-1)^{c(L)+1} \, i^{-wr(L)}K[Q](L)(ia,-iz,-iw) \end{equation} and $$ K[Q](L)(a,z,w) = (-1)^{c(L)+1} \, i^{wr(L)}D[T](L)(-ia,iz,iw). $$ We know that $$ D[T](L) = (\frac{z}{w})^{n-1}\sum_{k=1}^n \delta^{k-1}\hat{E_k} \sum_\pi T(\pi L), $$ and $$ T(\pi L)(a,w) = (-1)^{c(\pi L)} \, i^{-wr(\pi L)}Q(\pi L)(ia,-iw). $$ Here it is understood that $wr(\pi L)$ is the sum of the writhes of the parts of the partition of $L$ corresponding to $\pi$. Note that $T(\pi L)(a,w)$ is a product of the Dubrovnik evaluations of the parts of the partition $\pi L$. The term $c(\pi L)$ is equal to the sum $$ c(\pi L) = \sum_{\sigma} (c(\sigma) + 1) = \sum_{\sigma} c(\sigma) + k = c(L) + k $$ where $\sigma$ runs over the parts of the partition $\pi L$. Here $k$ is the number of parts in $\pi L$. Thus $$ D[T](L)(a,z,w) = (\frac{z}{w})^{n-1} \sum_{k=1}^n \delta(a,w)^{k-1}\hat{E_k} \sum_\pi T(\pi L)(a,w) $$ \begin{center} $ = (\frac{z}{w})^{n-1} \sum_{k=1}^n \delta(a,w)^{k-1}\hat{E_k} \sum_\pi (-1)^{c(\pi L)} \, i^{-wr(\pi L)}Q(\pi L)(ia,-iw). $ \end{center} We also know that \begin{center} $ K[Q] (L)(a,z,w) = (-1)^{c(L)+1} \, i^{wr(L)}D[T](L)(-ia,iz,iw), $ \end{center} and so we have $$K[Q] (L)(a,z,w) = $$ $$(-1)^{c(L)+1} \, i^{wr(L)} (\frac{z}{w})^{n-1} \sum_{k=1}^n \delta(-ia,iw)^{k-1}\hat{E_k} \sum_\pi (-1)^{c(\pi L)} \, i^{-wr(\pi L)}Q(\pi L)(a,w).$$ Now we have that \begin{center} $ \delta(-ia,iw) = \left((-ia) - (-ia)^{-1}\right)/(iw) + 1 = -\left( (a+a^{-1})/w - 1\right) = - \gamma(a,w). $ \end{center} Therefore $$K[Q] (L)(a,z,w) =$$ $$ (-1)^{c(L)+1} \, i^{wr(L)} (\frac{z}{w})^{n-1} \sum_{k=1}^n (-1)^{k-1}\gamma(a,w)^{k-1}\hat{E_k} \sum_\pi (-1)^{c(L) + k} \, i^{-wr(\pi L)}Q(\pi L)(a,w).$$ Thus \begin{center} $ K[Q] (L)(a,z,w) = i^{wr(L)} (\frac{z}{w})^{n-1} \sum_{k=1}^n \gamma(a,w)^{k-1}\hat{E_k} \sum_\pi i^{-wr(\pi L)}Q(\pi L)(a,w). $ \end{center} Hence \begin{center} $ K[Q] (L) = i^{wr(L)} (\frac{z}{w})^{n-1} \sum_{k=1}^n \gamma^{k-1}\hat{E_k} \sum_\pi i^{-wr(\pi L)}Q(\pi L). $ \end{center} This completes the proof. \end{proof} Note that the formula (\ref{psi}) can be regarded by itself as a definition of the invariant $K[Q]$, since the right-hand side of the formula is an invariant of regular isotopy, as $Q$ and the writhe are invariant under regular isotopy. Furthermore, Remark~\ref{sublinks} applies also for the invariant $K[Q]$. The same holds for Remark~\ref{fiandh}. So, adapting Proposition~\ref{topequivh}, we have in analogy: \begin{prop}\label{topequivk} The invariants $K[Q]$ and $K[K]$ are topologically equivalent. Specifically, it holds that: \begin{equation} K[Q](L)(z,w,a,E) = \left( \frac{z}{w} \right)^{n-1} K[K](L)(w,a,\widehat{E}). \end{equation} \end{prop} Proposition~\ref{topequivk} implies that the four variables in the original setting of the invariant $K[Q]$ can be reduced to three without any influence on the topological strength of the invariant. Recall also Remark~\ref{depth}, which is valid for $K[Q]$ too.
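\smallbreak Both closed formulas lend themselves to direct computation once the base invariant is available on sublinks. The following minimal sketch (our own illustration in Python with Sympy; it is not part of the formal development above) evaluates the right-hand side of (\ref{xi}) by enumerating the set partitions of the components. The sanity check at the end reproduces the value $E^{1-n}\delta^{n-1}$ on $n$ unlinked unknots and is, in effect, a numerical instance of the Stirling number identity (\ref{dstirling}).
\begin{verbatim}
from sympy import symbols, simplify
from sympy.utilities.iterables import multiset_partitions

z, w, a, E = symbols('z w a E')
delta = (a - 1/a)/w + 1        # loop value of T, rule (T5)
Ehat  = E*z/w                  # E-hat of Theorem thmxi

def Ehat_k(k):
    # (Ehat^(-1) - 1)(Ehat^(-1) - 2)...(Ehat^(-1) - k + 1); empty product = 1
    out = 1
    for j in range(1, k):
        out *= (1/Ehat - j)
    return out

def D_of_T(components, T_eval):
    # right-hand side of formula (xi): sum over all partitions pi of the
    # components into k blocks of delta^(k-1) * Ehat_k * prod T(block)
    n, total = len(components), 0
    for pi in multiset_partitions(list(components)):
        term = delta**(len(pi) - 1) * Ehat_k(len(pi))
        for block in pi:
            term *= T_eval(block)
        total += term
    return simplify((z/w)**(n - 1) * total)

# Sanity check on n unlinked unknots: by (T5), T of a j-component
# unlink is delta^(j-1), and D[T] must reduce to E^(1-n) * delta^(n-1).
n = 4
lhs = D_of_T(range(n), lambda block: delta**(len(block) - 1))
assert simplify(lhs - E**(1 - n) * delta**(n - 1)) == 0
\end{verbatim}
The same loop evaluates (\ref{psi}) if $\delta$ is replaced by $\gamma$, the appropriate writhe powers of $i$ are inserted, and $T$ is replaced by $Q$.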
\begin{rem} \rm As noted in the Introduction, in Theorems~\ref{doft} and~\ref{kofq} the basic invariants $T(w,a)$ and $Q(w,a)$ could be replaced by specializations of the Dubrovnik and the Kauffman polynomial respectively and, then, the invariants $D[T]$ and $K[Q]$ can be regarded as generalizations of these specialized polynomials. For example, if $a=1$ then $Q(w,1)$ is the Brandt--Lickorish--Millett--Ho polynomial and if $w= A+A^{-1}$ and $a= -A^3$ then $Q ( A+A^{-1}, -A^3)$ is the Kauffman bracket polynomial. In both cases the invariant $K[Q]$ generalizes these polynomials. \end{rem} \section{New state sum models}\label{secstatesums} In this section we present state sum models for the generalized regular isotopy invariant $H[R]$ (including $H[H]$) of Theorem~\ref{hofr}. Everything we do in this section can also be constructed for the generalized Dubrovnik and Kauffman polynomials, $D[T]$ and $K[Q]$, in essentially the same way. The definitions for the state sum will be given in Section~\ref{ssummation}, but here we give an outline of the state sum that we call the {\it skein template algorithm} (see \cite{kau5,kau6}). \begin{defn} \rm Let $L$ denote a diagram of an oriented link. The {\it oriented smoothing} of a crossing is the replacement of the crossing by the smoothing that is consistent with the orientations of its two arcs. See Figure~\ref{firstpassage}. {\it Pre-states}, $\widehat{S}$, for $L$ are obtained by successively smoothing or switching mixed crossings (a mixed crossing is a crossing between two components of the link). That is, one begins by choosing a mixed crossing and producing two new diagrams, one obtained by smoothing the crossing and one by switching it, see Figure~\ref{stateskein} top. The smoothing is decorated as in Figure~\ref{firstpassage}, so that there is a dot that discriminates whether the smoothing comes from a positive or a negative crossing. The process of placing the dot is related to walking along the diagram. {\it That walk only allows a smoothing at a mixed crossing that is approached along an undercrossing arc} as shown in Figure~\ref{firstpassage}. After the smoothing is produced, that walk and the dotting are related as shown in Figure~\ref{firstpassage}. The reasons for these conventions will be clarified below, as we explain a process that encodes the skein calculation of the invariants. The switched crossing is circled to indicate that it has been chosen by this skein process, see Figure~\ref{walkpastflat}. Then one chooses another mixed crossing in each of the resulting diagrams and applies the same procedure. New self-crossings can appear after a smoothing. {\it A completed pre-state is obtained when a decorated diagram is reached where all the undecorated crossings are self-crossings.} A {\it state}, $S$, for $L$ is a completed pre-state that is obtained with respect to a {\it template} as we describe below. In a state, we are guaranteed that the resulting link diagram is a topological union of unlinked knot diagrams (a stack). In fact, the skein template process will produce exactly a set of states whose evaluations correspond to the skein evaluation of the invariant $H[R]$. \end{defn} \noindent {\it Sketch of the skein template algorithm.} In the skein template algorithm we produce a specific set of pre-states that we can call states, and show how to compute the link invariant $H[R]$ from these states by adding up evaluations of each state. The key to producing these pre-states is the {\it template}.
A template, $T$, for a link diagram $L$ is an indexed flattened diagram for $L$ (the underlying universe of $L$, a $4$-valent graph obtained from $L$ by ignoring the over and under crossing data in $L$) so that the indices are on the edges of the graph. We assume that the indices are distinct elements of an ordered set (for example, the natural numbers). We use the template to decide the order of processing for the pre-state. As we know (Proposition~\ref{orderxings}), the invariant $H[R]$ itself is independent of this ordering. Take the link diagram $L$ and a template $T$ for $L$. Choose the specialization $R$. Process the diagram $L$ to produce pre-states $\widehat{S}$ generated by the template $T$ by starting at the smallest index and walking along the diagram, smoothing and marking as described below. \begin{enumerate} \item If, when walking, the walker moves along an over-crossing, circle this crossing if it is a mixed crossing. See Figure~\ref{walkpastflat} middle. \item If, when walking, a non-mixed (self-crossing) is encountered, then just continue the walk without making any markings. See Figure~\ref{walkpastflat} bottom. \item If when walking, a mixed under-crossing is encountered, then form two new diagrams, one obtained by smoothing the crossing and marking it with a dot as shown in Figure~\ref{firstpassage}, and the other obtained by switching and circling the crossing as in Figure~\ref{walkpastflat} top. See also Figure~\ref{stateskein}. \item At a smoothing, assign to the smoothing a vertex weight of $+z$ or $-z$ (the weights are indicated in Figure~\ref{stateeval}). \item When you finish a walking cycle in the template $T$, start at the next lowest unused index in the template and continue with the steps above. Continue the process for all the diagrams that are produced, using the original template for the next choice of initial index. \item When a pre-state is finished, there will be no undecorated mixed crossings in the state. All uncircled crossings will be self-crossings and there will also be some marked smoothings. All the smoothings will have non-zero vertex weights ($z$, $-z$, or $1$) and the pre-state becomes a contributing state for the invariant. \item This state is evaluated by taking the product of the vertex weights and the evaluation of the invariant $R$ on the link underlying the state after all the decorations have been removed. The skein template process produces a link from the state that is a stack of knots. We give the details in the next section. \item The (unnormalized) invariant $H[R]$ is the sum over all the evaluations of these states obtained by applying the skein-template algorithm. \end{enumerate} \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=7.7cm]{WalkPastFlat.pdf} \caption{Decorations on walking past a crossing in a pre-state} \label{walkpastflat} \end{center} \end{figure} \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=10cm]{FirstPassage.pdf} \caption{First passage decoration at mixed crossings} \label{firstpassage} \end{center} \end{figure} The skein template algorithm is basically very simple. It is a formalization of the skein calculation process, designed to fix all the choices in this process by the choice of the template $T$. Then the resulting states are exactly the ends of a skein tree for evaluating $H[R]$. Each state, as a link diagram, is a stack of knots, ready to be evaluated by $R$.
The product of the vertex weights for the state multiplied by $R$ evaluated on the state is equal to the contribution of that state to the polynomial. One can consider more general states and pre-states than the ones produced by a given template. Recursively, at each mixed crossing, we could obtain a raw state by just making arbitrary smoothings and decorations. Then we can compare to see if such a state is one that is produced by the skein template algorithm. By recursively we mean that, once a crossing is smoothed and decorated, the resulting diagram will have a new structure of distinct components so that new choices are then available for switching and smoothing. One can produce all such states, but not all of them are admissible under the terms of the skein template algorithm as described above. Thus one can produce all raw states and then select the subset of them that is admitted by the skein template algorithm for a given choice of template $T$. We will show these processes in more detail in the next section. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=9.5cm]{StateSkein.pdf} \caption{Decorated state production by the skein template algorithm} \label{stateskein} \end{center} \end{figure} \subsection{The skein template algorithm} We now detail the skein template algorithm. Consider a link diagram $L$ (view Figures~\ref{hopfstate} and ~\ref{skeinwhite}). Label each edge of the projected flat diagram of $L$ from an ordered index set $I$ so that each edge receives a distinct label. We have called this labeled graph the {\it template} $T(L)$. We have defined a {\it pre-state} $\widehat{S}$ of $L$ by either smoothing or flattening each crossing in $L$ according to walks on the template, starting with the smallest index in the labeling of $T$. We now go through the skein template algorithm, referring to specific examples. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=6cm]{StateProduction.pdf} \caption{State production for the Hopf link} \label{hopfstate} \end{center} \end{figure} \begin{enumerate} \item Begin walking along the link $L$, starting at the least available index from $T(L)$. See Figures~\ref{hopfstate} and ~\ref{skeinwhite}. \item When meeting a mixed crossing via an under-crossing arc, produce two new diagrams (see Figure~\ref{stateskein} top), one by switching the crossing and circling it (Figure~\ref{walkpastflat}) and one by smoothing the crossing and labeling it (Figure~\ref{firstpassage}). \item When traveling through a smoothing, label it by a {\it dot} and a {\it connector} indicating the {\it place of first passage} as shown in Figure~\ref{firstpassage} and exemplified in Figures~\ref{hopfstate} and~\ref{skeinwhite}. We clarify these steps with two examples, the Hopf link and the Whitehead link. See Figure~\ref{hopfstate} and Figure~\ref{skeinwhite}. In these figures, for Step 1 we start at the edge with index $1$ and meet a mixed crossing at its under-arc, switching it for one diagram and smoothing it for another. We walk past the smoothing, placing a dot and a connector. \item When meeting a mixed over-crossing, circle the crossing (Figure~\ref{walkpastflat} middle) to indicate that it has been processed and continue the walk. \item When meeting a self-crossing, leave it unmarked (Figure~\ref{walkpastflat} bottom) and continue the walk. \item When a closed path has been traversed in the template, choose the next lowest unused template index and start a new walk.
Follow the previous instructions for this walk, only labeling smoothings or circling crossings that have not already been so marked. \item When all paths have been traversed, and the pre-state has no remaining un-processed mixed crossing, the pre-state $\widehat{S}$ is now a {\it state} $S$ for $L$. When we have a state $S$, it is not hard to see that it consists in an unlinked collection of components in the form of stacks of knots as we have previously described in this paper. \end{enumerate} Returning to our example, we have the diagram shown in Figure~\ref{hopfstate}. In this diagram $S$ is a completed state for the initial link $L$. Note that in forming $S$ we start at $1$ in the template and first encounter a mixed under-crossing. This is smoothed to produce the pre-state $\hat{S}$, and the walk continues to encounter a self-crossing that is left alone. The result is the state $S$. In the other branch, the first encounter from $1$ meets the under-crossing, which we switch and circle, and we continue that walk. The next crossing is an over-crossing that is mixed. We circle this crossing and produce the state $S'$. The two states $S$ and $S'$ are a complete set of states produced by the skein template algorithm for the Hopf link $L$ with this template $T$. \subsection{The State summation} \label{ssummation} We are now in a position to define the state sum. \begin{defn} \rm Let $S(L)$ denote the collection of states defined by the skein template algorithm for a link diagram $L$ with template $T$. Given a state $S$, we shall define an {\it evaluation} of $S$ relative to $L$ and the invariant $R$, denoted by $<L|S>$. The {\it state sum} is then defined by \begin{equation} \label{statesum} Z[R](L) = \sum_{S \in S(L)} <L|S>. \end{equation} We will show that $Z[R](L) = H[R](L)$, the regular isotopy invariant that we have defined in earlier sections of the paper. Thus \begin{equation} \label{normstatesum} P[G](L) = a^{-wr(L)} \sum_{S \in S(L)} <L|S> \end{equation} gives the normalized invariant of ambient isotopy. The {\it sites} of the state $S$ consist in the decorated smoothings and the decorated crossings indicated in Figure~\ref{stateskein}. Each state evaluation $<L|S>$ consists of two parts. We shall write it in the form \begin{equation} \label{sevaluation} <L|S> = [L|S][R|S]. \end{equation} The first part $[L|S]$ depends only on $L$ and the state $S$. The second part $[R|S]$ uses the chosen knot invariant $R$. We define $[L|S]$ as a product over the sites of $S$: \begin{equation} \label{firstpart} [L|S] = \prod_{\sigma \in sites(S)}[L|\sigma] \end{equation} where $[L|\sigma]$ is defined by the equations in Figure~\ref{stateeval}, comparing a crossing in $L$ with the corresponding site $\sigma$. This means that if a smoothed site has a dot along its lower edge (when oriented from left to right), then its vertex weight is $+z$ and if it has a dot along its upper edge, then it has a vertex weight of $-z$. Circled crossings have vertex weights of $1$. In Figure~\ref{stateeval} we have indicated the possibility of vertex weights of $0$, but these will never occur in the states produced by the skein template algorithm. If we were to sum over a larger set of states, then some of them would be eliminated by this rule. The reader should note that the choice of $+z$ or $-z$ is directly in accord with the rules for the skein relation from a positive crossing or a negative crossing, respectively.
We define $[R|S]$ as a weighted product of the $R$-evaluations of the components of the state $S$: \begin{equation} \label{secondpart} [R|S] = [\prod_{i=1}^{k} \rho(K_{i})]E^{1-k} \end{equation} where $E$ is defined previously and \begin{center} $ \rho(K) = a^{wr(K)}R(K). $ \end{center} Here $\{ K_{1},\ldots, K_{k} \}$ is the set of component knots of the state $S$. Recall that each state $S$ is a stacked union of single unlinked component knots $K_{i}$, $i =1,\ldots, k$, with $k$ depending on the state. In computing $\rho(K_{i})$ we ignore the state decorations and remove the circles from the crossings. With this, we have completed the definition of the state sum. \end{defn} \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=7.1cm]{StateEvaluation.pdf} \caption{State evaluation relative to the diagram $L$} \label{stateeval} \end{center} \end{figure} Note that, by (\ref{statesum}) and (\ref{sevaluation}) we assert that \begin{equation} \label{zevaluation} Z[R](L) = \sum_{S \in S(L)} [L|S][R|S]. \end{equation} \begin{rem} \rm If the invariant $R$ is itself generated by a state summation, then we obtain a {\it hybrid state sum} for $Z[R](L)$ consisting in the concatenations (in order) of these two structures. We expand on this idea in Section~\ref{secdoublesums}. \end{rem} \subsection{Connection of the state sum with skein calculation} We now illustrate how to use this state sum. Before doing calculations it is important to understand how these states are related to the familiar skein calculation process. We will show that the sum over states corresponds exactly with the results of making a skein calculation that is guided by the template in the skein template algorithm. Thus the template that we have already described works in these two related contexts. In this way we will show that the state summation gives a formula for the invariant $H[R](L)$. We begin with an illustration for a single abstract crossing as shown in Figure~\ref{walkpastflat}. We shall refer to the skein calculation guided by the template as the {\it skein algorithm}. In this figure the walker in the skein algorithm (using the template) approaches along the under-crossing line. If the crossing that is met is a self-crossing of the given diagram, then the walker just continues and the crossing is left unmarked. If the crossing that is met is a mixed crossing of the given diagram, then two new diagrams are produced. In the first case we produce a smoothing with the labelling that indicates a passage along the edge met from the undercrossing arc. In the second case the walker switches the crossing and continues in the same direction as shown in the figure. This creates a bifurcation in the skein tree. Each resulting branch of the skein tree is treated recursively in this way, but first the walker continues on these given branches until it meets an undercrossing of two different components. Using the Homflypt regular isotopy skein relation (recall Theorem~\ref{hofr}, rule (1)) we can write an expansion symbolically as shown in Figure~\ref{stateskein}. Here it is understood that in expanding a crossing, \begin{enumerate} \item its two arcs lie on separate components of the given diagram, \item the walker for the skein process {\it always} switches a mixed crossing that it approaches as an under-crossing, and {\it never} switches a crossing that it approaches as an over-crossing, \item in expanding the crossing, the walker is shifted along according to the illustrations in Figure~\ref{stateskein}.
\end{enumerate} Thus, for different components, we have the expansion equation shown in Figure~\ref{stateskein}. Here, the template takes on the role of letting us make a skein tree of exactly those states that contribute to the state sum for $Z[R](L)$. Indeed, examine Figure~\ref{stateeval}. The zero-weights correspond to inadmissible states while the $z$ and $-z$ weights correspond to admissible states where the walker approached at an under-crossing; the one-weights correspond to any circled crossing. Thus, we can use the skein algorithm to produce exactly those states that have a non-zero contribution to the state sum. By using the skein template algorithm and the skein formulas for expansion, we produce a skein tree where the states at the ends of the tree (the original link is the root of the tree) are exactly the states $S$ that give non-zero weights for $[L|S]$. Thus, by (\ref{statesum}) we obtain: \begin{equation} \label{stateskeintree} Z[R](L) = \sum_{S \in Ends(Skein Tree)} <L|S>. \end{equation} Since we have shown that the state sum is identical with the skein algorithm for computing $H[R](L)$, for any link $L$, this shows that $Z[R](L) = H[R](L)$, as promised. Thus, we have proved: \begin{thm} \label{zequalh} The state sum we have defined as $Z[R](L)$ is identical with the skein evaluation of the invariant $H[R](L)$ described and proved to be invariant earlier in this paper. We conclude that $Z[R](L) = H[R](L)$, and thus that the skein template algorithm provides a state summation model for the invariant $H[R](L)$. \end{thm} \begin{proof} The state sum $Z[R](L) = \sum_{S \in S(L)} <L|S>$ where $S(L)$ denotes all the states produced by the skein template algorithm, for a choice of template $T$. $Z[R](L)$ is equal to the sum of evaluations of those states that are produced by the skein algorithm. That is, we have the identity $$ Z[R](L) = \sum_{S \in S(L)} <L|S> = \sum_{S \in Ends(Skein Tree)} <L|S> = H[R](L). $$ The latter part of this formula follows because the skein template algorithm is a description of a particular skein calculation process for $H[R](L)$ that is faithful to the rules and weights for $H[R](L)$. We have also proved that $H[R](L)$ is invariant and independent of the skein process that produces it. Thus we conclude that $Z[R](L) = H[R](L) $, and thus that the skein template algorithm provides a state summation model for the invariant $H[R](L)$. \end{proof} \begin{rem} \rm Note that it follows from the proof of Theorem~\ref{zequalh} that the calculation of $Z[R](L) = H[R](L)$ is independent of the choice of the template for the skein template algorithm. \end{rem} \begin{exmp} \rm In the example shown in Figure~\ref{skeinwhite} we apply the skein template algorithm to the Whitehead link $L$. The skein-tree shows that for the given template $T$ there are three contributing states $S_1, S_2, S_3$. $S_1$ is a knot $K$. $S_2$ is a stacked unlink of two unknotted components. $S_3$ is an unknot. Thus, referring to Figure~\ref{skeinwhite1} and using (\ref{normstatesum}) we find the calculation shown below.
\begin{center} $ Z[R](L) = z[R|S_{1}] + [R|S_{2}] -z[R|S_{3}] $ \end{center} \begin{center} $ = zR(K) + a^{-2}(\eta /E) - z a^{-3}, $ \end{center} where $\eta = (a - a^{-1})/w$ is defined in Rule (5) after Theorem~\ref{hofr} and $K = S_{1}$. \end{exmp} \begin{rem} \rm In the example above we see that any choice of specialization for the invariant $R$ that can distinguish the trivial knot from the trefoil knot $K$ will suffice for our invariant to distinguish the Whitehead link from the trivial link, for which $Z[R](\bigcirc \bigcirc) = \eta/E$. \end{rem} \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=7cm]{Whitehead.pdf} \caption{Skein template algorithm applied to the Whitehead link} \label{skeinwhite} \end{center} \end{figure} \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=8cm]{Whitehead1.pdf} \caption{States for the Whitehead link} \label{skeinwhite1} \end{center} \end{figure} \section{Double state summations}\label{secdoublesums} In this section we consider state summations for our invariant where the invariant $R$ has a state summation expansion. The invariant $R$ has a variable $w$ and a framing variable $a$. By choosing these variables in particular ways, we can adjust $R$ to be the usual regular isotopy Homflypt polynomial or specializations of the Homflypt polynomial such as a version of the Kauffman bracket polynomial, or the Alexander polynomial, or other invariants. We shall refer to these choices as {\it specializations of} $R$. A given specialization of $R$ may have its own form of state summation. This can be combined with the skein template algorithm that produces states to be evaluated by $R$. The result is a double state summation. \smallbreak As in the previous section we have the global state summation (\ref{zevaluation}): $$ Z[R](L) = \sum_{S \in S(L)} [L|S][R|S] $$ where $[R|S]$ denotes the evaluation of the invariant $R$ on the union of unlinked knots that is the underlying topological structure of the state $S$. It is possible that the specialization we are using has itself a state summation that is of interest. In this case we would have a secondary state summation formula of the type \begin{equation} \label{rons} [R|S] = \sum_{\sigma}[S|\sigma]. \end{equation} Then, we would have a double state summation for the entire invariant in the schematic form: \begin{equation} \label{zrsummation} Z[R](L) = \sum_{S \in S(L), \sigma \in Rstates(S)} [L|S][S|\sigma], \end{equation} where $Rstates(S)$ denotes the secondary states for $R$ of the union of unlinked knots that underlies the state $S$. \begin{exmp} \rm Since we use the skein template algorithm to produce the first collection of states $S \in S(L)$, this double state summation has a precedence ordering with these states produced first; then each $S$ is viewed as a stack of knots and the second state summation is applied. In this section we will discuss some examples of state summations for $R$ and then give examples of using the double state summation. \smallbreak We begin with a state summation for the bracket polynomial that is adapted to our situation. View Figure~\ref{orbracket}. At the top of the figure we show the standard oriented expansion of the bracket. If the reader is familiar with the usual unoriented expansion \cite{kau5}, then this oriented expansion can be read by forgetting the orientations. The oriented states in this state summation contain smoothings of the type illustrated in the far right hand terms of the two formulas at the top of the figure.
We call these {\it disoriented smoothings} since two arrowheads point to each other at these sites. Then by multiplying the two equations by $A$ and by $A^{-1}$ respectively, we obtain a difference formula of the type \begin{center} $ A<K_{+}> - A^{-1} <K_{-}> = (A^{2} - A^{-2})<K_{0}> $ \end{center} where $K_{+}$ denotes the local appearance of a positive crossing, $K_{-}$ denotes the local appearance of a negative crossing and $K_{0}$ denotes the local appearance of the standard oriented smoothing. The difference equation eliminates the disoriented terms. It then follows easily from this difference equation that if we define a {\it curly bracket} by the equation \begin{center} $ \{K\} = A^{wr(K)} <K> $ \end{center} where $wr(K)$ is the diagram writhe (the sum of the signs of the crossings of $K$), then we have a Homflypt type relation for $\{K\}$ as follows: \begin{equation} \label{curlybracket} \{K_{+}\} - \{K_{-}\} = (A^{2} - A^{-2})\{K_{0}\}. \end{equation} This means that we can regard $\{K\}$ as a specialization of the Homflypt polynomial and so we can use it as the invariant $R$ in our double state summation. The state summation for $\{K\}$ is essentially the same as that for the bracket, as we now detail. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=8cm]{OrientedBracket.pdf} \caption{Oriented bracket with Homflypt skein relation} \label{orbracket} \end{center} \end{figure} From Figure~\ref{orbracket} it is not difficult to see that \begin{equation} \label{curlyplus} \{K_{+}\} = A^2 \{K_{0}\} + \{K_{\infty}\} \end{equation} and \begin{equation} \label{curlyminus} \{K_{-}\} = A^{-2} \{K_{0}\} + \{K_{\infty}\}. \end{equation} Here $K_{\infty}$ denotes the disoriented smoothing shown in the figure. These formulas then define the state summation for the curly bracket. The reader should note that the difference of these two expansion equations (\ref{curlyplus}) and (\ref{curlyminus}) is the difference formula (\ref{curlybracket}) for the curly bracket in Homflypt form. The corresponding state summation \cite{kau6} for these equations is $$ \{ K \} = \sum_{\sigma} A^{ 2s_{+}(\sigma) - 2s_{-}(\sigma) } (-A^{2} - A^{-2})^{ || \sigma ||-1}, $$ where $\sigma$ runs over all choices of oriented and disoriented smoothings of the crossings of the diagram $K$. Here $s_{+}(\sigma)$ denotes the number of oriented smoothings of positive crossings and $s_{-}(\sigma)$ denotes the number of oriented smoothings of negative crossings in the state $\sigma$. Further, $|| \sigma ||$ denotes the number of loops in the state $\sigma$. With this state sum model in place we can proceed to write a double state sum for the bracket polynomial specialization of our invariant. The formalism of this invariant follows the schematic form (\ref{zrsummation}): \begin{equation}\label{zcurlybracket} Z[\{ \, \}](L) = \sum_{S \in S(L)} [L|S]\{S\} = \sum_{S \in S(L)}\sum_{\sigma \in smoothings(S)} [L|S] A^{ 2s_{+}(\sigma) - 2s_{-}(\sigma) } (-A^{2} - A^{-2})^{ || \sigma ||-1}. \end{equation} Here we see the texture of the double state summation. The skein template algorithm produces from the oriented link $L$ the stacks of knots $K$. Each such stack has a collection of smoothing states, and for each such smoothing state we have the term in the curly bracket expansion formula multiplying a corresponding term from the skein template expansion. \end{exmp} There are many other examples of specific double state summations for other choices of the specialization of the Homflypt polynomial.
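\smallbreak Before turning to them, we illustrate the secondary summation concretely. The following sketch (our own illustration in Python with Sympy; the diagram encoding and smoothing conventions are fixed ad hoc for this example and are not taken from the text above) computes the unoriented bracket $<K>$ by enumerating all $2^n$ smoothing states of a diagram and counting loops with a union-find on arc labels; the curly bracket is then obtained from the normalization $\{K\} = A^{wr(K)} <K>$ above, which is equivalent to the oriented expansion. Each crossing is recorded as a $4$-tuple of the incident arc labels in cyclic order, and each of the two smoothings glues two adjacent pairs of arc ends.
\begin{verbatim}
from itertools import product
from sympy import symbols, expand

A = symbols('A')
d = -A**2 - A**(-2)            # loop value of the bracket

def loops(crossings, choice):
    # count the loops of a smoothing state: union-find on arc labels,
    # where each smoothed crossing (p,q,r,s) glues two pairs of arcs
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (p, q, r, s), c in zip(crossings, choice):
        for x, y in ([(p, q), (r, s)] if c == 'A' else [(p, s), (q, r)]):
            parent[find(x)] = find(y)
    return len({find(x) for x in parent})

def bracket(crossings):
    # <K> = sum over states of A^(#A - #B) * d^(loops - 1)
    total = 0
    for choice in product('AB', repeat=len(crossings)):
        exponent = choice.count('A') - choice.count('B')
        total += A**exponent * d**(loops(crossings, choice) - 1)
    return expand(total)

# Hopf link: arcs 1,2 on one component, arcs 3,4 on the other
hopf = [(1, 3, 2, 4), (2, 4, 1, 3)]
print(bracket(hopf))                   # -A**4 - A**(-4)
wr = 2                                 # writhe of a positive Hopf diagram
print(expand(A**wr * bracket(hopf)))   # curly bracket: -A**6 - A**(-2)
\end{verbatim}
In the double summation (\ref{zcurlybracket}) such a routine would be invoked once for every knot in each stack produced by the skein template algorithm.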
\begin{exmp} \rm For example, we can use the specialized Homflypt state summation based on a solution to the Yang-Baxter equation as explained in \cite{kau5,kau6,jo3}. \end{exmp} \begin{exmp} \rm We could also take the specialization to be the Alexander--Conway polynomial and use the Formal Knot Theory state summation as explained in \cite{kau1}. \end{exmp} All these different cases deserve more exploration, particularly for computing examples of these new invariants. \begin{rem} \rm The skein template algorithm as well as the double state summation generalizes to the Dubrovnik and Kauffman polynomials, and so applies to our generalizations of them, $D[T]$ and $K[Q]$, as well. We will take up this computational and combinatorial subject in a sequel to the present paper. \end{rem} \begin{rem} \rm Consider the combinatorial formula (\ref{hr}). This formula can itself be regarded as a state summation, where the states are the partitions $\pi$ and the state evaluations are given by the formula and the evaluations of the regular isotopy Homflypt polynomial $R$ on $\pi L$. If we choose a state summation for $R$ or a specialization of $R$, then this formula becomes a double state summation in the same sense as we discussed above, but without using the skein template algorithm. These double state sums deserve further investigation both for $H[R]$ and also for the counterparts (\ref{xi}) and (\ref{psi}) for the generalizations $D[T]$ and $K[Q]$ of the Dubrovnik and the Kauffman polynomials. \end{rem} \section{Statistical mechanics and double state summations}\label{statmech} In statistical mechanics, one considers the {\it partition function} for a physical system \cite{Baxter}. The partition function $Z_{G}(T)$ is a summation over the states $\sigma$ of the system $G$: $$ Z_{G}= \sum_{\sigma} e^{\frac{-1}{kT}E(\sigma)} $$ where $T$ is the temperature and $k$ is Boltzmann's constant. Combinatorial models for simplified systems have been studied intensively since Onsager \cite{Onsager} showed that the partition function for the Ising model, in the limit of large planar lattices, exhibits a phase transition. Onsager's work showed that very simple physical models, such as the Ising model, can exhibit phase transitions, and this led to the deep research subject of exactly solvable statistical mechanics models \cite{Baxter}. The {\it $q$-state Potts model} \cite{Baxter,kau3} is an important generalization of the Ising model that is based on $q$ local spins at each site in a graph $G$. For the Potts model, a state of the graph $G$ is an assignment of spins from $\{1,\ldots,q\}$ to each of the nodes of the graph $G$. If $\sigma$ is such a state and $i$ denotes the $i$-th node of the graph $G$, then we let $\sigma_{i}$ denote the spin assignment to this node. Then the energy of the state $\sigma$ is given by the formula $$ E(\sigma) = \sum_{\langle i,j \rangle} \delta(\sigma_{i}, \sigma_{j}) $$ where $\langle i,j \rangle$ denotes an edge in the graph between nodes $i$ and $j$, and $\delta(x,y)$ is equal to $1$ when $x=y$ and equal to $0$ otherwise. Temperley and Lieb \cite{TL} proved that the partition function for the Potts model can be calculated using a contraction-deletion algorithm, and so showed that $Z_{G}$ is a special version of the dichromatic or Tutte polynomial in graph theory.
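\smallbreak For a small graph the Potts partition function can be evaluated directly from these definitions by brute force, as in the following sketch (our own illustration; we write {\tt K} for the combination $1/kT$). It sums $e^{-E(\sigma)/kT}$ over all $q^{|V|}$ spin assignments of the nodes.
\begin{verbatim}
from itertools import product
from math import exp

def potts_Z(num_nodes, edges, q, K):
    # Z_G = sum over spin states sigma of exp(-K * E(sigma)), where
    # E(sigma) = sum over edges <i,j> of delta(sigma_i, sigma_j)
    Z = 0.0
    for sigma in product(range(q), repeat=num_nodes):
        energy = sum(1 for i, j in edges if sigma[i] == sigma[j])
        Z += exp(-K * energy)
    return Z

# 3-state Potts model on a triangle graph
print(potts_Z(num_nodes=3, edges=[(0, 1), (1, 2), (0, 2)], q=3, K=1.0))
\end{verbatim}
Such a direct summation is exponential in the number of nodes, which is one motivation for the contraction-deletion and medial-diagram reformulations recalled here.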
The Potts partition function, in turn, is directly related to the bracket polynomial state sum, and so by generalizing the variables in the bracket state sum and translating the planar graph $G$ into a knot diagram by a {\it medial construction} (associating a planar graph to a link diagram via a checkerboard coloring of the diagram so that each shaded region in the checkerboard corresponds to a graphical node and each crossing between shaded regions corresponds to an edge), one obtains an expression for the Potts model as a bracket summation with new parameters \cite{kau3}. We wish to discuss the possible statistical mechanical interpretation of our generalized bracket state summation $Z[\{ \, \}]$\, (see Eq.~\ref{zcurlybracket}). In order to do this we shall extend the variables of our state sum so that the bracket calculation (for the stacks of knots $S$ that correspond to skein template states) is sufficiently general to support (generalized) Potts models associated with these knots. Accordingly, we add variables to the bracket expansion so that \begin{center} $ \{K_{+}\} = x \{K_{0}\} + y \{K_{\infty}\}, $ \end{center} \begin{center} $ \{K_{-}\} = x' \{K_{0}\} + y' \{K_{\infty}\} $ \end{center} and the loop value is taken to be $D$ rather than $-A^{2} - A^{-2}$. \smallbreak \begin{figure}[H] \begin{center} \includegraphics[width=9.8cm]{StateSkeinSimple.pdf} \caption{Raw state production for skein template algorithm} \label{stateskeinsimple} \end{center} \end{figure} For a given knot in the stack $S$, the state sum remains well-defined and it can now be specialized to compute a generalized Potts model for a plane graph via a medial graph translation. Letting $R(K)= \{K\}$ denote this bracket state sum, we can then form a generalized version of $Z[R]$ by using the expansion in Figure~\ref{stateskeinsimple} where {\it we use the raw states} of this figure, and we do not filter them by the skein template algorithm, but simply ask that each final state is a union of unlinked knots. The result will then be a combinatorially well-defined double-tier state sum. It is this state sum $Z[R]$ that can be examined in the light of ideas and techniques in statistical mechanics. The first tier expansion is highly non-local, and just pays attention to dividing up the diagrams so that the first tier of states are each collections of unlinked knots. Then each knot can be regarded as a localized physical system and evaluated with the analogue of a Potts model. This is the logical structure of our double state summation, and it is an open question whether it has a significant physical interpretation. \section{Discussing mathematical directions}\label{directions} In this paper we first gave a direct skein-theoretic proof of the existence and uniqueness of the generalization of the regular isotopy version of the Homflypt polynomial and its ambient isotopy counterpart including the new invariant $\Theta (q,\lambda,E)$ \cite{chjukala}. We then generalized the Dubrovnik and the Kauffman polynomials to new skein link invariants and provided closed defining combinatorial formulae for all these new invariants. We finally proceeded with constructing new state summations and double summations. We shall now discuss some possible research directions emanating from our results. \begin{enumerate} \item Computer calculations of the new skein link invariants need to be done in order to check their topological strength.
\item Just as the invariant $\Theta(q,\lambda,E) = P[P]$ is related to the Yokonuma--Hecke algebra and to the algebra of braids and ties \cite{AJ1}, it would be interesting to see the invariants $D[D]$ and $K[K]$, or rather $Y[Y]$ and $F[F]$ defined in (\ref{yzfromdt}) and (\ref{fsfromkq}), related to some knot algebras, such as the framization of the BMW algebra proposed in \cite{jula4,jula5}, see also \cite{AJ3}. \item The categorification of the new skein invariants is another interesting problem, and for the invariant $\theta(q,E)$, which generalizes the Jones polynomial and is a specialization of $\Theta(q,z,E)$ \cite{goula2}, it is the object of research of the second author with Chlouveraki, Goundaroulis and Kontogeorgis. \item We expect that all the work in the present paper can be straightforwardly generalized to invariants of tied links, using the methods of Aicardi and Juyumaya \cite{AJ2}. This will be investigated in a subsequent paper. \end{enumerate} \section{Discussing applications}\label{applications} We contemplate how these new ideas can be applied to physical situations. We present these indications of possible applications here with the full intent to pursue them in subsequent publications. \begin{enumerate} \item Reconnection (in vortices). In a knotted vortex in a fluid or plasma (for example in solar flares) \cite{Irv} one has a cascade of changes in the vortex topology as strands of the vortex undergo reconnection. The process goes on until the vortex has degenerated into a disjoint union of unknotted simpler vortices. This cascade or hierarchy of interactions is reminiscent of the way the skein template algorithm proceeds to produce unlinks. Studying reconnection in vortices may be facilitated by making a statistical mechanics summation related to the cascade. Such a summation will be analogous to the state summations we have described here. \item In DNA, strand switching using topoisomerases of types I and II is vital for the structure of DNA recombination and DNA replication \cite{Sumners}. The mixed interaction of topological change and physical evolution of the molecules in vitro may benefit from a mixed state summation that averages quantities respecting the hierarchy of interactions. \item Remarkably, the process of separation and evaluation that we have described here is analogous to proposed processing of Kinetoplast DNA \cite{Kinetoplast} where there are huge links of DNA circles and these must undergo processes that both unlink them from one another and produce new copies for each circle of DNA. The double-tiered structure of DNA replication for the Kinetoplast appears to be related to the mathematical patterns of our double state summations for chainmail DNA. If the reader examines the Wiki on Kinetoplast DNA, she will note that Topoisomerase II figures crucially in the self-replication \cite{Kineto}. \item We wondered whether we could have physical situations that would have the kind of mixture that is implicit in this state summation, where the initial skein template state sum yields a sum over $R$-evaluations, and $R$ may itself have a state summation structure. One possible example in the physical world is a normal statistical mechanical situation, where one can have multiple types of materials, all present together, each having different energetic properties. This can lead to a mixed partition function, possibly not quite ordered in the fashion of our algorithm.
This would involve a physical hierarchy of interactions so that there would be a double (or multiple) tier resulting from that hierarchy. \item Mixed state models can occur in physical situations when we work with systems of systems. There are many examples of this multiple-tier situation in physical and biological systems. We look for situations where a double state sum would yield new information. For example, in a quantum Hall system \cite{Haldane}, the state of the system is in its quasi-particles, but each quasi-particle is itself a vortex of electrons related to a magnetic field line. So the quasi-particles are themselves localized physical systems. Some of this is summarized in the Laughlin wave function for the quantum Hall effect \cite{Haldane}. Not a simple situation, but a very significant one. There should be other important examples. \end{enumerate}
\section{Introduction} The angular distribution of the gamma ray burst population has been shown to be highly isotropic (\cite{mea92}; \cite{bea96}). This suggests that the bursts are either located in an extended galactic halo (e.g., \cite{p91}) or that they are cosmological in origin (e.g., \cite{p86}). Recent measurements of time dilation of burst durations (\cite{nea94}, 1995; \cite{wp94}; however, see \cite{mea96}), of pulse durations (\cite{nea96a}), and of interpulse durations (\cite{d95}; \cite{nea96b}) in the BATSE data, as well as measurements of peak energy shifting (\cite{mea95}), favor the latter explanation. Models, both galactic and cosmological, are typically fitted to the differential peak flux distribution of BATSE's long duration ($T_{90} >$ 2 s) bursts. Furthermore, this distribution is typically truncated at a peak flux of 1 ph cm$^{-2}$ s$^{-1}$ to avoid threshold effects. Here, we fit two models, one with a standard candle luminosity and one with a power law luminosity distribution, to not only BATSE's 3B differential distribution, but also to the pulse duration time dilation factors (corrected for energy stretching and similar effects) of Norris et al. (1996a), the interpulse duration time dilation factors of Norris et al. (1996b), and the peak energy shifting factors of Mallozzi et al. (1995). These three independent sets of measurements are shown to be self-consistent in \S4. (All three are for long duration bursts only.) Furthermore, via the analysis of Petrosian \& Lee (1996a), BATSE's differential distribution is extended down to a peak flux of 0.316 ph cm$^{-2}$ s$^{-1}$, which corresponds to a trigger efficiency of $\sim \frac{1}{2}$ on BATSE's 1024 ms timescale. Together, the differential distribution and the time dilation and energy shifting factors place strong bounds on the evolution of the burst population. These bounds favor moderate evolution and are incompatible with homogeneity, assuming only minimal luminosity evolution. This result is compatible with the analyses of Fenimore \& Bloom (1995), Nemiroff et al. (1996), and Horack, Mallozzi, \& Koshut (1996). Furthermore, under these conditions of moderate evolution, the 90\% width of the {\it observed} luminosity distribution is shown to be less constrained than others have demonstrated it to be assuming no evolution (see \S5). Finally, redshift considerations indicate that if the redshifts of BATSE's faintest bursts are to be compatible with that which is currently associated with the formation of the earliest galaxies, the mean luminosity of the bursts should be $\sim$ 10$^{57}$ ph s$^{-1}$ or lower. \section{Cosmological Models} Both the standard candle luminosity model and the power law luminosity distribution model assume a power law redshift distribution, given by \begin{equation} n(z) = n_0(1+z)^{D}, \end{equation} where $n(z)$ is the number density of bursts of redshift $z$. This distribution is bounded by 0 $< z < z_M$, where $z_{M}$ is the maximum burst redshift. The luminosity distributions of the two models are given by \begin{equation} \phi(L) = \cases{\phi_0\delta(L - L_0) & (standard candle) \cr \phi_0L^{-\beta} & (power law)}. \end{equation} The standard candle is of luminosity $L_0$ and the power law luminosity is bounded by minimum and maximum luminosities $L_m < L < L_M$. 
All luminosities are peak photon number luminosities and all fluxes are peak photon number fluxes (measured over BATSE's 50 - 300 keV triggering range); however, see recent papers by Bloom, Fenimore, \& in 't Zand (1996) and Petrosian \& Lee (1996b) which introduce the fluence measure. \subsection{Integral Distribution} Assuming a power law spectrum and an Einstein-de Sitter cosmology, the bursts' integral distribution, i.e. the number of bursts with peak fluxes greater than an arbitrary value $F$, is given for either model by (\cite{mm95}) \begin{equation} N(>F) = \frac{32\pi n_0c^3}{H_0^3}\int_{L_m}^{L_M}\phi(L)dL\int_0^{\chi_0}(1-\chi)^{8-2D}\chi^2d\chi, \label{3a} \end{equation} where \begin{equation} \chi_0 = min(\chi_1,\chi_2), \end{equation} \begin{equation} \chi_1 =\frac{1}{1+{\frac{4c}{H_0}}\left({\frac{\pi F}{L}}\right)^{\frac{1}{2}}}, \label{3c} \end{equation} and \begin{equation} \chi_2 = 1 - \frac{1}{(1+z_M)^{\frac{1}{2}}}. \label{3d} \end{equation} A photon number spectral index of -1 (or a power-per-decade spectral index of 1) has been assumed. This value is typical of burst spectra, especially at those frequencies at which most of the photons are received (e.g., Band et al. (1993)). In the case of the standard candle model, eq. \ref{3a} becomes \begin{equation} N(>F) \propto \int_0^{\chi_0}(1-\chi)^{8-2D}\chi^2d\chi, \label{9a} \end{equation} where $L = L_0$ in eq. \ref{3c}. The factor of proportionality has been dropped because only normalized integral distributions (see \S 3.1) and ratios of integral distributions (see \S 2.2) are fit to. Eq. \ref{9a} has the analytic solution \begin{equation} N(>F) \propto f(\chi_0,8-2D), \label{10a} \end{equation} where \begin{equation} f(\chi,q)= \frac{2(1 - (1-\chi)^{3+q})}{(1+q)(2+q)(3+q)} -\frac{2\chi(1 - \chi)^{2+q}}{(1+q)(2+q)}-\frac{\chi^2(1-\chi)^{1+q}}{1+q}. \end{equation} In the case of the power law model, eq. \ref{3a} becomes \begin{equation} N(>F) \propto \int_1^K x^{-\beta}dx\int_0^{\chi_0}(1-\chi)^{8-2D}\chi^2d\chi, \label{14a} \end{equation} where \begin{equation} K = \frac{L_M}{L_m} \end{equation} and $L = xL_m$ in eq. \ref{3c}. Eq. \ref{14a} has the integral solution \begin{equation} N(>F) \propto \int_1^Kf(\chi_0,8-2D)x^{-\beta}dx. \label{18} \end{equation} \subsection{Time Dilation and Energy Shifting Factors} In an idealized scenario of two identical bursts at different redshifts, $z_1$ and $z_2$, their time dilation and energy shifting factors, $\tau_{12}$ and $\epsilon_{12}$, are both simply equal to the ratios of their scale factors (neglecting the effects of energy stretching which are inherent in pulse duration measurements (\cite{fb95})): \begin{equation} \tau_{12} = \epsilon_{12}^{-1} = \frac{1 + z_1}{1+z_2}. \end{equation} In practice, however, measures of the scale factor are averaged over peak flux ranges and time dilation and energy shifting factors are determined for pairs of these ranges. M\'esz\'aros \& M\'esz\'aros (1996) demonstrated that such mean values of the scale factor, averaged over a peak flux range $F_l < F < F_u$, are simple functions of the integral distribution, as modeled by eqs. \ref{10a} and \ref{18}: \begin{equation} \overline{(1+z)}(F_l,F_u) = \frac{N_{D+1}(F_l,F_u)}{N_D(F_l,F_u)}, \end{equation} where \begin{equation} N(F_l,F_u) = N(>F_u) - N(>F_l). 
\label{diff} \end{equation} Consequently, time dilation and energy shifting factors between two such ranges, $F_{1,l} < F_1 < F_{1,u}$ and $F_{2,l} < F_2 < F_{2,u}$, are given by \begin{equation} \tau_{12} = \epsilon_{12}^{-1} = \frac{N_{D+1}(F_{1,l},F_{1,u})N_D(F_{2,l},F_{2,u})}{N_D(F_{1,l},F_{1,u})N_{D+1}(F_{2,l},F_{2,u})}. \label{8} \end{equation} The effects of energy stretching are not modeled here because they are removed empirically from the pulse duration measurements of Norris et al. (1996a) in \S3.2. The interpulse duration measurements of Norris et al. (1996b) and the peak energy measurements of Mallozzi et al. (1995) do not require such corrections. \section{Data Analysis} \subsection{Integral Distribution} BATSE's sensitivity becomes less than unity at peak fluxes below $\sim$ 1 ph cm$^{-2}$ s$^{-1}$ (\cite{fea93}). Petrosian, Lee, \& Azzam (1994) demonstrated that BATSE is additionally biased against short duration bursts: BATSE triggers when the mean photon count rate, defined by \begin{equation} \bar C(t) = \frac{1}{\Delta t}\int_t^{t + \Delta t}C(t')dt' \end{equation} where $\Delta t =$ 64, 256, and 1024 ms are BATSE's predefined timescales, exceeds the threshold count rate, $\bar C_{lim}$, on a particular timescale. Consequently, peak photon count rates are underestimated for bursts of duration $T \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} \Delta t$, sometimes to the point of non-detection. Peak fluxes are similarly underestimated. Petrosian \& Lee (1996a) developed (1) a correction for BATSE's measured peak fluxes and (2) a non-parametric method of correcting BATSE's integral distribution. A burst's corrected peak flux is given by \begin{equation} F = \bar F\left(1 + \frac{\Delta t}{T_{90}}\right), \label{20} \end{equation} where $\bar F$ is the burst's measured peak flux and $T_{90}$ is the burst's 90$\%$ duration. Consequently, if $T_{90} \gg \Delta t$, $F \simeq \bar F$; however if $T_{90} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} \Delta t$, $F > \bar F$. Petrosian \& Lee (1996a) demonstrated that eq. \ref{20} adequately corrects BATSE's measured peak fluxes (1) on the 1024 ms timescale, (2) for bursts of duration $T_{90} >$ 64 ms, and (3) for a variety of burst time profiles. BATSE's corrected integral distribution is given by \begin{equation} N(>F_i) = \cases{1 & $(i = 1)$ \cr \prod_{j=2}^i (1 + \frac{1}{M_j}) & $(i > 1)$}, \label{21} \end{equation} where $F_i > F_{i+1}$, $F_i > F_{lim,i}(T_{90})$, and $M_i$ is the number of points in the {\it associated} set $\cal M$$_i = \{(F_j,F_{lim,j}(T_{90})) : F_j > F_i$ and $F_{lim,j}(T_{90}) < F_i\}$. The corrected threshold flux, $F_{lim}(T_{90})$, is the minimum value of the corrected peak flux that satisfies the trigger criterion: $\bar F > \bar F_{lim}$, where \begin{equation} \bar F_{lim} = \bar C_{lim} \left(\frac {\bar F}{\bar C}\right) \end{equation} and $\bar C$ is the measured peak photon count rate. By eq. \ref{20}, $F_{lim}(T_{90})$ is indeed a function of $T_{90}$ and is similarly given by \begin{equation} F_{lim}(T_{90}) = \bar F_{lim}(1 + \frac{\Delta t}{T_{90}}). \end{equation} We apply the peak flux and integral distribution corrections of Petrosian \& Lee (1996a) with one restriction: Kouveliotou et al. (1993), Petrosian, Lee, \& Azzam (1994), and Petrosian \& Lee (1996a) have demonstrated that the distribution of BATSE burst durations is bimodal, with the division occurring at $T_{90} \sim$ 2 s.
This suggests that short ($T_{90} <$ 2 s) and long ($T_{90} >$ 2 s) duration bursts may be drawn from separate populations. This notion is further supported by the tendency of short duration bursts (1) to have steeper integral distributions than long duration bursts (\cite{pl96a}), and (2) to have lower energy shifting factors than long duration bursts, especially at low peak fluxes (\cite{mea95}). Consequently, we exclude short duration bursts from our sample. Of the 1122 bursts in the 3B catalog, information sufficient to perform these corrections, subject to the above restriction, exists for 423 bursts. The corrected integral distribution is plotted in fig. 1. It can be seen that the corrected distribution differs significantly from the uncorrected distribution only at peak fluxes below $F \sim$ 0.4 ph cm$^{-2}$ s$^{-1}$. For purposes of fitting, we truncate and normalize the integral distribution at $F =$ 0.316 ph cm$^{-2}$ s$^{-1}$, which corresponds to a trigger efficiency of $\sim \frac{1}{2}$. The remaining 397 bursts are divided into eighteen bins: fifteen are of logarithmic length 0.1, and the brightest three are of logarithmic length 0.2. \subsection{Time Dilation and Energy Shifting Factors} The pulse duration time dilation factors of Norris et al. (1996a), computed using both peak alignment and auto-correlation statistics, are subject to energy stretching: pulse durations tend to be shorter at higher energies (\cite{fea95}); consequently, pulse duration measurements of redshifted bursts are necessarily underestimated. Furthermore, Norris et al. (1996a) demonstrated that the unavoidable inclusion of the interpulse intervals in these analyses has a similar effect. To correct for these effects, Norris et al. (1996a) provided a means of calibration: they stretched and shifted, respectively, the time profiles and the energy spectra of the bursts of their reference bin by factors of 2 and 3, and from these ``redshifted" bursts, they computed ``observed" time dilation factors. For each statistic, we have fitted these ``observed" time dilation factors to the ``actual" time dilation factors of 2 and 3 with a power law which necessarily passes through the origin. Calibrated time dilation factors are determined from these fits and are plotted in fig. 2. These calibrated time dilation factors are consistent with both the interpulse duration time dilation factors of Norris et al. (1996b) and the energy shifting factors (long duration bursts only) of Mallozzi et al. (1995) (see \S4), neither of which requires significant energy stretching corrections. The interpulse duration time dilation factors were computed for various combinations of temporal resolutions and signal-to-noise thresholds. Norris et al. (1996b) provided error estimates for two such combinations, which they described as ``conservative" with respect to their statistical significance. These time dilation factors and the energy shifting factors of Mallozzi et al. (1995) are additionally plotted in fig. 2. All 22 of the time dilation and energy shifting factors are fit to in \S4. \section{Model Fits} Both the standard candle luminosity model and the power law luminosity distribution model have been $\chi^2$-fitted to the corrected and binned differential distribution of fig. 1 (see \S3.1) and to the time dilation and energy shifting factors of fig. 2 (see \S3.2). Additionally, both models have been $\chi^2$-fitted to the union of these data sets.
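These fits reduce to the repeated evaluation of eqs. \ref{10a} and \ref{8} over grids of model parameters. As an illustration (a schematic sketch, not an excerpt from our fitting code), the standard candle pieces can be written in a few lines of Python; the Hubble distance below is quoted for $h =$ 1, and the overall normalization is irrelevant since it cancels in the ratios that are fit to.
\begin{verbatim}
import numpy as np

C_OVER_H0 = 9.25e27  # Hubble distance c/H_0 in cm for h = 1 (illustrative)

def f(chi, q):
    """The analytic kernel f(chi, q) of the standard candle solution."""
    return (2.0 * (1.0 - (1.0 - chi) ** (3 + q))
            / ((1 + q) * (2 + q) * (3 + q))
            - 2.0 * chi * (1.0 - chi) ** (2 + q) / ((1 + q) * (2 + q))
            - chi ** 2 * (1.0 - chi) ** (1 + q) / (1 + q))

def N_gt_F(F, L0, D, z_M):
    """Unnormalized standard candle integral distribution N(>F); F is a
    peak flux in ph cm^-2 s^-1 and L0 a luminosity in ph s^-1, so the
    argument of the square root carries units of cm^-2."""
    chi1 = 1.0 / (1.0 + 4.0 * C_OVER_H0 * np.sqrt(np.pi * F / L0))
    chi2 = 1.0 - (1.0 + z_M) ** -0.5
    return f(min(chi1, chi2), 8 - 2 * D)

def mean_scale_factor(F_l, F_u, L0, D, z_M):
    """Mean of (1 + z) over the band F_l < F < F_u: the band count is
    recomputed with the evolution exponent D replaced by D + 1; the sign
    of the band difference cancels in the ratio."""
    band = lambda d: N_gt_F(F_u, L0, d, z_M) - N_gt_F(F_l, L0, d, z_M)
    return band(D + 1) / band(D)
\end{verbatim}
A model time dilation (or inverse energy shifting) factor between two flux bands is then simply the ratio of two such means, and the $\chi^2$ surfaces described below are obtained by scanning the model parameters and comparing these quantities against the binned data.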
In the case of the standard candle model, $\Delta\chi^2$ confidence regions, as prescribed by Press et al. (1989), are computed on a 100$^2$-point grid. In the case of the power law model, $\Delta\chi^2$ confidence regions are computed on a 50$^4$-point grid and are projected into three two-dimensional planes. \subsection{Standard Candle Luminosity Model} The standard candle model consists of three parameters: $h^2L_0$, $D$, and $z_M$, where $h = H_0/100$. By eqs. \ref{3c} and \ref{3d}, $z_M$ is constrained by \begin{equation} z_M > \left(1 + \frac{H_0}{4c}\left(\frac{L_0}{\pi F_m}\right)^{\frac{1}{2}}\right)^2 - 1, \label{26} \end{equation} where $F_m =$ 0.201 ph cm$^{-2}$ s$^{-1}$ is the peak flux of BATSE's faintest burst. However, above this limit, $z_M$ is independent of the data. The standard candle model fits both the differential distribution ($\chi^2_m =$ 18.3, $\nu =$ 16) and the time dilation and energy shifting factors ($\chi^2_m =$ 16.2, $\nu =$ 20). The significance of the latter fit testifies to the consistency of the independent time dilation and energy shifting measurements. The $\Delta\chi^2$ confidence regions of these fits (fig. 3), while demonstrating strong correlations between $h^2L_0$ and $D$, do not place bounds on either parameter. However, the latter fit places strong bounds on $h^2L_0$ for reasonable values of $D$. The standard candle model additionally fits the union of these data sets ($\chi^2_m =$ 38.2, $\nu =$ 38). The $\Delta\chi^2$ confidence region of this joint fit (fig. 4) places strong bounds on both $h^2L_0$ and $D$: $h^2L_0 =$ 2.3$^{+0.8}_{-0.7}\times10^{57}$ ph s$^{-1}$ and $D =$ 3.6$^{+0.3}_{-0.3}$. By eq. \ref{26}, this implies that $z_M >$ 6.0$^{+1.5}_{-1.3}$, of which the implications are discussed in \S5. \subsection{Power Law Luminosity Distribution Model} The power law model consists of five parameters: $h^2\bar L$, $D$, $\beta$, $K$, and $z_M$, where \begin{equation} \bar L = L_m\left(\frac{1-\beta}{2-\beta}\right)\left(\frac{K^{2-\beta}-1}{K^{1-\beta}-1}\right) \end{equation} is the mean luminosity of the luminosity distribution, $\phi(L)$. The fifth parameter, $z_M$, is again constrained by eq. \ref{26}, except with $L_0 \rightarrow L_m$. However, unlike in the standard candle model, $z_M$ is not necessarily independent of the data above this limit. For purposes of fitting, we assume that $z_M$ is indeed beyond what BATSE observes. The limitations of this assumption are discussed in \S5. The power law model fits the differential distribution ($\chi^2_m=11.2$, $\nu =$ 14), the time dilation and energy shifting factors ($\chi^2_m =$ 13.6, $\nu =$ 18), and the union of these data sets ($\chi^2_m =$ 34.1, $\nu =$ 36). The $\Delta\chi^2 $ confidence region of the joint fit (fig. 5) places strong bounds on $D$: $D =$ 3.7$^{+0.4}_{-0.5}$ and for $h^2\bar L <$ 10$^{57}$ ph s$^{-1}$, 3.4 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} D \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 3.8 to 1-$\sigma$. This region is additionally divisible into four unique subregions (see tab. 1). Using the terminology of Hakkila et al. (1995, 1996), the luminosity distribution of each subregion is described as $L_m$ dominated (independent of $L_M$), $L_M$ dominated (independent of $L_m$), range dominated (dependent upon both $L_m$ and $L_M$), or similar to a standard candle ($L_m \sim L_M$). 
For each subregion, bounds are placed on $\bar L$, $\beta$, $K$, and $K_{90}$, where $K_{90}$ is the 90\% width of the {\it observed} luminosity distribution and is given by (following the convention of Ulmer \& Wijers (1995)) \begin{equation} K_{90} = \frac{L_{95}}{L_5}, \end{equation} where $L_p$, the ``$p$\% luminosity" of this distribution, is defined by \begin{equation} N_{L < L_{p}}(>F_m) = \left(\frac{p}{100}\right)N_{L < L_M}(>F_m). \label{p} \end{equation} It is important to note that others (e.g., Horack, Emslie, \& Meegan (1994)) define $K_{90}$ differently: \begin{equation} K_{90} = \cases{\frac{L_{90}}{L_m} & ($L_m$ dominated) \cr \frac{L_M}{L_{10}} & ($L_M$ dominated)}, \end{equation} which results in reduced values. The former definition is applied here. \section{Conclusions} Assuming no evolution ($D =$ 3), Fenimore \& Bloom (1995), Nemiroff et al. (1996), and Horack, Mallozzi, \& Koshut (1996) have demonstrated that BATSE's differential distribution is inconsistent with a time dilation factor of $\sim$ 2 between the peak flux extremes of Norris et al. (1996a, 1996b). This has prompted suggestions that either the bursts' observed time dilation is largely intrinsic or that strong evolutionary effects are present in the differential distribution. The former explanation, however, is discredited by the degree to which the time dilation and energy shifting measurements are consistent. Hakkila et al. (1996), also assuming no evolution, have demonstrated that the differential distribution alone is incompatible with a standard candle luminosity. These results agree with our results for $D = 3$. We additionally determine at what values of $D$ these incompatibilities disappear: $D =$ 3.6$^{+0.3}_{-0.3}$ for the standard candle model and $D =$ 3.7$^{+0.4}_{-0.5}$ for the power law model. For mean luminosities $h^2\bar L <$ 10$^{57}$ ph s$^{-1}$, evolution is even more tightly constrained: 3.4 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} D \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 3.8 (to 1-$\sigma$). Horack, Emslie, \& Meegan (1994), Emslie \& Horack (1994), Ulmer \& Wijers (1995), Hakkila et al. (1995, 1996), and Ulmer, Wijers, \& Fenimore (1995) have demonstrated that $K_{90} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10 for a variety of galactic halo and cosmological models. When cosmological, these models assume no evolution. However, when $D >$ 3, $K_{90}$ need not be so tightly constrained (Horack, Emslie, \& Hartmann 1995, \cite{hea_96}). We find that for 10$^{57}$ ph s$^{-1}$ $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} h^2\bar L \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^{57.5}$ ph s$^{-1}$, $K_{90}$ is only constrained to be less than $\sim 10^2$ (see fig. 5). Furthermore, for $h^2\bar L \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^{56}$ ph s$^{-1}$, $K_{90} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 10. The former result is more conservative than estimates which assume no evolution. The latter is the result of new solutions which do not fit the data for $D =$ 3. In the standard candle model, the redshift of BATSE's faintest burst is 6.0$^{+1.5}_{-1.3}$, which is much greater than that which is measured for galaxies. The power law model, under certain conditions, provides more reasonable estimates. In tab.
2, 1-$\sigma$ bounds are placed on the redshift of BATSE's faintest burst for three representative luminosities: $L_{10}$, $L_{50}$, and $L_{90}$, where $L_p$ is as defined in eq. \ref{p}. (For example, $L_{50}$ is the median luminosity of the {\it observed} luminosity distribution, and 80\% of the {\it observed} bursts have luminosities between $L_{10}$ and $L_{90}$.) Defining the redshift $z_p$ as the maximum redshift at which bursts of luminosity $L_p$ can be detected, we find that 2.9 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z_{50} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 4.6 for $h^2\bar L \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^{57}$ ph s$^{-1}$ and 4.2 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z_{50} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 9.4 otherwise. However, $z_{10} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 4.2 for all mean luminosities and $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 2.3 for $h^2\bar L \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^{57}$ ph s$^{-1}$. If $L_p \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} L_{90}$, the redshift of this burst is again quite large. Consequently, a mean luminosity of $h^2\bar L \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^{57}$ ph s$^{-1}$ coupled with a luminosity for BATSE's faintest burst of $L_p < L_{50}$ is favored. In conclusion, the results presented in this paper demonstrate that when both the differential distribution and the time dilation and energy shifting factors are fitted to, moderate evolution is required if an Einstein-de Sitter cosmology, a power law spectrum of photon number index -1, no luminosity evolution, and in the case of the power law model, a non-observable maximum burst redshift are assumed. We have additionally demonstrated that under these conditions, the 90\% width of the {\it observed} luminosity distribution is not necessarily $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10, as appears to be the case if no evolution is assumed. Finally, redshift considerations indicate that if the redshifts of the faintest bursts are to be compatible with that which is currently known about galaxies, the standard candle model is unacceptable and for the power law model, a mean burst luminosity $h^2\bar L \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^{57}$ ph s$^{-1}$ is favored. \acknowledgments This work was supported in part by NASA grant NAG5-2857 and an AAS/NSF-REU grant. We are also grateful to E. E. Fenimore and E. D. Feigelson for useful discussions.
\clearpage \begin{deluxetable}{cccccc} \tablecolumns{6} \tablewidth{0pc} \tablecaption{Power Law Model $\Delta\chi^2$ Confidence Subregions} \tablehead{ \colhead{Subregion} & \colhead{$\phi(L)$} & \colhead{$\bar L$} & \colhead{$\beta$} & \colhead{$K$} & \colhead{$K_{90}$}} \startdata 1 & $L_M$ dominated & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} L_0$ & unbounded\tablenotemark{a} & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 10$^3$ & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 10$^{0.5}$\tablenotemark{b} \nl 2 & range dominated & $\sim L_0$ & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 1.5 & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^3$ & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^2$ \nl 3 & standard candle & $\sim L_0$ & unbounded & $\sim$ 1 & $\sim$ 1 \nl 4 & $L_m$ dominated & $\sim L_0$ & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 2.5 & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 10$^{2.5}$ & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10 \nl \enddata \tablenotetext{a}{$<$ 2 for cosmological values of $\bar L$} \tablenotetext{b}{$<$ 10$^2$ for cosmological values of $\bar L$} \end{deluxetable} \clearpage \begin{deluxetable}{ccc} \tablecolumns{3} \tablewidth{0pc} \tablecaption{Power Law Model Redshift of BATSE's Faintest Burst\tablenotemark{a}} \tablehead{ \colhead{$L_p$} & \colhead{$h^2\bar L \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 10$^{57}$ ph s$^{-1}$} & \colhead{$h^2\bar L \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 10$^{57}$ ph s$^{-1}$}} \startdata $L_{10}$ & 1.0 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z_{10} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 2.3 & 1.2 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z_{10} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 4.2 \nl $L_{50}$ & 2.9 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z_{50} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 4.6 & 4.2 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z_{50} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 9.4 \nl $L_{90}$ & 5.1 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z_{90} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 6.1 & 5.3 $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z_{90} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 13.1 \nl \enddata \tablenotetext{a}{to 1-$\sigma$} \end{deluxetable} \clearpage
\section{Introduction} The growth of cosmological structure in the Universe is determined primarily by (Newtonian) gravitational forces. Unlike the electrostatic force, which can be both attractive and repulsive and for which shielding is important, the ubiquitous attraction of the gravitational force leads to extremely dense structures, relative to the average density in the Universe. Galaxies, for example, are typically $10^4$ times more dense than their surrounding environment, and substructure within them can be orders of magnitude more dense. Modelling such large density contrasts is difficult with fixed grid methods and, consequently, particle-based solvers are an indispensable tool for conducting simulations of the growth of cosmological structure. The Lagrangian nature of particle codes makes them inherently adaptive without requiring the complexity associated with adaptive Eulerian methods. The Lagrangian Smoothed Particle Hydrodynamics (SPH,\cite{GM77}) method also integrates well with gravitational solvers using particles, and because of its simplicity, robustness and ability to easily model complex geometries, has become widely used in cosmology. Further, the necessity to model systems in which orbit crossing, or phase wrapping, occurs (either in collisionless fluids or in collisional systems) demands a fully Lagrangian method that tracks mass. While full six-dimensional (Boltzmann) phase-space models have been attempted, the resolution is still severely limited on current computers for most applications. Particle solvers of interest in cosmology can broadly be divided into hybrid direct plus grid-based solvers such as Particle-Particle, Particle-Mesh methods (P${}^3$M,\cite{He81}) and ``Tree'' methods which use truncated low order multipole expansions to evaluate the force from distant particles \cite{BH86}. Full multipole methods \cite{GR87}, are slowly gaining popularity but have yet to gain widespread acceptance in the cosmological simulation community. There are also a number of hybrid tree plus particle-mesh methods in which an efficient grid-based solver is used for long-range gravitational interactions with sub-grid forces being computed using a tree. Special purpose hardware \cite{Su90} has rendered the direct PP method competitive in small simulations (fewer than 16 million particles), but it remains unlikely that it will ever be competitive for larger simulations. The P${}^3$M algorithm has been utilized extensively in cosmology. The first high resolution simulations of structure formation were conducted by Efstathiou \& Eastwood \cite{EE81} using a modified P${}^3$M plasma code. In 1998 the Virgo Consortium used a P${}^3$M code to conduct the first billion particle simulation of cosmological structure formation \cite{Ev02}. The well-known problem of slow-down under heavy particle clustering, due to a rapid rise in the number of short-range interactions, can be largely solved by the use of adaptive, hierarchical, sub-grids \cite{C91}. Only when a regime is approached where multiple time steps are beneficial does the adaptive P${}^3$M (AP${}^3$M) algorithm become less competitive than modern tree-based solvers. Further, we note that a straightforward multiple time-step scheme has been implemented in AP${}^3$M with a factor of 3 speed-up reported \cite{DE93}. P${}^3$M has also been vectorized by a number of groups including Summers \cite{SU93}. 
Shortly after, both Ferrell \& Bertschinger \cite{FB94} and Theuns \cite{T94} adapted P${}^3$M to the massively parallel architecture of the Connection Machine. This early work highlighted the need for careful examination of the parallelization strategy because of the load imbalance that can result in gravitational simulations as particle clustering develops. Parallel versions of P${}^3$M that use a 1-dimensional domain decomposition, such as the P4M code of Brieu \& Evrard \cite{BE00} develop large load imbalances under clustering rendering them useful only for very homogeneous simulations. Development of vectorized treecodes \cite{H90,HK89} predates the early work on P${}^3$M codes and a discussion of a combined TREE+SPH (TREESPH) code for massively parallel architectures is presented by Dav\'{e} {\it et al.\thinspace}\, \cite{D97}. There are now a number of combined parallel TREE+SPH solvers \cite{W03,K03,V01,L00} and TREE gravity solvers \cite{MM00,UA01,D96}. Pearce \& Couchman \cite{PC97} have discussed the parallelization of AP${}^3$M+SPH on the Cray T3D using Cray Adaptive Fortran (CRAFT), which is a directive-based parallel programming methodology. This code was developed from the serial HYDRA algorithm \cite{CT95} and much of our discussion in this paper draws from this first parallelization of AP${}^3$M+SPH. A highly efficient distributed memory parallel implementation of P${}^3$M using the Cray SHMEM library has been developed by MacFarland {\it et al.\thinspace}\, \cite{M98}, and further developments of this code include a translation to MPI-2, the addition of AP${}^3$M subroutines and the inclusion of an SPH solver \cite{T03}. Treecodes have also been combined with grid methods to form the Tree-Particle-Mesh solver \cite{Jim,BO00,W02,B02,D03,VS05}. The algorithm is somewhat less efficient than AP${}^3$M in a fixed time-step regime, but its simplicity offers advantages when multiple time-steps are considered \cite{VS05}. Another interesting, and highly efficient N-body algorithm is the Adaptive Refinement Tree (ART) method \cite{KK97} which uses a short-range force correction that is calculated via a multi-grid solver on refined meshes. There are a number of factors in cosmology that drive researchers towards parallel computing. These factors can be divided into the desire to simulate with the highest possible resolution, and hence particle number, and also the need to complete simulations in the shortest possible time frame to enable rapid progress. The desire for high resolution comes from two areas. Firstly, simultaneously simulating the growth of structure on the largest and smallest cosmological scales requires enormous mass resolution (the ratio of mass scales between a supercluster and the substructure in a galaxy is $>10^9$). This problem is fundamentally related to the fact that in the currently favoured Cold Dark Matter \cite{B84} cosmology structure grows in a hierarchical manner. A secondary desire for high resolution comes from simulations that are performed to make statistical predictions. To ensure the lowest possible sample variance the largest possible simulation volume is desired. For complex codes, typically containing tens of thousands of lines, the effort in developing a code for distributed-memory machines, using an API such as MPI \cite{MPI}, can be enormous. The complexity within such codes arises from the subtle communication patterns that are disguised in serial implementations. 
Indeed, as has been observed by the authors, development of an efficient communication strategy for a distributed memory version of the P${}^3$M code has required substantially more code than the P${}^3$M algorithm itself (see \cite{M98}). This is primarily because hybrid, or multi-part solvers, of which P${}^3$M is a classic example, have data structures that require significantly different data topologies for optimal load balance at different stages of the solution cycle. Clearly a globally addressable work space renders parallelization a far simpler task in such situations. It is also worth noting that due to time-step constraints and the scaling of the algorithm with the number of particles, doubling the linear resolution along an axis of a simulation increases the computational work load by a factor larger than 20; a further doubling would lead to a workload more than 400 times greater. The above considerations lead to the following observation: modern SMP servers with their shared memory design and superb performance characteristics are an excellent tool for conducting simulations requiring significantly more computational power than that available from a workstation. Although such servers can never compete with massively parallel machines for the largest simulations, their ease of use and programming renders them highly productive computing environments. The OpenMP (http://www.openmp.org) API for shared-memory programming is simple to use and enables loop level parallelism by the insertion of pragmas within the source code. Other than their limited expansion capacity, the strongest argument against purchasing an SMP server remains hardware cost. However, there is a trade-off between science accomplishment and development time that must be considered above hardware costs alone. Typically, programming a Beowulf-style cluster for challenging codes takes far longer and requires a significantly greater monetary and personnel investment on a project-by-project basis. Conversely, for problems that can be efficiently and quickly parallelized on a distributed memory architecture, SMP servers are not cost effective. The bottom line remains that individual research groups must decide which platform is most appropriate. The code that we discuss in this paper neatly fills the niche between workstation computations and massively parallel simulations. There is also a class of simulation problems in cosmology that have particularly poor parallel scaling, regardless of the simulation algorithm used (the fiducial example is the modelling of single galaxies, see \cite{TC00}). This class of problems corresponds to particularly inhomogeneous particle distributions that develop a large disparity in particle-update timescales (some particles may be in extremely dense regions, while others may be in very low density regions). Only a very small number of particles (insufficient to be distributed effectively across multiple nodes) will require a large number of updates due to their small time-steps. For this type of simulation the practical limit of scalability appears to be order 10 PEs. The layout of the paper is as follows: in section 2 we review the physical system being studied. This is followed by an extensive exposition of the P${}^3$M algorithm and the improvements that yield the AP${}^3$M algorithm. The primary purpose of this section is to discuss some subtleties that directly impact our parallelization strategy.
At the same time we also discuss the SPH method and highlight the similarities between the two algorithms. Section 2 concludes with a discussion of the serial HYDRA code. Section 3 begins with a short discussion of the memory hierarchy in RISC (Reduced Instruction Set Computer) systems, and how eliminating cache-misses and ensuring good cache reuse ensures optimal performance on these machines. This is followed by a discussion of a number of code optimizations for RISC CPUs that also lead to performance improvements on shared memory parallel machines (primarily due to increased data locality). In particular we discuss improvements in particle bookkeeping, such as particle index reordering. While particle reordering might be considered an expensive operation, since it involves a global sort, it actually dramatically improves run time because of bottlenecks in the memory hierarchy of RISC systems. In section 4 we discuss in detail the parallelization strategies adopted in HYDRA\_OMP. To help provide further understanding we compare the serial and parallel call trees. In section 5 we consolidate material from sections 3 \& 4 by discussing considerations for NUMA machines and in particular the issue of data placement. Performance figures are given in Section 6, and we present our conclusions in section 7. \section{Review of the serial algorithm} \subsection{Equation set to be solved} The simulation of cosmic structure formation is posed as an initial value problem. Given a set of initial conditions, which are usually constrained by experimental data, such as the WMAP data \cite{WM03}, we must solve the following gravito-hydrodynamic equations: \begin{enumerate} \item the continuity equations, \begin{equation} { d \rho_g \over dt}+\rho_g \nabla. {\bf {v}}_g=0,\;\;\;{ d \rho_{dm} \over dt}+\rho_{dm}\nabla. {\bf {v}}_{dm}=0 \label{gravito1} \end{equation} where $g$ denotes gas and $dm$ dark matter. \item the Euler and acceleration equations, \begin{equation} {d {\bf {v}}_g \over dt}=-{1 \over \rho_g} \nabla P-\nabla \phi, \;\;\; {d {\bf {v}}_{dm} \over dt}=-\nabla \phi, \end{equation} \item the Poisson equation, \begin{equation} \nabla^2 \phi = 4 \pi G (\rho_g+\rho_{dm}), \label{poisson} \end{equation} \item the entropy conservation equation, \begin{equation} {ds \over dt}=0, \end{equation} \end{enumerate} where the conservation of entropy is a result of ignoring dissipation, viscosity and thermal conductivity ({\it i.e.\ } an ideal fluid). The dynamical system is closed by the equation of state $P=P(\rho_g,s)$. We assume an ideal gas equation of state, with $\gamma=5/3$ in our code, although many others are possible. Alternatively, the entropy equation can be substituted with the conservation of energy equation, \begin{equation} {d u \over dt} = -{P \over\rho_g} \nabla. {\bf v}_g, \label{gravito2} \end{equation} and the equation of state is then $P=P(\rho_g,u)$. We note that the use of a particle-based method ensures that the continuity equations are immediately satisfied. \subsection{Gravitational solver} Let us first discuss the basic features of the P${}^3$M algorithm; a thorough review can be found in \cite{He81}.
The fundamental basis of the P${}^3$M algorithm is that the gravitational force can be separated into short and long range components, {\it i.e.\ }, \begin{equation} {\bf F}_{grav}={\bf F}_{short}+{\bf F}_{long}, \end{equation} where ${\bf F}_{long}$ will be provided by a Fourier-based solver and ${\bf F}_{short}$ will be calculated by summing over particles within a given short range radius. The ${\bf F}_{long}$ force is typically known as the PM force, for Particle-Mesh, while the ${\bf F}_{short}$ range force is typically known as the PP force, for Particle-Particle. The accuracy of the ${\bf F}_{grav}$ force can be improved by further smoothing the mesh force, ${\bf F}_{long}$, and hence increasing the range over which the short-range calculation is done, at the cost of an increased number of particle--particle interactions. The first step in evaluating the PM force is to interpolate the mass density of the particle distribution on to a grid, which can be viewed as a map from a Lagrangian representation to an Eulerian one. The interpolation function we use is the `Triangular Shaped Cloud' (TSC) `assignment function' (see \cite{He81} for a detailed discussion of possible assignment functions). Two benefits of using TSC are good suppression of aliasing from power above the Nyquist frequency of the grid and a comparatively low directional force error around the grid spacing. The mass assignment operation count is ${\bf O} (N)$, where $N$ is the number of particles. Once the mass density grid has been constructed it is Fourier transformed using an FFT routine, which is an ${\bf O} (L^3 \log L)$ operation, where $L$ is the extent of the Fourier grid in one direction. The resulting k-space field is then multiplied with a Green's function that is calculated to minimize errors associated with the mass assignment procedure (see Hockney \& Eastwood for a review of the `Q-minimization' procedure). Following this convolution, the resulting potential grid is differenced to recover the force grid. We use a 10-point differencing operator which incorporates off-axis components and reduces directional force errors, but many others are possible. Finally, the PM accelerations are found from the force grid using the mass assignment function to interpolate the acceleration field. The PM algorithm has an operation cost that is approximately ${\bf O} (\alpha N+\beta L^3 \log L)$ where $\alpha$ and $\beta$ are constants (the ${\bf O} (L^3)$ cost of the differencing is adequately approximated by the logarithmic term describing the FFT). Resolution above the Nyquist frequency of the PM code, or equivalently sub PM grid resolution, is provided by the pair-wise (shaped) short-range force summation. Supplementing the PM force with the short-range PP force gives the full P${}^3$M algorithm, and the execution time scales approximately in proportion to $\alpha N+\beta L^3 \log L + \gamma \sum N^2_{pp}$, where $\gamma$ is a constant and $N_{pp}$ is the number of particles in the short range force calculation within a specified region. The summation is performed over all the PP regions, which are identified using a chaining mesh of size {\em Ls}${}^3$; see figure~\ref{chaining} for an illustration of the chaining mesh overlaid on the potential mesh.
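For concreteness, the following schematic Python fragment implements periodic TSC mass assignment on an $L^3$ mesh. It is a transcription of the standard kernel weights (see \cite{He81}), not code from {\small HYDRA}, and the per-particle loop is written for clarity rather than speed.
\begin{verbatim}
import numpy as np

def tsc_assign(pos, mass, L):
    """Assign particle masses to a periodic L^3 density grid with the
    `Triangular Shaped Cloud' kernel.

    pos  : (N, 3) array of positions in grid units, 0 <= pos < L
    mass : (N,) array of particle masses
    """
    rho = np.zeros((L, L, L))
    for p, m in zip(pos, mass):
        i = np.floor(p + 0.5).astype(int)  # nearest grid point per axis
        d = p - i                          # offset from it, |d| <= 1/2
        # 1-d TSC weights for the grid points i-1, i and i+1 on each
        # axis; each column sums to unity, so total mass is conserved.
        w = np.array([0.5 * (0.5 - d) ** 2,
                      0.75 - d ** 2,
                      0.5 * (0.5 + d) ** 2])
        for a in range(3):
            for b in range(3):
                for c in range(3):
                    rho[(i[0] + a - 1) % L,
                        (i[1] + b - 1) % L,
                        (i[2] + c - 1) % L] += m * w[a, 0] * w[b, 1] * w[c, 2]
    return rho
\end{verbatim}
Interpolating the differenced force grid back onto the particles with the same weights is the standard device for avoiding self-forces in the PM component.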
P${}^3$M suffers the drawback that under heavy gravitational clustering the short range sum used to supplement the PM force slows the calculation down dramatically: the $N^2_{pp}$ term dominates as an increasingly large number of particles contribute to the short range sum. Although acutely dependent upon the particle number and relative clustering in a simulation, the algorithm may slow down by a factor of between 10 and 100, or possibly more. While finer meshes partially alleviate this problem, they quickly become inefficient by wasting computation on areas that do not need higher resolution. \begin{figure}[t] \vspace{10cm} \special{psfile=chaining2.eps hscale=90 vscale=90 hoffset=-70 voffset=-220} \caption{Overlay of the chaining mesh on top of the potential mesh to show spacing and the search radius of the short range force. These two meshes do not need to be commensurate except on the scale of the box; the required matching of forces is achieved by shaping the long and short range force components, ${\bf F}_{short}$ and ${\bf F}_{long}$.} \label{chaining} \end{figure} Adaptive P${}^3$M remedies the slow-down under clustering of P${}^3$M by isolating regions where the $N_{pp}^2$ term dominates and solving for the short range force in these regions using FFT methods on a sub-grid, which is then supplemented by short range calculations involving fewer neighbours. This process is a repeat of the P${}^3$M algorithm on the selected regions, with an isolated FFT and shaped force. At the expense of a little additional bookkeeping, this method circumvents the sometimes dramatic slow-down of P${}^3$M. The operation count is now approximately, \begin{equation} \alpha N + \beta L^3 \log L + \sum_{j=1}^{n_{ref}} \left[ \alpha_j N_j + \beta_j L_j^3 \log L_j + \gamma_j \sum N_{j_{pp}}^2 \right], \end{equation} where $n_{ref}$ is the number of refinements. The $\alpha_j$ and $\gamma_j$ are all expected to be very similar to the $\alpha$ and $\gamma$ of the main solver, while the $\beta_j$ are approximately four times larger than $\beta$ due to the isolated Fourier transform. Ideally during the course of the simulation the time per iteration approaches a constant, roughly 2-4 times that of a uniform distribution (although when the SPH algorithm is included this slow-down can be larger). \subsection{SPH solver} When implemented in an adaptive form \cite{W81}, with smoothing performed over a fixed number of neighbour particles, SPH is an order $N$ scheme and fits well within the P${}^3$M method since the short-range force-supplement for the mesh force can be used to find the particles which are required for the SPH calculation. There are a number of excellent reviews of the SPH methodology \cite{HK89,M92,S96} and we present, here, only those details necessary to understand our specific algorithm implementation. Full details of our implementation can be found in \cite{rob2}. We use an explicit `gather' smoothing kernel and the symmetrization of the equation of motion is achieved by making the replacement, \begin{equation} \nabla_j \overline{W}({\bf r}_i-{\bf r}_j,h_j,h_i) = - \nabla_i \overline{W}({\bf r}_i-{\bf r}_j,h_i,h_j) + {\bf O}(\nabla h) \end{equation} in the `standard' SPH equation of motion (see \cite{S96}, for example). Note that the sole purpose of `kernel averaging' in this implementation, denoted by the bar on the smoothing kernel $W$, is to ensure that the above replacement is correct to ${\bf O}(h)$.
Hence the equation of motion is, \[ {d {\bf v}_i \over dt}= - \sum_{j=1,r_{ij}<2h_i}^N m_j \; ({P_i \over \rho_i^2}+{\Pi_{ij} \over 2}) \; \nabla_i \overline{W}({\bf r}_i-{\bf r}_j,h_i,h_j) \] \begin{equation} \;\;\;\;\;\;\;\;\;\;\;\;\; + \sum_{j=1,r_{ij}<2h_j}^N m_j \; ({P_j \over \rho_j^2}+{\Pi_{ji} \over 2}) \; \nabla_j \overline{W}({\bf r}_i-{\bf r}_j,h_j,h_i). \end{equation} The artificial viscosity, $\Pi_{ij}$, is used to prevent interpenetration of particle flows and is given by, \begin{equation} \Pi_{ij}={ -\alpha \mu_{ij} \bar{c}_{ij} + \beta \mu_{ij}^2 \over \tilde{\rho}_{ij}}f_i, \end{equation} where, \begin{equation} \mu_{ij}=\cases{ \bar{h}_{ij} {\bf v}_{ij}.{\bf r}_{ij} / (r_{ij}^2+\nu^2), & ${\bf v}_{ij}.{\bf r}_{ij}<0$; \cr 0, & ${\bf v}_{ij}.{\bf r}_{ij} \geq 0$, } \end{equation} \begin{equation} \tilde{\rho}_{ij} = \rho_i(1+(h_i/h_j)^3)/2, \end{equation} and \begin{equation} f_i={|\!< \nabla . {\bf v}>_i\!\!| \over |\!<\nabla . {\bf v}>_i\!\!| + |\!<\nabla \times {\bf v}>_i\!\!| + 0.0001c_i/h_i }, \end{equation} with bars being used to indicate averages over the $i,j$ indices. Shear correction \cite{B95,NS97} is achieved by including the $f_i$ term, which reduces the unwanted artificial viscosity in shearing flows. Note that the lack of $i-j$ symmetry in $\Pi_{ij}$ is not a concern since the equation of motion enforces force symmetry. The energy equation is given by, \begin{equation} {d u_i \over dt}=\sum_{j=1,r_{ij}<2h_i}^N m_j ({P_i \over \rho^2_i}+{\Pi_{ij} \over 2}) \; ({\bf v}_i-{\bf v}_j).\nabla_i \overline{W}({\bf r}_i-{\bf r}_j,h_i,h_j). \end{equation} The solution of these equations is comparatively straightforward. As in the AP${}^3$M solver it is necessary to establish the neighbour particle lists. The density of each particle must be evaluated and then, in a second loop, the solution to the force and energy equations can be found. Since the equation of motion does not explicitly depend on the density of particle $j$ (the artificial viscosity has also been constructed to avoid this) we emphasize that there is no need to calculate all the density values first and then calculate the force and energy equations. If one does calculate all densities first, then clearly the list of neighbours is calculated twice, or alternatively, a large amount of memory must be used to store the neighbour lists of all particles. Using our method the density can be calculated, one list of neighbours stored, and then the force and energy calculations can be quickly solved using the stored list of neighbours (see \cite{CT95}). \subsection{Summary of solution cycle for each iteration} As emphasized, the list data-structure used in the short-range force calculation provides a common feature between the AP${}^3$M and SPH solvers. Hence, once a list of particle neighbours has been found, it is simple to sort through this and establish which particles are to be considered for the gravitational calculation and the SPH calculation. Thus the incorporation of SPH into AP${}^3$M necessitates only the coordination of scalings and minor bookkeeping. The combined adaptive P${}^3$M-SPH code, `{\small HYDRA}', in serial FORTRAN 77 form is available on the World Wide Web from http://coho.physics.mcmaster.ca/hydra. The solution cycle of one time-step may be summarized as follows, \begin{enumerate} \item Assign mass to the Fourier mesh. \item Convolve with the Green's function using the FFT method to get potential. Difference this to recover mesh forces in each dimension.
\item Apply mesh force and accelerate particles. \item Decide where it is more computationally efficient to solve via the further use of Fourier methods as opposed to short-range forces and, if so, place a new sub-mesh (refinement) there. \item Accumulate the gas forces (and state changes) as well as the short range gravity for all positions not in sub-meshes. \item Repeat 1-5 on all sub-meshes until forces on all particles in simulation have been accumulated. \item Update time-step and repeat. \end{enumerate} Note that the procedure of placing meshes is hierarchical in that a further sub-mesh may be placed inside a sub-mesh. This procedure can continue to an arbitrary depth but, typically, even for the most clustered simulations, speed-up only occurs to a depth of six levels of refinement. A pseudo call-tree for the serial algorithm can be seen in figure~\ref{ctree}. The purpose of each subroutine is as follows, \begin{itemize} \item{STARTUP} Reads in data and parameter files \item{INUNIT} Calculates units of simulation from parameters in start-up files \item{UPDATERV} Time-stepping control \item{OUTPUT} Check-pointing and scheduled data output routines \item{ACCEL} Selection of time-step criteria and corrections, if necessary, for comoving versus physical coordinates \item{FORCE} Main control routine of the force evaluation subroutines \item{RFINIT \& LOAD} Set up parameters for PM and PP calculation; in LOAD, data is also loaded into particle buffers for the refinement. \item{CLIST \& ULOAD} Preparation of particle data for any refinements that may have been placed; ULOAD also unloads particle data from refinement buffers \item{REFFORCE} Calls PM routines, controls particle bookkeeping, and calls PP routines. \item{GREEN \& IGREEN} Calculation of Green's functions for periodic (GREEN) and isolated (IGREEN) convolutions. \item{MESH \& IMESH} Mass assignment, convolution call, and calculation of PM acceleration in the periodic (MESH) and isolated (IMESH) solvers. \item{CNVLT \& ICNVLT} Green's function convolution routines. \item{FOUR3M} 3-dimensional FFT routine for periodic boundary conditions. \item{LIST} Evaluation of chaining cell particle lists \item{REFINE} Check whether refinements need to be placed. \item{SHFORCE} Calculate force look-up tables for PP \item{SHGRAVSPH} Evaluate PP and SPH forces \end{itemize} \begin{figure}[t] \vspace{7cm} \special{psfile=ctrees3.ps angle=-90 hscale=51 vscale=51 hoffset=-5 voffset=250} \caption{Call tree of the HYDRA serial algorithm. Only significant subroutines are shown for clarity. The refinement routines are the same as the top level routines, modulo the lack of periodic wrap-around. In an object-oriented framework these routines would be prime candidates for overloading.} \label{ctree} \end{figure} \section{Optimizations of the serial code for RISC processors} \subsection{Memory hierarchy of the RISC architecture} The architecture of RISC CPUs incorporates a memory hierarchy with widely differing levels of performance. Consequently, the efficiency of a code running on a RISC processor is dictated almost entirely by the ratio of the time spent in memory accesses to the time spent performing computation. This fact can lead to enormous differences in code performance. The relative access times for the hierarchy are almost logarithmic. Access to the first level of cache memory takes 1-2 processor cycles, while access to the second level of cache memory takes approximately 5 times as long. Access to main memory takes approximately 10 times longer.
It is interesting to note that SMP-NUMA servers provide further levels to this hierarchy, as will be discussed later. To improve memory performance, when retrieving a word from main memory three other words are typically retrieved: the `cache line'. If the additional words are used within the computation on a short time scale, the algorithm exhibits good cache reuse. It is also important not to access memory in a disordered fashion, {\it i.e.\ } optimally one should only require memory references that are already stored within the caches. Thus to exhibit good performance on a RISC processor, a code must exhibit both good cache reuse and a low number of cache misses. In practice, keeping cache misses to a minimum is the first objective since cache reuse is comparatively easy to achieve given a sensible ordering of the calculation (such as a FORTRAN {\tt DO} loop). \subsection{Serial Optimizations} A number of optimizations for particle codes that run on RISC processors are discussed in Decyk {\it et al.\thinspace} \cite{D96}. Almost all of these optimizations are included within our serial code, with the exception of the mass assignment optimizations. Indeed a large number of their optimizations, especially those relating to combining x, y, z coordinate arrays into one 3-d array, can be viewed as good programming style. While Decyk {\it et al.\thinspace}\, demonstrate that the complexity of the periodic mass assignment function prevents compilers from software pipelining the mesh writes, we do not include their suggested optimization of removing the modulo statements and using a larger grid. However, the optimization is naturally incorporated in our isolated solver. The first optimization we attempted was the removal of a `vectorizable' Numerical Recipes FFT used within the code (FOURN, see \cite{NR}). Although the code uses an optimized 3-d FFT that can call the FOURN routine repeatedly using either a 1-d or 2-d FFT strategy (to reduce the number of cache misses exhibited by the FOURN routine when run in 3-d), the overall performance remains quite poor. Therefore we replaced this routine with the FFTPack (see \cite{S82}) routines available from Netlib, and explicitly made the 3-d FFT a combination of 1-d FFTs. Although there is no question that FFTW \cite{FJ98} provides the fastest FFTs on almost all architectures, we have found little difference between FFTPack and FFTW within our parallel 3-d FFT routine. The greatest performance improvement is seen in the isolated solver where the 3-d FFT is compacted to account for the fact that multiple octants are initially zero. Linked lists (hereafter the list array is denoted {\tt ll}) are a common data structure used extensively in particle-in-cell type codes (see \cite{He81} for an extensive review of their use). For a list of particles which is cataloged according to the cells in which they reside, it is necessary to store an additional array which holds the label of the first particle in the list for a particular cell. This array is denoted {\tt ihc}, for Integer Head of Chain. List traversal for a given cell is frequently programmed in FORTRAN using an {\tt IF...THEN...GOTO} structure (although it can be programmed with a {\tt DO WHILE} loop), with the loop exiting on the {\tt IF} statement finding a value of zero in the linked list. Since the loop `index' (the particle index {\tt i}) is found recursively, the compiler cannot make decisions about a number of optimization processes, particularly software pipelining, for which {\tt DO} loops are usually better.
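To fix ideas, the following schematic Python fragment reproduces the bookkeeping just described ({\tt ihc} and {\tt ll} follow the naming above); a sentinel of $-1$ stands in for FORTRAN's zero, since Python arrays are zero-based. This is an illustration, not code from {\small HYDRA}.
\begin{verbatim}
import numpy as np

def build_linked_list(cell_of, ncells):
    """Head-of-chain/linked-list construction: ihc[c] holds the most
    recently added particle of cell c, and ll[i] the next particle in
    the same cell as particle i (-1 marks the end of the chain)."""
    ihc = np.full(ncells, -1, dtype=int)
    ll = np.full(len(cell_of), -1, dtype=int)
    for i, c in enumerate(cell_of):
        ll[i] = ihc[c]
        ihc[c] = i
    return ihc, ll

def particles_in_cell(c, ihc, ll):
    """Traversal equivalent of the IF...THEN...GOTO walk: each step
    recursively looks up the next index, so successive accesses jump
    around memory."""
    i = ihc[c]
    while i != -1:
        yield i
        i = ll[i]
\end{verbatim}
The recursive lookup {\tt i = ll[i]} is precisely the dependence that defeats software pipelining, and it scatters the subsequent accesses to the particle data arrays.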
Additionally, if the particles' indices are not ordered in the list traversal direction then there will usually be a cache miss in finding the element {\tt ll(i)} within the linked list array. Within the particle data arrays, the result of the particle indices not being contiguous is another series of cache misses. Since a number of arrays must be accessed to recover the particle data, the problem is further compounded, and removal of the cache miss associated with the particle indices should improve performance significantly. The first step that may be taken to improve the situation is to remove the cache misses associated with searching through the linked list. To do this the list must be formed so that it is ordered. In other words, the first particle in cell {\tt j} is given by {\tt ihc(j)}, the second particle is given by {\tt ll(ihc(j))}, the third by {\tt ll(ihc(j)+1)} {\em et cetera}. This ordered list also allows the short range force calculation to be programmed more elegantly since the {\tt IF...THEN...GOTO} structure of the linked list can be replaced by a {\tt DO} loop. However, since there remains no guarantee that the particle indices will be ordered, the compiler is still heavily constrained in terms of the optimizations it may attempt, but the situation is distinctly better than for the standard linked list. Tests performed on this {\em ordered list} algorithm show that a 30\% improvement in speed is gained over the linked list code (see figure~\ref{timings}). Cache misses in the data arrays are of course still present in this algorithm. As has been discussed, minimizing cache misses in the particle data arrays requires accessing them with a contiguous index. This means that within a given chaining cell the particle indices must be contiguous. This can be achieved by reordering the indices of particles within chaining cells at each step of the iteration (although if particles need to be tracked a permutation array must be carried). This {\em particle reordering} idea was realized comparatively early and has been discussed in the literature \cite{AS95,D96,rob1,M98}. A similar concept has been applied by Springel \cite{VS05}, who uses Peano-Hilbert ordering of particle indices to ensure data locality. However, in P${}^3$M codes, prior to the implementation presented here, only Macfarland {\it et al.\thinspace} \cite{M98} and Anderson and Shumaker \cite{AS95} actually revised the code to remove linked lists; other codes simply reordered the particles every few steps to reduce the probability of cache misses and achieved a performance improvement of up to 45\% \cite{M98}. Since the adaptive refinements in {\small HYDRA} use the same particle indexing method, the particle ordering must be done within the data loaded into a refinement, {\it i.e.\ } hierarchical rearrangement of indices results from the use of refinements. \begin{figure}[t] \vspace{90mm} \special{psfile=timings.eps angle=0 hscale=100 vscale=100 voffset=-45 hoffset=-40} \caption{Effect of changing the list structure on the execution time per iteration of the entire algorithm. We show results for the standard linked list implementation, ordered list and ordered particles. The ordered list times were estimated by taking the ratio between the linked list time and the ordered list time at t=1 and scaling the rest of the linked list values by this factor.
The simulation was the Santa Barbara galaxy cluster simulation \cite{SB99} and it was conducted on a 266 MHz Pentium III PC.} \label{timings} \end{figure} The step-to-step permutation is straightforward to calculate: first the particle indices are sorted according to their z-coordinate, and then the particle array indices are simply changed accordingly. It is important to note that this method of particle bookkeeping removes the need for an index list of the particles (although in practice this storage is taken by the permutation array). All that need be stored is the particle index corresponding to the first particle in the cell and the number of particles in the cell. On a RISC system particle reordering is so effective that the speed of the {\small HYDRA} simulation algorithm {\em more than doubled}. For example, at the end of the Santa Barbara galaxy cluster simulation, the execution time was reduced from 380 seconds to 160 seconds on a 266 MHz Pentium III processor. On a more modern 2 GHz AMD Opteron, which has four times the L2 cache of a Pentium III, considerably better prefetch, as well as an on-die memory controller to reduce latency, we found the performance improvement for the final iterations to be a reduction in time from 29 to 17 seconds. This corresponds to a speed improvement of a factor of 1.7, which, while slightly less impressive than the factor of 2.4 seen on the older Pentium III, is still a significant improvement. A comparison plot of the performance of a linked list, ordered list and ordered particle code is shown in figure~\ref{timings}. \section{Parallel Strategy} Particle-grid codes, of the kind used in cosmology, are difficult to parallelize efficiently. The fundamental limitation to the code is the degree to which the problem may be subdivided while still averting race conditions and unnecessary buffering or synchronization. For example, the fundamental limit on the size of a computational atom in the PP code is effectively a chaining cell, while for the FFT routine it is a plane in the data cube. In practice, load balance constraints come into play earlier than theoretical limits as the work within the minimal atoms will rarely be equal (and can be orders of magnitude different). Clearly these considerations set an upper bound on the degree to which the problem can be subdivided, which in turn limits the number of processors that may be used effectively for a given problem size. The code is a good example of Gustafson's conjecture: a greater degree of parallelism may not allow arbitrarily increased execution speed for problems of fixed size, but should permit larger problems to be addressed in a similar time. At an abstract level, the code divides into essentially two pieces: the top level mesh and the refinements. Parallelization of the top level mesh involves parallelizing the work in each associated subroutine. Since an individual refinement may have very little work, a parallel scheme that seeks to divide work at all points during execution will be highly inefficient. Therefore the following division of parallelism was made: conduct all refinements of size greater than $N_r$ particles across the whole machine; for refinements with fewer than $N_r$ particles, use a list of all refinements and distribute one refinement to each processor (or thread) in a task farm arrangement. On the T3D the limiting $N_r$ was found to be approximately 32,768 particles, while on more modern machines we have found that 262,144 is a better limit.
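Schematically, the dispatch decision can be sketched as follows (illustrative only, with hypothetical names: {\tt refpar} stands for the whole-machine refinement solver and {\tt farmlist} for the task-farm queue):
\begin{verbatim}
c     Nr is the crossover size: refinements larger
c     than Nr are solved by all PEs together, while
c     smaller ones are queued for the task farm
c     (one refinement per thread).
      nfarm = 0
      do n = 1, nref
         if (npart(n) .gt. Nr) then
            call refpar(n)
         else
            nfarm = nfarm + 1
            farmlist(nfarm) = n
         endif
      enddo
\end{verbatim}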
In the following discussion the term processor element (PE) is used to denote a parallel execution thread. Since only one thread of execution is allotted per processor (we do not attempt load balancing via parallel slackness), this number is equivalent to the number of CPUs, and the two terms are used interchangeably. The call tree of the parallel algorithm is given in figure~\ref{ptree}. \begin{figure}[t] \vspace{100mm} \special{psfile=ptree2.eps angle=-90 hscale=54 vscale=54 voffset=300 hoffset=-20} \caption{Call tree of the HYDRA\_OMP algorithm. Only significant subroutines are shown for clarity. The call tree is similar to the serial algorithm except that a new class of routines is included for large refinements. Where possible conditional parallelism has been used to enable the reuse of subroutines in serial or parallel.} \label{ptree} \end{figure} \subsection{The OpenMP standard} The OpenMP API supports a number of parallel constructs, such as executing multiple serial regions of code in parallel (a single program multiple data model), as well as the more typical loop-based parallelism model (sometimes denoted `PAR DO's), where the entire set of loop iterations is distributed across all the PEs. The pragma for executing a loop in parallel, {\tt C\$OMP PARALLEL DO}, is placed before the {\tt DO} loop within the code body (a minimal example is sketched below). Specification statements are necessary to inform the compiler about which variables are loop `private' (each processor carries its own value) and which are `shared'. A full specification of the details for each loop takes only a few lines of code, preventing the `code bloat' often associated with distributed memory parallel codes. \subsection{Load balancing options provided by the OpenMP standard} We use loop level parallelism throughout our code. To optimize load balance in a given routine it is necessary to select the most appropriate iteration scheduling algorithm. The OpenMP directives allow for the following types of iteration scheduling: \begin{itemize} \item static scheduling - the iterations are divided into chunks (the size of which may be specified if desired) and the chunks are distributed across the processor space in a contiguous fashion. A cyclic distribution, or a cyclic distribution of small chunks, is also available. \item dynamic scheduling - the iterations are again divided up into chunks; however, as each processor finishes its allotted chunk, it dynamically obtains the next set of iterations via a master-worker mechanism. \item guided scheduling - similar to dynamic scheduling except that the chunk size decreases exponentially as each set of iterations is finished. The minimum number of iterations to be allotted to each chunk may be specified. \item runtime scheduling - this option allows the decision on which scheduling to use to be delayed until the program is run. The desired scheduling is then chosen by setting an environment variable in the operating system. \end{itemize} The {\small HYDRA} code uses both static and dynamic scheduling. \subsection{Parallelization of particle reordering and permutation array creation} While the step-to-step permutation is in principle simple to calculate, the creation of the list permutation array must be done carefully to avoid race conditions. An effective strategy is to calculate the chaining cell residence for each particle and then sort into bins of like chaining cells.
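Before detailing the binned rearrangement, the following fragment shows the minimal form of the parallel loop referred to above; it is a sketch rather than an excerpt from the code, and the loop body (a simple coordinate update) is purely illustrative:
\begin{verbatim}
C$OMP PARALLEL DO DEFAULT(SHARED), PRIVATE(i),
C$OMP& SCHEDULE(STATIC)
      do i = 1, npart
c        Update particle i; r and v are the combined
c        3-d coordinate and velocity arrays.
         r(1,i) = r(1,i) + v(1,i)*dt
         r(2,i) = r(2,i) + v(2,i)*dt
         r(3,i) = r(3,i) + v(3,i)*dt
      enddo
C$OMP END PARALLEL DO
\end{verbatim}
The iterations of the {\tt DO} loop are distributed across the PEs according to the scheduling options described above.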
Once particles have been binned in this fashion the rearrangement according to z-coordinates is a local permutation among particles in the chaining cell. Our parallel algorithm works as follows: \begin{enumerate} \item First calculate the chaining cell that each particle resides in, and store this in an array \item Perform an increasing-order global sort over the array of cell indices \item Using a loop over particle indices, find the first particle in each section of contiguous like-indices (the {\tt ihc} array) \item Use this array to establish the number of particles in each contiguous section (the {\tt nhc} array) \item Write the z-coordinates of each particle within the chaining cell into another auxiliary array \item Sort all the non-overlapping sublists of z-coordinates for all cells in parallel while at the same time permuting an index array to store the precise rearrangement of particle indices required \item Pass the newly calculated permutation array to a routine that will rearrange all the particle data into the new order \end{enumerate} The global sort is performed using parallel sorting by regular sampling \cite{ll93}, with a code developed in part by J. Crawford and C. Mobarry. This code has been demonstrated to scale extremely well on shared-memory architectures provided the number of elements per CPU exceeds 50,000. This is significantly less than our ideal particle load per processor (see section 6). For the sorts within cells, the slow step-to-step evolution of particle positions ensures data rearrangement is sufficiently local for this to be an efficient routine. Hence we expect good scaling for the sort routines at the level of granularity we typically use. \subsection{Parallelization of mass assignment and Fourier convolution} A race condition may occur in mass assignment because it is possible for PEs to have particles which write to the same elements of the mass array. The approaches to solving this problem are numerous, but consist mainly of two ideas: (a) selectively assign particles to PEs so that mass assignment occurs at grid cells that do not overlap, thus avoiding the race condition, or (b) use ghost cells and contiguous slabs of particles which are constrained in their extent in the simulation space. The final mass array must be accumulated by adding up all cells, including ghosts. Ghost cells offer the advantage that they allow the calculation to be load-balanced (the size of a slab may be adjusted) but require more memory. Controlling which particles are assigned does not require more memory but may cause a load imbalance. Because the types of simulation performed have particle distributions that can vary greatly, both of these algorithms have been implemented. \subsubsection{Using controlled particle assignment}\label{cpa} The particles in the simulation are ordered in the z-direction within the chaining cells. Because the chaining cells are themselves ordered along the z-axis (modulo their cubic arrangement), a naive solution would be to simply divide up the list of particles. However, this approach does not prevent a race condition occurring; it merely makes it less likely. In the CRAFT code the race condition was avoided by using the `{\em atomic update}' facility, which is a lock-fetch-update-store-unlock hardware primitive that allows fast updating of arrays where race conditions are present. Modern cache coherency protocols are unable to provide this kind of functionality.
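The update at the heart of the problem has, schematically, the following form (a sketch with illustrative names, not the HYDRA source; {\tt wx}, {\tt wy} and {\tt wz} hold the one-dimensional TSC weights of a particle whose cell is {\tt (ic,jc,kc)}):
\begin{verbatim}
c     Spread the mass of one particle over the 27
c     cells touched by the TSC stencil.  If two PEs
c     process particles whose stencils overlap, these
c     read-modify-write updates of rho can collide:
c     this is the race condition.
      do k = -1, 1
        do j = -1, 1
          do i = -1, 1
            rho(ic+i,jc+j,kc+k) = rho(ic+i,jc+j,kc+k)
     &        + pmass*wx(i)*wy(j)*wz(k)
          enddo
        enddo
      enddo
\end{verbatim}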
Using the linked/ordered list to control the particle assignment provides an elegant solution to the race condition problem. Since the linked list encodes the position of a particle to within a chaining cell, it is possible to selectively assign particles to the mass array that do not have overlapping writes. To ensure a good load balance it is better to use columns ($Ls\times B\times B$, where $Ls$ is the size of the chaining mesh and $B$ is a number of chaining cells) of cells rather than slabs ($Ls \times Ls \times B$). Since there are more columns than slabs a finer grained distribution of the computation can be achieved and thus a better load balance. This idea can also be extended to a 3-d decomposition; however, in simple experiments we have found this approach to be inefficient for all but the most clustered particle distributions (in particular, cache reuse is lowered by using a 3-d decomposition). Chaining mesh cells have a minimum width of 2.2 potential mesh cells in {\small HYDRA} and figure~\ref{chaining} displays a plot of the chaining mesh overlaid on the potential mesh. When performing mass assignment for a particle, writes will occur over all 27 grid cells found by the TSC assignment scheme. Thus providing a buffer zone of one cell is not sufficient to avoid the race condition since particles in chaining cells one and three may still write to the same potential mesh cell. A spacing of two chaining mesh cells is sufficient to ensure no possibility of concurrent writes to the same mesh cell. The ``buffer zones'' thus divide up the simulation volume into a number of regions that can be calculated concurrently and those that cannot. Moreover, there will need to be a series of barrier synchronizations as regions that can be written concurrently are finished before beginning the next set of regions. The size of the buffer zone means that there are two distinct ways of performing the mass assignment using columns: \begin{itemize} \item $Ls\times 1 \times 1$ columns in $3\times 3$ groups. Assign mass for particles in each of the columns simultaneously and then perform a barrier synchronization at the end of each group. Since the columns are in $3\times3$ groups there are nine barriers. \item $Ls\times 2\times 2$ columns which are grouped into $2\times 2$ groups. In this case the number of barriers is reduced to four, and if desired, the size of the column can be increased beyond two while still maintaining four barriers. However, load-imbalance under clustering argues against this idea. \end{itemize} See figure~\ref{2by2} for a graphical representation of the algorithm. \begin{figure}[t] \vspace{8cm} \special{psfile=2by22.eps hscale=90 vscale=90 hoffset=-80 voffset=-244} \caption{Cell grouping and sorting in the $2\times2$ configuration scheme.} \label{2by2} \end{figure} To improve load balance, a list of the relative work in each column (which can be evaluated before the barrier synchronization) is calculated by summing over the number of particles in the column. Once the workload of each column has been evaluated, the list of relative workloads is then sorted in descending order. The calculation then proceeds by dynamically assigning the list of columns to the PEs as they become free. The only load imbalance then possible is a wait for the last PE to finish, which should be working on a column with a low workload. Static, and even cyclic, distributions offer the possibility of more severe load imbalance.
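A sketch of the workload ordering follows (illustrative names only; {\tt nhc} is the per-cell particle count defined earlier, and {\tt sortd} stands in for a descending sort that permutes the column identifiers alongside the workloads):
\begin{verbatim}
c     Sum the particle count in every column of
c     chaining cells, then sort the columns into
c     descending order of work so that the heaviest
c     columns are dispatched to free PEs first.
      nc = 0
      do kc = 1, L
        do jc = 1, L
          nc = nc + 1
          colid(nc) = nc
          work(nc) = 0
          do ic = 1, L
            work(nc) = work(nc) + nhc(ic,jc,kc)
          enddo
        enddo
      enddo
      call sortd(nc, work, colid)
\end{verbatim}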
For portability reasons, we have parallelized the FFT by hand rather than relying on a threaded library such as provided by FFTW. The 3-d FFT is parallelized over `lines' by calling a series of 1-d FFTs. We perform the transpose operation by explicitly copying contiguous pieces of the main data array into buffers which have a long stride. This improves the data locality of the code considerably as the stride has been introduced into the buffer, which is a local array. The FFTs are then performed on the buffer, and values are finally copied back into the data arrays. The convolution which follows the FFT relies upon another set of nested loops in the axis directions. To enable maximum granularity we have combined the z- and y-directions into one larger loop which is then statically decomposed among the processors. Parallel efficiency is high for this method since, if the number of processors divides the size of the FFT grid, this amounts to a simple slab decomposition of the serial calculation. \subsection{Parallelization of the PP and SPH force components} The short range forces are accumulated by using 3 nested loops to sweep through the chaining mesh. As in mass assignment, a race condition is present due to the possibility of concurrent writes to the data arrays. Again, in the CRAFT code, this race condition was avoided by using the atomic update primitive. Because a particle in a given chaining mesh cell may write to its 26 nearest-neighbour cells, it is necessary to provide a two cell buffer zone. We can therefore borrow the exact same column decomposition that was used in mass assignment. Tests showed that of the two possible column algorithms discussed in section~\ref{cpa}, $Ls\times 2\times 2$ columns are more efficient than the $Ls\times 1\times1$ columns. The difference in execution time in unclustered states was negligible, but for highly clustered distributions (as measured in the Santa Barbara cluster simulation \cite{SB99}), the $Ls\times 2\times 2$ method was approximately 20\% faster. This performance improvement is attributable to the difference in the number of barrier synchronizations required by each algorithm (four versus nine) and also the better cache reuse of the $Ls\times 2\times 2$ columns. \subsection{Task farm of refinements} As discussed earlier, the smaller sub-meshes ($N_r\leq262,144$) are distributed as a task farm amongst the PEs. As soon as one processor becomes free it is immediately given work from a pool via the dynamic scheduling option in OpenMP. Load imbalance may still occur in the task farm if one refinement takes significantly longer than the rest and there are not enough refinements to balance the workload over the remaining PEs. Note also that the task farm is divided into levels: the refinements placed within the top level, termed `level one refinements', must be completed before calculating the `level two refinements' that they generate. However, we minimize the impact of the barrier wait by sorting refinements by the number of particles contained within them and beginning the calculation with the largest refinements. This issue emphasizes one of the drawbacks of a shared memory code: it is limited by the parallelism available, and one has to choose between distributing the workload over the whole machine or over single CPUs. It is not possible in the OpenMP programming environment to partition the machine into processor groups.
This is the major drawback that has been addressed by the development of an MPI version of the code \cite{T03}. \section{Considerations for NUMA architectures} Because of the comparatively low ratio of work to memory read/write operations, the code is potentially sensitive to memory latency issues. To test this sensitivity in a broad sense, we have examined the performance of the code for a range of problem sizes, from $2\times16^3$ particles to $2\times128^3$, the smallest of which is close to fitting in L2 cache. A strong latency dependence will translate into much higher performance for problem sizes resident in cache as opposed to those requiring large amounts of main memory. We also consider the performance for both clustered and unclustered particle distributions since the performance envelope is considerably different for these two cases. The best metric for performance is particle updates per second, since for the unclustered distribution P${}^3$M has an operation dependence dominated by ${\bf O}(N)$ factors, while in the clustered state the algorithm is dominated by the cost of the SPH solution, which also scales as ${\bf O}(N)$. The results are plotted in figure \ref{latency}, as a function of memory consumption. We find that the $2\times16^3$ simulations show equal performance for both the linked list and ordered particle code under both clustering states. However, for larger problem sizes the unclustered state shows a considerable drop-off in performance for the linked list code, while the ordered particle code begins to level off at the $2\times64^3$ problem size. The clustered distributions show little sensitivity to problem size, which is clearly indicative of good cache reuse and a lack of latency sensitivity. We conclude that the algorithm is comparatively insensitive to latency because the solution time is dominated largely by the PP part of the code which exhibits good cache reuse. \begin{figure}[t] \vspace{10cm} \special{psfile=latency.eps hscale=65 vscale=65 hoffset=10 voffset=-100} \caption{Performance of the code for clustered and unclustered distributions of sizes between $2\times16^3$ and $2\times128^3$, on a 2 GHz AMD Opteron. The logarithm of memory consumption in megabytes, rather than particle number, is plotted along the x-axis. Both the ordered particle (OP) and linked list (LL) versions of the code were used. The clustered state exhibits a similar level of RMS clustering to the Santa Barbara simulation discussed in section 6. Comparatively little dependence upon latency is observed.} \label{latency} \end{figure} The performance improvement seen for the ordered particle code is caused by its increased data locality. On NUMA architectures this has a direct benefit: although the penalty for distant memory fetches is large (several hundred nanoseconds), the cache reuse ensures this penalty is felt only rarely. We have found that the locality is sufficiently high to render direct data placement largely irrelevant on the SGI Origin. The only explicit data placement we perform is a block distribution of the particle data over PEs. The constant reordering of particles ensures that this is an effective distribution. For the remainder of the arrays we use the ``first touch'' placement paradigm, namely that the first PE to request a specific memory page is assigned it. Despite its simplicity, this scheme works very effectively.
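Under first touch, placement is controlled simply by which PE first writes to each page, so a common idiom (sketched below with illustrative names; this is not the HYDRA source) is to initialize the particle arrays using the same loop decomposition that later processes them:
\begin{verbatim}
c     Each PE touches, and therefore owns, the pages
c     belonging to the block of particles that it
c     will subsequently update.
C$OMP PARALLEL DO DEFAULT(SHARED), PRIVATE(i,m),
C$OMP& SCHEDULE(STATIC)
      do i = 1, npart
         do m = 1, 3
            r(m,i) = 0.0
            v(m,i) = 0.0
         enddo
      enddo
C$OMP END PARALLEL DO
\end{verbatim}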
Since the granularity of the chaining cells is smaller than the smallest memory page size, prefetching is a better strategy than memory page rearrangement. This works particularly effectively in the PP part of the algorithm where a comparatively large amount of work is done per particle. In this section of code we specify that two cache lines should always be retrieved for each cache miss, and we also allow the compiler to make further (aggressive) prefetching predictions. The net effect of this is to almost completely hide the latency on the Origin. This can be seen in the performance scaling, where excellent results are achieved up to 64 nodes (see section~\ref{perf}). However, there is one particularly noticeable drawback to NUMA architectures. A number of the arrays used within the PM solver are equivalenced to a scratch work space within a common block. First touch placement means that the pages of the scratch array are distributed according to the layout of the first array equivalenced to the common block. If the layout of this array is not commensurate with the layout of subsequent arrays that are equivalenced to the scratch area, then severe performance penalties result. Our solution has simply been to remove the scratch work space and suffer the penalty of increased memory requirements. \section{Performance}\label{perf} \subsection{Correctness Checking} Our initial tests of correctness of large simulations ($2\times256^3$), comparing serial to parallel runs, showed variation in global values, such as the total mass within the box, at the 0.01 percent level. However, this turned out to be a precision issue, as increasing the summation variables to double precision removed any variation in values. With these changes made, we have confirmed that the parallel code gives results identical to the serial code to within machine-level rounding errors. An extensive suite of tests of the {\small HYDRA} code is detailed in \cite{CT95} and \cite{rob2}. \subsection{Overall Speed} Our standard test case for benchmarking is the `Santa Barbara cluster' used in the paper by Frenk {\it et al.\thinspace} \cite{SB99}. This simulation models the formation of a galaxy cluster of mass $1.1\times 10^{15}$ \hbox{$\rm\thinspace M_{\odot}\; $} in an Einstein-de Sitter sCDM cosmology with parameters $\Omega_d=0.9$, $\Omega_b$=0.1, $\sigma_8$=0.6, $H_0=0.5$, and box size 64 Mpc. Our base simulation cube has $2\times64^3$ particles, which yields 15300 particles in the galaxy cluster, and we use an S2 softening length of 37 kpc. Particle masses are $6.25\times10^{10}$ \hbox{$\rm\thinspace M_{\odot}\; $} for dark matter and $6.94\times10^9$ \hbox{$\rm\thinspace M_{\odot}\; $} for gas. To prepare a larger data set we simply tile the cube as many times as necessary. An output from z=7.9 is used as an `unclustered' data set, and one from z=0.001 as a `clustered' data set. We were given access to two large SMP machines to test our code on: a 64 processor SGI Origin 3000 (O3k, hereafter) at the University of Alberta and a 64 processor Hewlett Packard GS1280 Alphaserver. Both of these machines have NUMA architectures, the O3k topology being a hypercube, while the GS1280 uses a two dimensional torus. The processors in the O3k are 400 MHz MIPS R12000 (baseline SPECfp2000 319) while the GS1280 processors are 21364 EV7 Alpha CPUs running at 1150 MHz (baseline SPECfp2000 1124).
There is an expected raw performance difference of over a factor of three between the two CPUs, although in practice we find the raw performance difference to be slightly over a factor of two. We conducted various runs with differing particle and data sizes to test scaling in both the strong (fixed problem size) and weak (scaled problem size) regimes. The parallel speed-up and raw execution times are summarized in tables \ref{tab1} \& \ref{tab2} and speed-up is shown graphically in figure \ref{scaling}. Overheads associated with I/O and start-up are not included. Further, we also do not include the overhead associated with placing refinements on the top level of the simulation, as this is only performed every 20 steps. With the exception of the clustered $2\times64^3$ run, parallel scaling is good (better than 73\%) to 32 processors on both machines for all runs. The clustered $2\times64^3$ simulation does not scale effectively because the domain decomposition is not sufficiently fine to deal with the load imbalance produced by this particle configuration. Only the largest simulation has sufficient work to scale effectively beyond 32 processors. To estimate the scaling of the $2\times256^3$ run we took the speed-up on 8 nodes of the GS1280 to be 7.9 (based upon the slightly lower efficiencies observed on the $2\times128^3$ run compared to the O3k), while on the O3k we took the speed-up to be 8.0. We then estimated the scaling from that point. Speed-up relative to the 8 processor value is also given in table 1, and thus values may be scaled as desired. \renewcommand{\arraystretch}{1.0} \begin{table} \caption{Parallel scaling efficiencies and wall clock timings for a full gravity-hydrodynamic calculation on the SGI Origin 3000. Results in parentheses indicate that the values are estimated.
The 64 processor results for the two smallest runs have been omitted because they resulted in a slowdown relative to the 32 processor run.} \begin{center} \begin{tabular}{c l c c c c c} \hline N & Mesh & PEs & Redshift & Wall Clock/s & Speed-up & Efficiency\\ \hline $2\times 64^3$ & $128^3$& 1 & 7.9 & 12.2 & 1.00 & 100\% \\ $2\times 64^3$ & $128^3$& 2 & 7.9 & 6.27 & 1.95 & 98\% \\ $2\times 64^3$ & $128^3$& 4 & 7.9 & 3.27 & 3.73 & 93\% \\ $2\times 64^3$ & $128^3$& 8 & 7.9 & 1.70 & 7.18 & 90\% \\ $2\times 64^3$ & $128^3$& 16 & 7.9 & 0.94 & 13.0 & 81\% \\ $2\times 64^3$ & $128^3$& 32 & 7.9 & 0.60 & 20.3 & 63\% \\ & & & & & & \\ $2\times 64^3$ & $128^3$& 1 & 0.001 & 53.9 & 1.00 & 100\% \\ $2\times 64^3$ & $128^3$& 2 & 0.001 & 27.2 & 1.98 & 99\% \\ $2\times 64^3$ & $128^3$& 4 & 0.001 & 14.0 & 3.85 & 88\% \\ $2\times 64^3$ & $128^3$& 8 & 0.001 & 9.27 & 5.81 & 73\% \\ $2\times 64^3$ & $128^3$& 16 & 0.001 & 8.16 & 6.61 & 41\% \\ $2\times 64^3$ & $128^3$& 32 & 0.001 & 8.10 & 6.65 & 21\% \\ & & & & & & \\ $2\times 128^3$ & $256^3$&1 & 7.9 & 105 & 1.00 & 100\% \\ $2\times 128^3$ & $256^3$&2 & 7.9 & 51.9 & 2.02 & 101\% \\ $2\times 128^3$ & $256^3$&4 & 7.9 & 26.8 & 3.91 & 98\% \\ $2\times 128^3$ & $256^3$&8 & 7.9 & 13.7 & 7.66 & 96\% \\ $2\times 128^3$ & $256^3$&16 & 7.9 & 7.28 & 14.4 & 90\% \\ $2\times 128^3$ & $256^3$&32 & 7.9 & 3.88 & 27.1 & 85\% \\ $2\times 128^3$ & $256^3$&64 & 7.9 & 2.53 & 41.5 & 65\% \\ & & & & & & \\ $2\times 128^3$ & $256^3$&1 & 0.001 & 407 & 1.00 & 100\% \\ $2\times 128^3$ & $256^3$&2 & 0.001 & 208 & 1.96 & 98\% \\ $2\times 128^3$ & $256^3$&4 & 0.001 & 105 & 3.88 & 97\% \\ $2\times 128^3$ & $256^3$&8 & 0.001 & 53.6 & 7.59 & 95\% \\ $2\times 128^3$ & $256^3$&16 & 0.001 & 27.6 & 14.7 & 92\% \\ $2\times 128^3$ & $256^3$&32 & 0.001 & 15.4 & 26.4 & 83\% \\ $2\times 128^3$ & $256^3$&64 & 0.001 & 13.5 & 30.1 & 47\% \\ & & & & & & \\ $2\times 256^3$ & $512^3$& 8 & 7.9 & 115 & (8.0) & (100\%)\\ $2\times 256^3$ & $512^3$& 16 & 7.9 & 57.5 & (16.0)[2.00] & (100\%)\\ $2\times 256^3$ & $512^3$& 32 & 7.9 & 30.9 & (29.8)[3.72] & (93\%)\\ $2\times 256^3$ & $512^3$& 64 & 7.9 & 16.7 & (55.1)[6.89] & (86\%)\\ & & & & & & \\ $2\times 256^3$ & $512^3$& 8 & 0.001 & 484 & (8.0) & (100\%)\\ $2\times 256^3$ & $512^3$& 16 & 0.001 & 245 & (15.8)[1.98] & (100\%)\\ $2\times 256^3$ & $512^3$& 32 & 0.001 & 130 & (29.8)[3.72] & (93\%)\\ $2\times 256^3$ & $512^3$& 64 & 0.001 & 64.7 & (59.8)[7.48] & (93\%)\\ \hline \end{tabular} \end{center} \vspace*{.6cm} \noindent \label{tab1} \end{table} \linespread{0.75} \begin{table} \caption{Parallel scaling efficiencies and wall clock timings for a full gravity-hydrodynamic calculation on the HP GS1280.
Results in parentheses indicate that the values are estimated.} \begin{center} \begin{tabular}{c l c c c c c} \hline N & Mesh & PEs & Redshift & Wall Clock/s & Speed-up & Efficiency\\ \hline $2\times 64^3$ & $128^3$& 1 & 7.9 & 5.13 & 1.00 & 100\% \\ $2\times 64^3$ & $128^3$& 2 & 7.9 & 2.50 & 2.06 & 103\% \\ $2\times 64^3$ & $128^3$& 4 & 7.9 & 1.33 & 3.86 & 97\% \\ $2\times 64^3$ & $128^3$& 8 & 7.9 & 0.75 & 6.84 & 86\% \\ $2\times 64^3$ & $128^3$& 16 & 7.9 & 0.37 & 13.8 & 86\% \\ $2\times 64^3$ & $128^3$& 32 & 7.9 & 0.20 & 25.7 & 80\% \\ $2\times 64^3$ & $128^3$& 64 & 7.9 & 0.19 & 27.2 & 43\% \\ & & & & & & \\ $2\times 64^3$ & $128^3$& 1 & 0.001 & 20.7 & 1.00 & 100\% \\ $2\times 64^3$ & $128^3$& 2 & 0.001 & 10.5 & 1.98 & 99\% \\ $2\times 64^3$ & $128^3$& 4 & 0.001 & 5.38 & 3.84 & 96\% \\ $2\times 64^3$ & $128^3$& 8 & 0.001 & 3.94 & 5.25 & 67\% \\ $2\times 64^3$ & $128^3$& 16 & 0.001 & 3.21 & 6.45 & 40\% \\ $2\times 64^3$ & $128^3$& 32 & 0.001 & 2.99 & 6.92 & 22\% \\ $2\times 64^3$ & $128^3$& 64 & 0.001 & 2.80 & 7.39 & 12\% \\ & & & & & & \\ $2\times 128^3$ & $256^3$&1 & 7.9 & 41.2 & 1.00 & 100\% \\ $2\times 128^3$ & $256^3$&2 & 7.9 & 21.0 & 1.96 & 98\% \\ $2\times 128^3$ & $256^3$&4 & 7.9 & 11.0 & 3.75 & 94\% \\ $2\times 128^3$ & $256^3$&8 & 7.9 & 5.92 & 6.96 & 87\% \\ $2\times 128^3$ & $256^3$&16 & 7.9 & 3.26 & 12.7 & 79\% \\ $2\times 128^3$ & $256^3$&32 & 7.9 & 1.77 & 23.3 & 73\% \\ $2\times 128^3$ & $256^3$&64 & 7.9 & 1.06 & 38.9 & 61\% \\ & & & & & & \\ $2\times 128^3$ & $256^3$&1 & 0.001 & 154 & 1.00 & 100\% \\ $2\times 128^3$ & $256^3$&2 & 0.001 & 77.7 & 1.98 & 99\% \\ $2\times 128^3$ & $256^3$&4 & 0.001 & 39.7 & 3.88 & 97\% \\ $2\times 128^3$ & $256^3$&8 & 0.001 & 20.7 & 7.44 & 93\% \\ $2\times 128^3$ & $256^3$&16 & 0.001 & 10.9 & 14.1 & 88\% \\ $2\times 128^3$ & $256^3$&32 & 0.001 & 6.2 & 24.8 & 76\% \\ $2\times 128^3$ & $256^3$&64 & 0.001 & 5.3 & 29.3 & 46\% \\ & & & & & & \\ $2\times 256^3$ & $512^3$& 8 & 7.9 & 49.5 & (7.9) & (99\%)\\ $2\times 256^3$ & $512^3$& 16 & 7.9 & 26.4 & (14.9)[1.88] & (93\%)\\ $2\times 256^3$ & $512^3$& 32 & 7.9 & 13.8 & (28.4)[3.59] & (89\%)\\ $2\times 256^3$ & $512^3$& 64 & 7.9 & 8.13 & (48.1)[6.09] & (75\%)\\ & & & & & & \\ $2\times 256^3$ & $512^3$& 8 & 0.001 & 215 & (7.9) & (99\%)\\ $2\times 256^3$ & $512^3$& 16 & 0.001 & 110 & (15.4)[1.95] & (96\%)\\ $2\times 256^3$ & $512^3$& 32 & 0.001 & 56.7 & (29.9)[3.79] & (93\%)\\ $2\times 256^3$ & $512^3$& 64 & 0.001 & 30.0 & (56.6)[7.16] & (88\%)\\ \hline \end{tabular} \end{center} \vspace*{.6cm} \noindent \label{tab2} \end{table} \linespread{1.0} \begin{figure}[t] \vspace{6cm} \special{psfile=scaling.eps hscale=34 vscale=34 hoffset=5 voffset=-50} \special{psfile=origin.eps hscale=34 vscale=34 hoffset=190 voffset=-50} \caption{ Parallel speed-up for various particle configurations and processor counts in the strong scaling regime for the GS1280 and O3k. Open polygons correspond to the z=7.9 data set, pointed stars to the z=0.001 data set. Dashed lines correspond to the $2\times64^3$ data, dotted lines to the $2\times128^3$ and the thin solid line to the $2\times256^3$ data. Perfect linear scaling is given by the thick solid line. Provided there is sufficient parallel work available, scaling is excellent to 32 processors (only the clustered $2\times64^3$ run exhibits a notable lack of scaling).
Beyond 32 processors only the $2\times256^3$ runs have sufficient work to scale well.} \label{scaling} \end{figure} To quantify our results further we summarize the performance of the code using a popular performance metric for cosmological codes, namely the number of particle updates per second. As a function of the number of nodes within the calculation this also gives a clear picture of the scaling achieved. Because the simulation results we obtained were run using the combined gravity-hydrodynamic solver it is necessary for us to interpolate the gravitational speed. To do this we calculated the ratio of the code speed with and without hydrodynamics, and also without the PP correction, on 1 CPU of our local GS160 Alphaserver, and on 1 CPU of the O3k. To ensure this approximation is as reasonable as possible we calculated the ratios for both the z=7.9 and z=0.001 data sets. Relative to the speed obtained for the combined solver, the gravity-only solver was found to be 1.63(1.29) times faster for the z=7.9 data set and 1.84(1.49) times faster for the z=0.001 data set, for the GS1280 (and O3k). The PM speed was found to be 2.4(2.5) times faster for the z=7.9 data set and 9.21(10.3) times faster for the z=0.001 data set. In figure \ref{pups} we show the estimated number of gravitational updates per second achieved in both the clustered and unclustered states of the $2\times128^3$ simulation (other simulation sizes show almost identical speeds) on the GS1280. The clustered state is approximately three times slower than the unclustered state for all simulation sizes. To provide comparison to other published work we have also included results presented by Dubinski {\it et al.\thinspace} for a $256^3$ simulation conducted on a $512^3$ grid using a distributed memory Tree-PM code (``GOTPM''). Although a direct comparison of speed is not as instructive as might be hoped, since both the machine specifications and particle distributions differ, it is intriguing that the raw PM speeds of the two codes are very similar, with our code showing a moderate speed advantage (between 2.4 and 1.8 times faster depending on clustering). Comparing the speed of the full solutions (for the $2\times256^3$ simulation) in the clustered state shows HYDRA to be 2.3 times faster, while for the initial configuration it is 3.9 times faster. Since Tree-PM codes reportedly have a roughly constant cycle time with clustering \cite{B02}, this highlights the fact that there is still significant room for improving their execution on unclustered data sets. It is also worth noting that, as yet, our implementation of AP${}^3$M lacks any multiple time-step capability, and implementing a mechanism that steps refinements within different time bins offers potentially very significant performance gains. Such an integrator would bear similarities to the mixed-variable symplectic integrators used in planetary integrations \cite{WH92}. \begin{figure}[t] \vspace{6cm} \special{psfile=allgs1280.ps hscale=34 vscale=34 hoffset=5 voffset=-50} \special{psfile=allo3k.ps hscale=34 vscale=34 hoffset=190 voffset=-50} \caption{Performance of the gravitational solver measured by particle updates per second (PUPS) on the GS1280 and O3k. Values are given for both the raw PM speed as well as the full AP${}^3$M solution, with the shaded area denoting the performance region for the code from unclustered to clustered distributions.
For comparison, data for the Tree-PM (``GOTPM'') code of Dubinski {\it et al.\thinspace} for a data set with comparable mass resolution, at an expansion epoch of z=1, are provided. Note that the GOTPM figures are for a less clustered distribution than our z=0.001 data set; however, the processors used were approximately 14\% slower than those of the GS1280, and they used a high bandwidth Gigabit ethernet interconnect. Note that even the PM algorithms are not truly comparable since HYDRA uses a ten point difference for forces compared to the four point difference used in GOTPM. } \label{pups} \end{figure} \subsection{Timing breakdown} Although overall performance is the most useful measure of utility for the code, analysis of the time spent in certain code sections may elucidate performance bottlenecks. Hence, for timing purposes, we break the code into three main sections: the top level PM, the top level PP and the refinement farm. The speed of list making and particle book-keeping is incorporated within these sections. The execution time is initially dominated by the solution time for the top level grid, but the growth of clustering makes the solution time strongly dependent upon the efficiency of the refinement farm. While the top level solution (necessarily) involves a large number of global barriers, the refinement farm only uses a small number and performs a large number of independent operations. The only exception is a critical section where the global list of refinements is updated; however, we ensure the critical section is only entered if a refinement has indeed derived new refinements. Thus, potentially, the refinement farm can scale better than the top level solution. In figure \ref{farmvstop} we plot the relative scaling of the top level solution compared to the refinement farm for several different particle numbers. Provided sufficient work is available for distribution, the refinement farm is seen to scale extremely well, with parallel efficiencies of 99\% and 83\% observed for the $2\times256^3$ data set on 64 processors for the O3k and GS1280 respectively. \begin{figure}[t] \vspace{6cm} \special{psfile=gs1280farmvtop.eps hscale=34 vscale=34 hoffset=0 voffset=-50} \special{psfile=o3kfarmvtop.eps hscale=34 vscale=34 hoffset=190 voffset=-50} \caption{Comparison of parallel speed-up for the refinement farm versus the top level solver for different particle and processor counts. Scaling of the refinement farm on the O3k is better than the GS1280 in both cases, and is almost perfect for the largest run out to 64 processors. The refinement farm does not scale as well in the $2\times128^3$ run as there is insufficient parallel work to scale out to 64 processors; however, scaling to 32 is excellent.} \label{farmvstop} \end{figure} \section{Summary and Discussion} Conducting high resolution simulations of cosmological structure formation necessitates the use of parallel computing. Although distributed architectures provide an abundance of cheap computing power, the programming model for distributed systems is fundamentally complex. Shared memory simplifies parallel programming greatly since the shared address space means that only the calculation itself need be distributed across nodes. In this paper we have discussed a code for parallel shared memory computers that exhibits only marginally higher complexity than a serial version of the code and which also exhibits excellent performance.
Additional constructs for parallel execution introduce only a small (10\%) penalty for running on 1 node compared to the serial code. The code does have some problems with regard to load balancing; in particular, a deficit in performance occurs when a refinement is too large to be calculated as part of the task farm but is not large enough to be efficient across the whole machine. However, these situations are comparatively rare. The poor scaling of SPH under heavy clustering is the most significant cause of load imbalance. In particular, if the heavy calculational load is confined to one refinement that is part of the task farm, all threads will block until this refinement is completed. The most satisfactory solution to this problem is to substitute an alternative algorithm for the SPH in high density regions. We will present details of an algorithm that improves the SPH cycle time for high density regions elsewhere (Thacker {\it et al.\thinspace} in prep). Most of the performance limitations can be traced to applying a grid code in a realm where it is not suitable. As has been emphasized before, treecodes are particularly versatile, and can be applied to almost any particle distribution. However, for periodic simulations they become inefficient since Ewald's method must be used to calculate periodic forces. FFT-based grid methods calculate the periodic force implicitly, and exhibit particularly high performance for homogeneous particle distributions under light to medium clustering. Highly clustered (or highly inhomogeneous) particle distributions are naturally suited to the multi-timestepping capability of treecodes. Although we see scope for introducing a multi-time stepping version of AP${}^3$M where sub-grids are advanced in different time step bins, it is unclear in detail what efficiencies could be gained. There are clearly parts of the algorithm, such as mass assignment, that are unavoidably subject to load imbalances. We expect that, since the global grid update would be required infrequently, the global integrator can still be made efficient. An efficient implementation of multiple time-steps is the last area where an order of magnitude improvement in simulation time can be expected for this class of algorithm. In terms of raw performance, the code speed is high relative to the values given by Dubinski et al. On the GS1280 the full solution time for the unclustered distribution even exceeds that of the PM solution quoted for GOTPM on 64 processors. AP${}^3$M has been criticized previously for exhibiting a cycle time that fluctuates depending upon the underlying level of clustering. The data we have presented here show that the range in speeds is comparatively small (a factor of 4). We would also argue that since the cost of the short range correction is so small at early times, this criticism is misplaced. While recent implementations of Tree-PM have an approximately constant cycle time irrespective of clustering, the large search radius used in the tree correction leads to the tree part of the algorithm dominating execution time for all stages of the simulation. Conversely, only at the end of the simulation is this true for HYDRA. Arguments have also been presented that suggest the PM cycle introduces spurious force errors that can only be corrected by using a long range PP correction (out to 5 PM cells).
It is certainly true that PM codes implemented with the so-called `Poor Man's Poisson solver' \cite{BR69} and Cloud-in-cell interpolation do suffer from large ($\sim$50\%) directional errors in the force around 2-3 grid spacings. However, as has been shown, first by Eastwood (see \cite{He81} for references) and more recently by Couchman, a combination of higher order assignment functions, Q-minimized Green's functions, and directionally optimized differencing functions can reduce errors in the inter-particle forces to sub 0.3\% levels (RMS). Surprisingly, although CIC gives a smooth force law (as compared to NGP), it does not reduce the angular anisotropy of the mesh force. Indeed, in two dimensions, moving from CIC to TSC interpolation reduces directional errors from 50\% to 6\%, and Q-minimization of the Green's function reduces the anisotropy to sub 0.5\% levels \cite{E74}. Furthermore, the technique of interlacing can be used to improve the accuracy of the PM force still further, but the additional FFTs required for this method rapidly lead to diminished returns. To date we have used this code to simulate problems ranging from galaxy formation to large-scale clustering. As emphasized in the introduction, the simple programming model provided by OpenMP has enabled us to rapidly prototype new physics algorithms, which in turn has led to the code being applied across a diverse range of astrophysics. Developing new physics models with this code takes a matter of hours, rather than the days typical of MPI coding. We plan to make a new version of the code, incorporating more streamlined data structures and minor algorithmic improvements, publicly available in the near future. \section{Acknowledgments} We thank an anonymous referee for comments which improved the paper. Runs on the GS1280 machine were performed on our behalf by Andrew Feld of Hewlett Packard. We thank John Kreatsoulas for arranging time for us on this machine. Figures 1, 2, 4 and 5 were prepared by Dr L. Campbell. RJT is funded in part by a CITA National Fellowship. HMPC acknowledges the support of NSERC and the Canadian Institute for Advanced Research. SHARCNET and WestGrid computing facilities were used during this research.
\section{Introduction} The $\Lambda_b^0$, consisting of $b$, $u$ and $d$ quarks, is the lowest-lying $b$-flavored baryon, about which comparatively little is known. Recently the CDF collaboration reported an improved measurement of the $\Lambda_b^0$ mass \cite{LP2003} of 5620.4 $\pm$ 1.6 $\pm$ 1.2 MeV. The lifetime has long been measured to be somewhat lower than theoretical expectations \cite{Lblife}. There is, however, no measurement available on the direct production of exclusive $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ in $e^+e^-$ annihilation. Such events would be very useful for establishing absolute branching ratios and other properties. CLEO has accumulated data using $e^+e^-$ collisions in the center-of-mass energy range from 11.227 to 11.383 GeV, close to or just above the $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production threshold. It is possible to observe a resonant signal, similar to the $\Upsilon (4S)$ for $B^+$ and $B^0$ mesons, or just an increase in relative production above threshold. We report here limits on such resonant or non-resonant production. \section{Data and Monte Carlo Simulated Sample} \label{sec:two} The CLEO III detector is described in detail elsewhere \cite{CLEOIII_d} \cite{CLEOIII_RICH}. The inner part of the detector is surrounded by a 1.5 T solenoidal magnetic field. Moving radially outward from the region near the $e^+e^-$ interaction vertex, it consists of a silicon strip based vertex detector and a drift chamber used to measure the momenta of charged tracks based on their curvature. Beyond the drift chamber is a Ring Imaging Cherenkov Detector, RICH, used to identify charged hadrons, followed by an Electromagnetic Calorimeter, EC, consisting of nearly 8000 CsI crystals. Next to the EC there is the solenoidal coil followed by an iron return path with wire chambers interspersed in 3 layers to provide muon identification. This study is based on the total 710 pb$^{-1}$ data sample that was acquired at 3 MeV intervals between center-of-mass energies, $E_{CM}$, of 11.227 and 11.383 GeV, so as to be close to or above the threshold for $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production. The luminosity at each of these scan points varies from 14 to 20 pb$^{-1}$. In addition, there are data points taken at $E_{CM}$ values of 11.150 and 11.203 GeV, with integrated luminosities of 70 and 120 pb$^{-1}$, respectively. We also use data taken in the four-flavor continuum below the $\Upsilon (4S)$ to measure the $b\overline{b}~$ cross section above the $\Upsilon (4S)$. For the Monte Carlo, MC, study of the high energy data, we generated five times as many hadronic $q\overline{q}~$ events as are contained in our data sample at each beam energy. Events were generated separately for ``light'' four-flavor continuum ($c, s, u, d$) and $b\overline{b}~$ continuum events and then combined in the expected 10:1 ratio absent any resonance production. The decay channels and the branching fractions of the $\Lambda_b$ are less well known than those of the $B^0$ and $B^+$ mesons. We list the $\Lambda_b$ decay modes and branching fractions we used for the signal Monte Carlo in Table~\ref{tab:smc}. For the $\Lambda_b^0 \to \Lambda_c^+ \ell^- \bar{\nu}$ branching fraction we re-scaled the $B^0 \to X \ell \bar{\nu}$ branching fraction by the ratio of lifetimes, $\tau(\Lambda_b)/\tau(B^0)$. The entries denoted by *$q\bar{q}$* indicate that the processes are generated using a fragmentation process for the quark-antiquark pair.
\begin{table}[htb] \begin{center} \caption{\label{tab:smc} $\Lambda_b$ decay modes and branching fractions used in the Monte Carlo simulation.} \begin{tabular}{c c c c} \hline\hline Decay modes & Branching fraction (\%)\\ \hline $\Lambda_b \to\Lambda^+_c e^- \nu_e$ & 8.4 \\ $\Lambda_b \to\Lambda^+_c \mu^- \nu_{\mu}$ & 8.4 \\ $\Lambda_b \to\Lambda^+_c \pi^-$ & 4.2 \\ $\Lambda_b \to\Lambda^+_c \rho^-$ & 1.0 \\ $\Lambda_b \to\Lambda^+_c a_1^-$ & 2.1 \\ $\Lambda_b \to\Lambda^+_c D^-_s$ & 2.1 \\ $\Lambda_b \to\Lambda^+_c D^{*-}_s $ & 4.2 \\ $\Lambda_b \to\Lambda \eta_c$ & 0.1 \\ $\Lambda_b \to\Lambda J/\psi$ & 0.5 \\ $\Lambda_b \to\Lambda^+_c \pi^-\pi^+\pi^-$ & 2.1 \\ $\Lambda_b \to \Lambda K^0 \pi^-\pi^-\pi^+\pi^-$ & 2.1 \\ $\Lambda_b \to p^+D^0 \pi^-$ & 2.1 \\ $\Lambda_b \to\Lambda^+_c *d\overline{u}* $ & 44.9 \\ $\Lambda_b \to \Sigma^+_c *d\overline{u}* $ & 8.4 \\ $\Lambda_b \to \Omega^+_{cc} *d\overline{u}* $ & 7.3 \\ $\Lambda_b \to p^+ *d\overline{u}* $ & 1.1 \\ $\Lambda_b \to \Xi'^+_c *d\overline{u}* $ & 1.0 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} \section{Event Selection} The major backgrounds to $\Lambda_b$ are non-$b\overline{b}~$ type hadronic events, two-photon events ($e^+e^-\to e^+e^- X$) and $\tau^+\tau^-$ pairs. To suppress these backgrounds we require the following hadronic event selection criteria: (1) At least five charged tracks; a track candidate is accepted if the cosine of its angle with respect to the beam line is less than 0.9 in magnitude and it has at least half of the potential tracking chamber hits along its length. This requirement rejects 81\% of the $\tau^+\tau^-$ pairs. \begin{figure}[hbt] \epsfig{figure=two_phot1.eps,height=3in} \epsfig{figure=mc_r2_1.eps,height=3in} \vspace{.1cm} \caption{(a) $E_{vis}/E_{beam}$ for data above $\Lambda_b$ threshold (triangles), five flavor continuum MC (solid) and simulated two-photon events (circles). (b) $R_2$ distribution for $b\overline{b}~$ (dashed) and non-$b\overline{b}~$ (solid) type events.} \label{fig:evis2} \end{figure} (2) The total visible energy, $E_{vis}$, is required to be greater than the beam energy, $E_{beam}$. $E_{vis}$ receives contributions from both charged tracks and unmatched neutral energy clusters greater than 30 MeV. This requirement helps suppress two-photon events. Fig.~\ref{fig:evis2}(a) shows the $E_{vis}/E_{beam}$ distributions for data, five flavor Monte-Carlo continuum and simulated two-photon events \cite{two-photon}. Imposing the requirement $E_{vis}>E_{beam}$ reduces the two-photon background by 75\% with a small (3\%) loss of hadronic events. (3) The ratio of the 2$^{nd}$ and 0$^{th}$ Fox-Wolfram moments, $R_2$, is less than 0.25 \cite{r2paper}. Fig.~\ref{fig:evis2}(b) shows MC simulated distributions of $R_2$ for both $b\overline{b}~$ and non-$b\overline{b}~$ continuum events. Both areas are normalized to unity. Requiring $R_2 < 0.25$ selects the more spherically shaped events in momentum space and greatly enhances the $b\overline{b}$ fraction, by rejecting 65\% of four-flavor continuum events while losing only 8\% of the $b\overline{b}$ events. To subtract the four-flavor continuum background we use data taken at an $E_{CM}$ 30 MeV below the $\Upsilon (4S)$ mass. Since we make a specific cut on $R_2$ we need to take into account that the shape of the $R_2$ distribution can change when the $E_{CM}$ changes. The $R_2$ distribution from below-$\Upsilon(4S)$ data is compared with the distribution using data taken in the $\Lambda_b$ scan region in Fig.~\ref{fig:boost}(a).
The data are normalized by luminosity and $1/s$, where $s$ is the square of the center-of-mass energy. The distributions differ in two respects. The first is the obvious enhancement at small $R_2$ values in the $\Lambda_b$ scan region, giving evidence for $b\overline{b}$ production. The second is the disagreement in shape at values of $R_2~>~0.5$, where $b\overline{b}$ production is absent. We confirm this change in shape with energy by comparing $\Upsilon(4S)$ ``on-resonance'' data and below-$\Upsilon(1S)$ resonance data ($E_{CM}$=9.43 GeV) in Fig.~\ref{fig:boost}(b). The subtracted spectra show an anomalous peak near $R_2 = 0.5$. The number of events in this peak can be as large as $\sim$30\% of the total number of $b\overline{b}~$ events in the higher $E_{CM}$ region. Thus, it is important to transform the below-$\Upsilon(4S)$ resonance data correctly in order to subtract the background properly when we apply a tight $R_2$ requirement. Simple kinematic considerations suggest that $R'_2(E')/R_2(E) \sim E'/E$, where $E'>E$. The boundary conditions, namely that at $R_2$ values of both 0 and 1 the initial and corrected distributions be equal, result in a simple parameterization of the corrected, or ``boosted'', $R_2$ distribution: \begin{equation} R'_2(E')= \frac{E'}{E} R_2(E)+\left( 1-\frac{E'}{E} \right) R^2_2(E)~. \label{eq:boost} \end{equation} This expression describes the energy dependence of the $R_2$ shape very well. In Fig.~\ref{fig:boosted} we compare the boosted $R_2$ distribution for below-$\Upsilon(4S)$ data, normalized by luminosity and $1/s$, with the same distribution for the high energy data. The distributions match above $R_2$ of 0.5, as required. \begin{figure}[hbt] \epsfig{figure=Lam-off4S1.eps,height=3in} \epsfig{figure=4S-1S1.eps,height=3in} \caption{The $R_2$ distribution above $\Lambda_b$ threshold compared with below-$\Upsilon(4S)$ data (a) and $\Upsilon(4S)$ on resonance data compared with below-$\Upsilon(1S)$ data (b). Circles show the subtracted distributions.} \label{fig:boost} \end{figure} \begin{figure}[hbt] \centerline{\epsfig{figure=boosted1.eps,height=3in}} \caption{$R_2$ distribution for data at one energy point above $\Lambda_b$ threshold compared with below-$\Upsilon(4S)$ data after the boost.} \label{fig:boosted} \end{figure} We have several strategies for observing the production of $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ events. One possibility is to look for enhancements in (1) the $b\overline{b}~$ cross-section. Another is to look for an increase in (2) $\Lambda$ or (3) anti-proton production. We do not use protons because there is a large background rate from hadron interactions in the beam pipe and from residual beam gas collisions. $\Lambda$'s are promising because we expect that $\Lambda_b^0\to\Lambda_c X$ has a large branching ratio, $\sim$96\%, and $\Lambda_c^+\to \Lambda X$ is approximately 50\%. Detecting anti-protons is very promising because $\Lambda_b^0$ decays always produce either a proton or a neutron. In the case of non-resonant $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production we can expect that the cross-section will increase from zero at threshold to some constant fraction of the total $b\overline{b}~$ cross-section. In order to ascertain an optimal search strategy, we assume this fraction is 7.9\%, as predicted by the JETSET 7.3 Monte Carlo model \cite{lambda_ratio}. This is consistent with the PDG value for $b\overline{b}~$ $\to$ baryon of 10\% \cite{PDG}.
Further support for this assumed baryon fraction comes from the ratio of $\Lambda_c \overline{\Lambda}_c$ to $c\overline{c}$ rates. As input to this estimate we use a measured value of $\mathcal{B}(\Lambda_c^+ \to pK^-\pi^+)\times\sigma(\Lambda_c^+) = (10.0\pm1.5\pm1.5)$ pb \cite{cleo_lambdac}, from our below-$\Upsilon(4S)$ continuum data sample. We take the $c\overline{c}$ cross section as 4/10 of the total hadronic cross section, implying $\sigma(c\overline{c})=1.12\pm0.02$ nb \cite{CLEOR}, and we use the PDG mean value for $\mathcal{B}(\Lambda_c^+ \to pK^-\pi^+)=(5.0\pm 1.3)\%$ \cite{PDG}, yielding the ratio $\Lambda_c\overline{\Lambda}_c/c\overline{c} = (8.9\pm 3.0)$\%. The relative size of the $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ component for our different search strategies is shown in Fig.~\ref{signoise}(a). Here we normalized the MC simulated five-flavor visible hadronic cross section, defined here as ``continuum'' $udsc$ and $b$, to unity, and then added the signal $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ to the total $udscb$ cross section (i.e., the $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ enhancement here represents an additional 7.9\% above the expected inclusive $b\overline{b}~$ hadronic cross-section, rather than simply an additional channel available to $b\overline{b}~$ hadronization). $\Lambda$'s have the highest relative yield, closely followed by anti-protons. We optimize our search criteria by maximizing the signal divided by the square root of the background, $S/\sqrt{B}$, for our different search methods. The results are summarized in Fig.~\ref{signoise}(b), where we show the statistical significance of the signal obtained with the different analysis strategies for different $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ cross-sections (statistical errors only). \begin{figure}[hbt] \begin{center} \epsfig{figure=vis_yield_hand1.eps,height=2.8in} \epsfig{figure=s2back1.eps,height=2.8in} \caption{(a) Relative yield of the $udsc$ (lower), $b$ (middle) and $\Lambda_b$ (upper) visible cross section for the inclusive selection of $b\overline{b}~$, $\overline{p}$ and $\Lambda$ assuming a 7.9\% increase of the total $b\overline{b}~$ cross section above $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ threshold. (b) ${\rm Signal/\sqrt{Background}}$ for different analysis strategies and cross-sections.} \label{signoise} \end{center} \end{figure} Our studies indicate that baryon production (namely anti-protons and $\Lambda$'s) is the most sensitive measure of $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production. However, the systematic uncertainties in $\Lambda_b \to$ protons and $\Lambda_b \to \Lambda$ diminish their sensitivity relative to inclusive $b\overline{b}$ production. We also considered identifying $\Lambda$'s and protons with an additional lepton in the event, but these methods offer less significance. The efficiencies for detecting hadronic events, and more importantly, for detecting events with one or more protons are listed in Table~\ref{tab:eff}; their evaluation will be discussed in more detail in the next section. We use both charged particle ionization loss in the drift chamber (dE/dx) and RICH information to identify anti-protons. The RICH is used for momenta larger than 1 GeV. Information on the angle of detected Cherenkov photons is translated into a likelihood of a given photon being due to a particular particle. Contributions from all photons associated with a particular track are then summed to form an overall likelihood denoted as ${\cal L}_i$ for each particle hypothesis.
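The $(8.9\pm 3.0)\%$ ratio quoted above follows from simple error propagation on the three inputs. A back-of-the-envelope check (the division by two, converting the inclusive per-charge $\Lambda_c$ yield into a pair rate, is our assumption, made to reproduce the quoted central value):
\begin{verbatim}
import math

bxs, dbxs = 10.0, math.hypot(1.5, 1.5)  # B(Lc -> pKpi) x sigma(Lc)  [pb]
br,  dbr  = 0.050, 0.013                # B(Lambda_c+ -> p K- pi+)
scc, dscc = 1120.0, 20.0                # sigma(c cbar)  [pb]

sigma_lc = bxs / br                     # inclusive Lambda_c cross section
ratio = sigma_lc / 2.0 / scc            # assumed: /2 to count pairs, not baryons
rel = math.sqrt((dbxs/bxs)**2 + (dbr/br)**2 + (dscc/scc)**2)
print("(%.1f +- %.1f)%%" % (100*ratio, 100*ratio*rel))   # -> (8.9 +- 3.0)%
\end{verbatim}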
To differentiate between kaon and proton candidates with the RICH, we use the likelihood difference $-\log({\cal L}_{K})+\log({\cal L}_{proton})$. This cut is set at $-4$. To utilize the dE/dx information we calculate $\sigma_{K}$ as the difference between the expected ionization loss for a kaon and the measured loss, divided by the measurement error. $\sigma_{proton}$ is defined in the same manner using the expected ionization for a proton. We use both the RICH and dE/dx to select anti-proton candidates in the following manner: (a) If neither the RICH nor dE/dx information is available, then the track is rejected. (b) If dE/dx is available and RICH is not, then we insist that proton candidates have $PID_{dE}\equiv\sigma_{K}^2-\sigma_{proton}^2 <0$. (c) If RICH information is available and dE/dx is not available, then we require that $PID_{RICH}\equiv -\log({\cal L}_{K})+\log({\cal L}_{proton})<-4$. (d) If both dE/dx and RICH information are available, we require that $(PID_{dE}+PID_{RICH}) <-4$. $\Lambda$ candidates are formed from a pair of oppositely charged tracks, one of which is consistent with a proton or anti-proton hypothesis with looser criteria than those stated above, and which are constrained to come from a single vertex. We also require that the invariant mass be within 5 times the width of the $\Lambda$ mass peak, which has an r.m.s. width of 1.4 MeV. \subsection{Efficiency Determinations} To derive event selection efficiencies we simulated hadronic events using the JETSET 7.3 $q \overline{q}$ event generator \cite{Jetset}, then processed them through the full GEANT 3.21-based \cite{cleog} CLEO-III detector simulation. For five-flavor hadronic and $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ events in the $\Lambda_b$ scan region, we generated Monte Carlo samples using the same generator with the properties described in section~\ref{sec:two}. The efficiencies obtained from these simulations are presented in Table~\ref{tab:eff}, where we list both the hadronic event selection efficiency and the efficiency for detecting a hadronic event with an anti-proton. The efficiencies in the second column include the branching ratios of the various processes into anti-protons. We take ${\cal{B}}(\Lambda_b^0\to \overline{p} X)=0.50$. The row for $b\overline{b}$ includes only $B$ meson production with additional pions allowed. As one would expect, the efficiencies for $b\overline{b}$ and $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ are very similar. The slightly lower efficiency for $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ arises from the higher average jettiness of $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ events.
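The combined RICH and dE/dx selection (a)--(d) is easily expressed as a single decision function. A minimal sketch (the function and argument names are ours):
\begin{verbatim}
def is_antiproton(pid_de=None, pid_rich=None):
    """pid_de   = sigma_K**2 - sigma_proton**2 (None if dE/dx unavailable)
       pid_rich = -log(L_K) + log(L_proton)    (None if RICH unavailable)"""
    if pid_de is None and pid_rich is None:
        return False                    # (a) no information: reject track
    if pid_rich is None:
        return pid_de < 0               # (b) dE/dx only
    if pid_de is None:
        return pid_rich < -4            # (c) RICH only
    return (pid_de + pid_rich) < -4     # (d) both available
\end{verbatim}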
\begin{table*}[htb] \begin{center} \caption{\label{tab:eff} Selection efficiencies for hadronic events and those with anti-protons.} \vspace{0.2cm} \begin{tabular}{lcc} \hline\hline Data samples & Selection efficiency for & Selection efficiency for \\ & hadronic events (\%) & hadronic events with an $\overline{p}$ (\%) \\ \hline Below-$\Upsilon(4S)$ continuum & 25.5 $\pm$ 0.2 $\pm$ 0.8& 2.1 $\pm$ 0.1 $\pm$ 0.1\\ $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ & 85.5 $\pm$ 0.9 $\pm$ 2.6& 26.8 $\pm$ 0.1 $\pm$ 5.4\\ 4 flavor ($udsc$) continuum & 21.9 $\pm$ 0.4 $\pm$ 0.7& 1.8 $\pm$ 0.2 $\pm$ 0.1\\ at $E_{beam} \sim m(\Lambda_b)$ & & \\ $b \overline{b}$ & 89.9 $\pm$ 1.2 $\pm$ 2.7& 4.0 $\pm$ 0.2 $\pm$ 0.3\\ 5 flavor ($udscb$) continuum & 28.1 $\pm$ 2.5 $\pm$ 0.8& 2.0 $\pm$ 0.3 $\pm$ 0.2\\ $\tau \overline{\tau}$ & 0.024 $\pm$ 0.005 $\pm$ 0.001& $ < 10^{-5}$ \\ \hline\hline \end{tabular} \end{center} \end{table*} The errors listed in Table~\ref{tab:eff} are statistical and systematic, respectively. The systematic error for the hadronic event selection requirement is estimated from the variation in the number of hadronic events (corrected by efficiency and background) when changing the selection requirements. The systematic error for the proton identification has been evaluated from proton efficiency measurements using reconstructed $\Lambda$ events from data and then comparing with the equivalent MC estimation. Our simulations also give us the selection efficiency for detecting an event containing either a $\Lambda$ or an $\overline{\Lambda}$ from $\Lambda_b\overline{\Lambda}_b$ decay of $16.6\pm 0.1_{-0.0}^{+1.0}$\%, including ${\cal{B}}(\Lambda\to p\pi^-)$. Note that the PDG world average for ${\cal{B}}(\Lambda_c \rightarrow p~ {\rm anything}$) is ($50 \pm 16$)\%. Similarly, ${\cal{B}}(\Lambda_c \rightarrow \Lambda~ {\rm anything}$) is ($35 \pm 11 $)\%~\cite{PDG}. The errors on these rates will be included separately as systematic effects. \subsection{Systematic Errors} The systematic errors in determining $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production are given in Table~\ref{tab:sys}. The largest error is due to the unknown branching fraction $\mathcal{B}(\Lambda_c \to p X)$, to which we assign a 32\% error. We also include errors on the hadron selection efficiency and the background in the hadronic event sample, evaluated by varying our selection criteria as well as taking into account the variation with $E_{CM}$; the anti-proton identification efficiency, evaluated by examining a larger sample of $\Lambda\to p\pi^-$ data; and the luminosity measurement uncertainty, estimated as $1\%$~\cite{lumi_error}. The total systematic error found by adding these elements in quadrature is 2.7\%, 32\% and 31\% on the determination of $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production using $b\overline{b}$, anti-protons and $\Lambda$'s, respectively.
\begin{table}[htb] \begin{center} \caption{\label{tab:sys} List of systematic errors in determining $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production.} \vspace{0.2cm} \begin{tabular}{lc} \hline\hline Source & Error (\%) \\ \hline Hadron efficiency & $\pm$3 \\ $\Lambda_b^0\to\Lambda_c^+ X$ branching ratio & $\pm$4\\ Proton identification efficiency & $\pm$4 \\ $\Lambda_c^+\to p X$ branching fraction & $\pm$32 \\ $\Lambda_c^+\to\Lambda X$ branching fraction & $\pm$31 \\ Total background of hadronic events & $\pm$2 \\ Luminosity & $\pm1$ \\ \hline\hline \end{tabular} \end{center} \end{table} \section{The Estimated $b \overline{b}$ Cross Section} The hadronic cross section is generally expressed in terms of its ratio $R$ to the point cross section $e^+e^- \to \mu^+ \mu^-$. To search for resonant or non-resonant production of $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ in $e^+e^-$ collisions we measure the $b \overline{b}$ cross section over the energy range of the scan. Theoretically, $R_{b \overline b}$ can be expressed as follows: \begin{equation} R_{b \overline b}= R_{b \overline b}^0\left[1+\alpha_s/ \pi+C_2(\alpha_s/ \pi)^2+C_3(\alpha_s/ \pi)^3 \right], \label{eq:Rbb} \end{equation} where $R_{b \overline b}^0=N_cq_b^2$. $N_c$ is the number of quark colors, $q_b$ is the $b$ quark charge and $\alpha_s$ is the strong coupling constant. The constants are $C_2=1.409$ and $C_3=-11.767$~\cite{chet}. In our energy regime, we expect a value for $R_{b \overline b}$ of 0.35. To find the $b \overline{b}$ cross section we subtract the $R_2$ four-flavor continuum data distribution from the higher energy data, correct for the efficiency of the $R_2$ cut and the hadronic selection criteria and divide by the relevant luminosity. We use a value of the cross-section for $e^+e^-\to\mu^+\mu^-$ equal to 86.8 nb/$s$, where $s$ is the square of the center-of-mass energy in units of GeV$^2$. However, we do not make a precise measurement of the $b\overline{b}~$ cross section due to uncertainties in the correct scaling factors of two-photon events and initial state radiation contributions in different energy regions. Here we wish to measure any possible enhancement above the $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ threshold. Our results are presented in Fig.~\ref{bbXsec}(a). \begin{figure}[hbt] \epsfig{figure=fit_Rbb_line.eps,height=3in} \epsfig{figure=fit_Rbb_at_2M1.eps,height=3in} \caption{\label{bbXsec} The estimated $b \overline b$ cross section in units of R. The error bars on the data points represent both the statistical and the systematic errors summed in quadrature. (a) The solid line shows a fit to a horizontal line. (b) The solid line shows a fit to Equation~\ref{eq:bes}. The fits are described in the text.} \end{figure} \section{Upper Limits on $\Lambda_b$ Production} In this energy regime we expect that the R value will be constant in the absence of any resonant or threshold increase due to $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production. There are no statistically significant excesses above a constant value of R, suggesting no resonant production of $b\overline{b}$ types of events. There is an important caveat concerning the limit using the $b\overline{b}$ cross-section. It may very well be that opening up the $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ channel comes at the expense of a lower rate in other channels, so that the total $b\overline{b}$ rate remains constant. Should this occur, our limit in this ($b\overline{b}~$) case would be meaningless.
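For orientation, the expected value of 0.35 follows from evaluating Eq.~(\ref{eq:Rbb}) numerically; a sketch, with $\alpha_s \approx 0.2$ assumed as a representative value near our energies:
\begin{verbatim}
import math

def r_bb(alpha_s, n_c=3, q_b=-1.0/3.0, c2=1.409, c3=-11.767):
    x = alpha_s / math.pi
    return n_c * q_b**2 * (1.0 + x + c2 * x**2 + c3 * x**3)

print(r_bb(0.2))   # ~ 0.355, close to the expected value of 0.35
\end{verbatim}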
In fact, a fit of the $b\overline{b}~$ data to a flat line yields a $\chi^2$ of 14.2 for 29 degrees of freedom. This fit is shown in Fig.~\ref{bbXsec}(a). We can also look for an increase in $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production that mimics the threshold turn-on as a function of center-of-mass energy observed in $e^+e^-\to \tau^+\tau^-$. The line in Fig.~\ref{bbXsec}(b) represents a two-component fit. The first component is a straight line with no slope, extending up to an $E_{CM}$ of 11.24 GeV, twice the $\Lambda_b$ mass. The second component uses a shape similar to one proposed by the BES collaboration~\cite{bes_paper}, but simplified by explicitly calculating the Coulomb interaction and final state radiation; the final form of this function is: \begin{equation} \sigma(s)= A \times \theta(\sqrt{s}-2m(\Lambda_b^0))(\sqrt{s}-2m(\Lambda_b^0))^{0.62}+R_0~~, \label{eq:bes} \end{equation} where $A$ is a fit parameter, $\theta(y)$ is a step function, 0 for $y<0$ and 1 for $y>0$, $m(\Lambda_b^0)$ is the mass and $R_0$ is the observed cross section below threshold. (We are assuming this form applies only near threshold.) \begin{figure}[hbt] {\epsfig{figure=fit_Aproton_at_2M1.eps,height=3in}} \caption{\label{VisXsecp} The cross section for events with at least one anti-proton normalized by $\sigma(e^+e^-\to \mu^+\mu^-)$. (The data have not been corrected for hadronic event efficiencies.) The solid lines show fits to Equation~\ref{eq:bes}. The errors are statistical only.} \end{figure} The cross sections for events with anti-protons are shown in Fig.~\ref{VisXsecp}. The data have been corrected for the momentum dependent efficiency of identifying anti-protons, but not for hadronic event selection. The data are fit to the BES function given in Eq.~(\ref{eq:bes}). The fitted parameters used to set upper limits are listed in Table~\ref{tab:param}. \begin{figure}[hbt] {\epsfig{figure=fit_Lambda_at_2M1.eps,height=3in}} \caption{\label{VisXsecL} The cross section for events with at least one $\overline{\Lambda}$ normalized by $\sigma(e^+e^-\to \mu^+\mu^-)$. (The data have not been corrected for hadronic event efficiencies.) The solid lines show fits to Equation~\ref{eq:bes}. The errors are statistical only.} \end{figure} The cross sections for events with $\Lambda$'s are shown in Fig.~\ref{VisXsecL}. The data have been corrected for both the $\Lambda$ reconstruction efficiency and the branching ratio for $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ into $\Lambda$ plus $\overline{\Lambda}$. The fit to the data uses the BES function given in Eq.~(\ref{eq:bes}). The fitted parameters used to set upper limits are listed in Table~\ref{tab:param}. \begin{table*}[htb] \begin{center} \caption{\label{tab:param} Numerical values of parameters found by fitting Eq.~\ref{eq:bes} to our data. Twice the $\Lambda_b$ mass is fixed to 11.24 GeV.} \vspace{0.2cm} \begin{tabular}{lcc} \hline\hline Selection criteria & $A_i$ & ${R_0}_i$ \\ \hline $b\overline{b}~$ & $(0.21 \pm 3.82)\times 10^{-2}$ & 0.322 $\pm$ 0.007 \\ Anti-proton & $(0.84 \pm 1.20)\times 10^{-2}$ & 0.333 $\pm$ 0.002 \\ $\Lambda$ & $(0.15 \pm 5.49)\times 10^{-2}$ & 0.201 $\pm$ 0.010\\ \hline\hline \end{tabular} \end{center} \end{table*} There is no significant resonance peak in the scan range, nor any evidence for a growth above threshold. Using these fits we calculate 95\% confidence level upper limits for $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production above threshold, as shown in Fig.~\ref{upper_limit}.
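For reference, the threshold shape of Eq.~(\ref{eq:bes}) used in these fits is straightforward to implement; a minimal sketch (with $2m(\Lambda_b^0)$ fixed to 11.24 GeV as in the fits, and $A$, $R_0$ the fit parameters of Table~\ref{tab:param}):
\begin{verbatim}
def bes_shape(sqrt_s, a, r0, two_m_lb=11.24):
    """sigma(s) = A * theta(sqrt(s) - 2m) * (sqrt(s) - 2m)**0.62 + R0"""
    x = sqrt_s - two_m_lb
    return r0 + (a * x**0.62 if x > 0.0 else 0.0)

# e.g. the anti-proton fit parameters evaluated at the highest scan point:
print(bes_shape(11.383, 0.84e-2, 0.333))
\end{verbatim}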
For each search method $i$ we take the upper limit as \begin{equation} \sigma(s)^{upper}_i = \left(A_i+1.64\times\delta A_i\right)\left(\sqrt{s}-2m(\Lambda_b^0)\right)^{0.62}/ \epsilon_i~~, \end{equation} where $A_i$ is the fit value from Table~\ref{tab:param}, $\delta A_i$ is its error, and $\epsilon_i$ is the relative $\Lambda_b$ efficiency for each of the three different methods: 0.95, 0.29, and 0.86 for the $b\overline{b}$, $\overline{p}$ and $\Lambda$ searches, respectively. The 0.95 results from the relative efficiency of continuum $b\overline{b}$ production to $\Lambda_b\overline{\Lambda}_b$, the 0.29 is the product of the $\Lambda_b\overline{\Lambda}_b$ decay rate into anti-protons and the efficiency of the hadronic event selection, and the 0.86 is the hadronic event selection efficiency for $\Lambda_b\overline{\Lambda}_b$. The systematic errors are included only in the limits using $b\overline{b}$ production. In the other two cases the systematic errors on the inclusive $\overline{p}$ and $\Lambda$ branching ratios worsen the upper limits by 32\% and 31\%, respectively. We also determine upper limits for production of a resonance that would decay into $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$, similar in spirit to $\Upsilon(4S) \to B\overline{B}$. Here we take two possible intervals, for either a narrow 6 MeV wide resonance or a wider, arbitrarily chosen, 18 MeV resonance. For the first case we fit a horizontal line to our data up to the $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ threshold of 11.24 GeV and then estimate the upper limit for a cross-section excess in each 6 MeV interval of center-of-mass energy. These 95\% confidence level upper limits are shown in Fig.~\ref{res_upper_limit}(a). For the second case, we fit all our data to a horizontal straight line while excluding an 18 MeV wide interval of center-of-mass energy. We then calculate the 95\% confidence level upper limit from the difference of the data relative to the fit line. These limits are shown in Fig.~\ref{res_upper_limit}(b). \begin{figure}[hbt] \begin{center} \epsfig{figure=upper_enh_ratio_11.eps,height=2.6in}\hspace{0.2in} \caption{\label{upper_limit} The fractional upper limits at 95\% c.l. for $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production obtained using the $\Lambda$ (solid line), anti-proton (dashed line) and $b\overline{b}~$ (dotted line) yields, set by using the BES function. For the $b\overline{b}~$ case only, systematic errors have been included.} \end{center} \end{figure} \begin{figure}[hbt] \begin{center} \epsfig{figure=upper_binned_a.eps,height=3in} \epsfig{figure=upper_lineM.eps,height=3in} \caption{\label{res_upper_limit} Upper limits at 95\% c.l. for $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production obtained using the $\Lambda$ (solid line), anti-proton (dashed line) and $b\overline{b}~$ (dotted line) yields. (a) The upper limits have been set in 6 MeV center-of-mass energy intervals in the scan region. (b) Upper limits in 18 MeV wide intervals. For the $b\overline{b}~$ case only, systematic errors have been included.} \end{center} \end{figure} No resonant enhancement reminiscent of the $\Upsilon (4S)$ resonance is observed. Using the threshold function we can set an upper limit at our highest energy point of 11.383 GeV on the ratio of $\Lambda_b^0$ to $b\overline{b}$ production. These limits are given in Table~\ref{tab:ul}.
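The limit evaluation itself is a one-line application of the formula above. A minimal sketch, using the anti-proton fit of Table~\ref{tab:param} and the efficiencies quoted in the text:
\begin{verbatim}
def upper_limit(sqrt_s, a, da, eff, two_m_lb=11.24):
    """sigma_upper = (A + 1.64*dA) * (sqrt(s) - 2m)**0.62 / eff"""
    x = sqrt_s - two_m_lb
    return (a + 1.64 * da) * x**0.62 / eff if x > 0.0 else 0.0

# anti-proton method at the highest energy point, 11.383 GeV:
print(upper_limit(11.383, 0.84e-2, 1.20e-2, 0.29))   # ~ 0.03 units of R
\end{verbatim}
Dividing the result by $R_{b\overline b}^0=1/3$ roughly reproduces the statistical-only anti-proton entry of Table~\ref{tab:ul} at the $\sim$9\% level.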
For the $b\overline{b}~$ normalization we use two values: the first is $R_{b \overline b}^0$ as defined in Eq.~\ref{eq:Rbb}; the second is determined by fitting the $R_{b \overline b}$ values assuming no enhancement along the scan range. These values are $R_{b \overline b}^0 = 1/3$ and $R_{b \overline b}= 0.322 \pm 0.004$. The limits based on the threshold function become lower toward lower energy as we approach the production threshold. The anti-proton and $\Lambda$ samples are somewhat correlated, in that anti-protons from $\overline{\Lambda}$ decay are often included in both samples, so we choose not to combine these limits. \begin{table*}[htb] \begin{center} \caption{\label{tab:ul} Upper limits at 95\% c.l. on the ratio of $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ to $b\overline{b}~$ production at 11.383 GeV.} \vspace{0.2cm} \begin{tabular}{lcccc} \hline\hline Method & \multicolumn{2}{c}{95\% c.l. (statistical only)} & \multicolumn{2}{c}{95\% c.l. (statistical \& systematic)}\\ & $R_{\Lambda_b^0\overline{\Lambda}_b^0}$/$R_{b\overline{b}}^0$ & $R_{\Lambda_b^0\overline{\Lambda}_b^0}$/$R_{b\overline{b}}$ & $R_{\Lambda_b^0\overline{\Lambda}_b^0}$/$R_{b\overline{b}}^0$ & $R_{\Lambda_b^0\overline{\Lambda}_b^0}$/$R_{b\overline{b}}$\\ \cline{1-5} $b\overline{b}~$ & - & - & 6.0\% & 6.2\%\\ Anti-proton & 9.2\% & 9.5\% & 12.2\% & 12.5\%\\ $\Lambda$ & 9.9\% & 10.2\% & 12.9\% & 13.3\%\\ \hline\hline \end{tabular} \end{center} \end{table*} \section{Conclusions} We do not observe any resonant or threshold enhancement of $\Lambda_b^0\overline{\Lambda}\,\!_b^0~$ production in the center-of-mass energy region just above threshold, resulting in 95\% confidence level upper limits on the order of 0.05-0.10 units of R. The 95\% confidence level upper limits from anti-proton and $\Lambda$ production are 12.8\% and 12.9\% of $R^0_{b\overline{b}}$, respectively, at our highest energy point if they are modelled as a growth above threshold. In order to effectively study $\Lambda_b$ decays at $e^+e^-$ machines, it may be necessary to go to higher center-of-mass energies.
\rightline{UTTG-13-94} \rightline{\today} \vspace{24pt} \begin{center} \large{\bf Non-Perturbative Decoupling of Heavy Fermions} \normalsize \vspace{36pt} Moshe Rozali\fnote{\dag}{\sppt} \vspace{8pt} \utgp \end{center} \abstract {We show that heavy fermions decouple from the low energy physics also in non-perturbative instanton effects. Provided the heavy fermions are lighter than the symmetry breaking scale, all the instanton effects should be expressed as local operators in the effective Lagrangian. The effective theory itself does not admit instantons. We present the mechanism which suppresses instantons in the effective theory.} \vfill \pagebreak \baselineskip=21pt \section{Introduction} \indent\indent In a recent paper \cite{BD} non-perturbative instanton effects were claimed to violate decoupling. Several solutions to the problem were suggested \cite{G,Hsu}. In this paper we reformulate the problem in the context of the standard model and clarify the process of integrating out heavy fermions. In what follows we consider the standard model with three families and inter-family mixing. The color and hypercharge indices are suppressed, and the lepton doublets are left undisplayed, since they are irrelevant for the discussion. The Lagrangian of the model is (see the appendix for notations): \begin{equation} {\cal L}=-\frac{1}{2}{\rm Tr}(W_{\mu\nu}W^{\mu\nu})+\frac{1}{2}(\partial_\mu\tilde\sigma)^2-\frac{\lambda}{4} \,(\tilde\sigma^2-v^2)^2+\frac{\tilde\sigma}{4}\,{\rm Tr}(D_\mu U^{\dagger} D^\mu U) +L_{\rm fermionic}\,. \end{equation} For one quark doublet the fermion action is: \begin{equation} {\cal L}_\psi=i\psi^{\dagger} \sigma^\mu D_\mu\psi+i\bar\psi^{\dagger}\sigma^\mu\partial_\mu\bar\psi-\left(\bar\psi^T \frac{{\cal M}_{q}}{v}\,\tilde\sigma U\psi+{\rm c.c.}\right) \end{equation} We assume equality of the top and bottom mass, ${\cal M}_q=m_t\fo$, unless otherwise indicated. $\langle\tilde\sigma\rangle=v$ breaks the symmetry, and $\sigma\equiv\tilde\sigma-v$ is the physical Higgs. $U$ is the angular Higgs (the would-be Goldstone bosons). In theories with Higgs fields no exact instanton solutions exist. However, there exist approximate solutions, known as ``constrained instantons'' \cite{AF}. In the next section we review the constrained instanton construction and the resulting 't Hooft operator \cite{'tH}. This operator violates baryon number by 3 units. We integrate out one heavy quark doublet, assuming $m_\sigma$ and $m_W$ are lighter than $m_t$, so the effective theory contains the same bosonic fields.
After integrating out the fermions the theory's predictions for low energy processes should be obtainable from a local effective Lagrangian --- this is what is meant by ``decoupling'' in this context. First, the effective Lagrangian should reproduce the original 't Hooft operator. This matching is shown in Section 3. Secondly, it must predict vanishing amplitudes for $\Delta B=2$ processes, which would naively appear as 't Hooft operators in a two family model. This is shown in detail in Section 4. We present our conclusions in Section 5. \section{The Constrained Instanton:} \indent\indent In this section we review instanton effects in the standard model. We follow some of the notation of \cite{ES}, and the reader is referred to the latter for more details. Fermion number violating amplitudes arise in a semi-classical expansion around a Euclidean configuration with a non-vanishing winding number. Schematically one has the following Euclidean Lagrangian: \begin{equation} {\cal L}_E=\psi_A\,M(A)\psi_B + B(A)\,, \end{equation} where $\psi_A,\psi_B$ are Euclidean fermions \cite{ES}, and $A$ denotes some bosonic fields. Expanding to leading $\hbar$ order around some configuration $A_0$: \begin{equation} {\cal L}_E=\psi_A\,M(A_0)\psi_B+ B(A_0) + {\rm corrections}\;. \end{equation} The minimal non-vanishing amplitude is then: \begin{equation} \langle\psi_1(x_1)\ldots\psi_r(x_r)\rangle =C e^{-S_0}\psi_0^1(x_1)\ldots\psi_0^r(x_r)\,. \end{equation} $C$ is a constant calculated from the bosonic fluctuations around $A_0$; $S_0=\int d^4 x\, B(A_0)$ has to be finite to contribute to the path integral. $r$ is the number of zero modes of the operator $M(A_0)$, related to chiral anomalies by index theorems. In the case considered here, the relevant configuration is the constrained instanton \cite{AF}, which is an approximate saddle point with a finite action. After a Euclidean rotation and rescaling, the bosonic action is: \begin{equation} {\cal L}_B=\frac{1}{g^2}\,\left[\frac{1}{2}\,{\rm Tr}(W_{\mu\nu}W_{\mu\nu})+\kappa^2{\rm Tr}(D_\nu M^{\dagger} D_\nu M)+ \kappa^2 (\tilde\sigma^2-m_\sigma^2)^2\right]\,, \end{equation} where $M=\tilde\sigma\,U$ is the Higgs field and $\sigma=\tilde\sigma-m_\sigma$ is the physical Higgs. The semi-classical limit is $g^2$ small, with $m_\sigma^2$ and $\kappa=\frac{m_\omega}{m_\sigma}$ fixed. The Euclidean equations of motion are: \begin{eqnarray} && D_\mu W_{\mu\nu}+ \frac{i\kappa^2}{2}\,(D_\nu M)^{\dagger}M- \frac{i\kappa^2}{2}\,M^{\dagger} D_\nu M=0 \nonumber \\ && D_\mu^{\dagger}D_\mu M+\left(\frac{1}{2}\,{\rm Tr}(M^{\dagger}M)-m_\sigma^2\right)M=0\,. \end{eqnarray} With no source terms ($\lambda=0$ in (1.1)) there is a finite action solution: \begin{eqnarray} W_\mu^0&=& \frac{2\rho^2}{x^2(x^2+\rho^2)}\,x_\nu\bar\tau_{\mu\nu} \nonumber \\ M^0 &=& \left(\frac{x^2}{x^2+\rho^2}\right)^{1/2}\;m_\sigma\fo\,. \end{eqnarray} However, the full action (2.4) is infinite for this configuration. Therefore we try to deform (2.6) to get another finite action solution. Try: \begin{eqnarray} M &=& M^0+\delta M \nonumber \\ W_\mu &=& W_\mu^0+\delta\,W_\mu\,. \end{eqnarray} We get equations of the form \begin{equation} {\cal O}\delta\phi=J\,, \end{equation} where $\phi$ is either $W_\mu$ or $M$, and ${\cal O}$ is some differential operator. $\delta\phi$ is also subject to boundary conditions ensuring a finite action (namely, $\delta\phi\rightarrow 0$ fast enough at infinity).
The two above conditions on $\delta\phi$ are inconsistent. The operator ${\cal O}$ has zero modes, so the source $J$ propagates to infinity and determines the boundary values of $\delta\phi$. These boundary values correspond to the required ones only if $J$ is perpendicular to the zero modes of ${\cal O}$. A resolution of this problem was given in \cite{AF}. Constraining the path integral in a particular way has the effect of adding operators to the Lagrangian. The modified Lagrangian is called the constrained Lagrangian. The new operators are required to vanish fast enough at infinity (faster than $\frac{1}{|x|^4}$ for the configuration (2.6)) so they do not change the finite action boundary conditions. However, they do change the source $J$ in (2.8), and can be tuned to make it perpendicular to the zero modes. Therefore, one can get instanton solutions to the constrained Lagrangian, which are approximate solutions to the original Lagrangian. Asymptotic expressions for the resulting configuration can be obtained in two different regions \cite{AF,ES}. First, in the unbroken phase ($|x|< v^{-1}$) we expect to get approximately a 't Hooft instanton of size $\rho$, which solves the equations: \begin{eqnarray} D_\mu W_{\mu\nu} = 0 \nonumber \\ D^2 M=0\,. \end{eqnarray} These equations cease to be a good approximation to the full equations of motion where the neglected terms start to dominate. This determines the range of validity of (2.9). One gets: \begin{eqnarray} W_\mu &=& \frac{2\rho^2}{x^2(x^2+\rho^2)}\;x_\nu\bar\tau_{\mu\nu}+\cdots \qquad\qquad ~~|x|\ll m_\omega^{-1} \nonumber \\ M &=& \left(\frac{x^2}{x^2+\rho^2}\right)^{1/2}\, m_\sigma\,\fo+\cdots \qquad\qquad ~~|x|\ll m_\sigma^{-1}\,. \end{eqnarray} To get an asymptotic expression for $|x|\gg\rho$ one keeps in the equations of motion just the leading terms at infinity (for the configuration (2.10)) \cite{AF}. All other terms in (2.5) are replaced by a delta function source. The resulting equations are: \begin{eqnarray} && \partial^2M - m_\sigma^2 \,M\,\propto\,\delta^{(4)}(x) \nonumber \\ && \partial_\mu W_{\mu\nu} + i\kappa^2 m_\sigma^2\,W_\nu \,\propto\,\delta^{(4)} (x)\;. \end{eqnarray} The terms in (2.11) vanish as $\frac{1}{|x|^4}$ as $|x|\rightarrow\infty$, whereas all the other operators are suppressed by powers of $\frac{\rho^2}{|x|^2}$, including the terms in the constrained Lagrangian added to impose the constraints. This gives the asymptotic expansion: \begin{eqnarray} W_\mu &=& -\rho^2\bar\tau_{\mu\nu}\partial_\nu\, G(m_\omega,|x|)+\cdots \nonumber \\ M &=& m_\sigma\,\fo\,\left(1-\frac{1}{2}\,\rho^2G(m_\sigma,|x|)+\cdots\right)\qquad |x|\gg\rho\,, \end{eqnarray} where \begin{eqnarray} && G(m,|x|) = m^2\;\frac{K_1(m|x|)}{m|x|} \nonumber \\ && (\Box-m^2)\,G\,(m,|x|)=-4\pi^2\,\delta(x)\,. \end{eqnarray} One assumes $\rho v\ll1$; large instantons cannot be treated this way to yield an approximate saddle point. The above expressions are first order in the small parameter $\rho v$. Since $\rho\ll v^{-1}$ the regions of validity of the above expressions overlap, so the configuration is well defined everywhere. The two expressions have to be patched in the intermediate region ($\rho<|x|<v^{-1}$). The fermionic zero modes of this configuration have similar expansions \cite{ES}. It is also assumed that ${\cal M}_q<v$, which is the case considered in this paper.
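Since $K_1(z)\to 1/z$ as $z\to 0$, the profile function $G$ of (2.13) behaves as $1/|x|^2$ at short distance and falls off exponentially for $m|x|\gg 1$, consistent with the patching described above. A quick numerical check (a sketch assuming SciPy is available; not part of the original analysis):
\begin{verbatim}
import numpy as np
from scipy.special import k1

def G(m, x):
    """G(m,|x|) = m^2 K_1(m|x|) / (m|x|), cf. (2.13)."""
    z = m * x
    return m**2 * k1(z) / z

m = 1.0
for x in (1e-3, 1e-2, 1e-1):
    print(x, G(m, x), 1.0 / x**2)   # small m|x|: G -> 1/|x|^2
print(G(m, 10.0))                   # large m|x|: exponential fall-off
\end{verbatim}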
\section{The Low Energy Effective Lagrangian:} \indent\indent Consider the energy range $m_\omega<E<\,m_t<v$. In this range all virtual configurations smaller than $m_t^{-1}$ should be expressed as a series of local operators. This includes perturbative configurations (like heavy quark loops) as well as small instantons. Therefore the 't Hooft operator should be included in the effective Lagrangian, as a part of the matching process. The 't Hooft operator necessarily involves a heavy quark. Therefore it is not part of the physics to be described by the low energy effective theory. However, with family mixing there are low energy baryon number violating operators, resulting from virtual heavy quarks. An example of a dimension 21 operator is given in Figure 1. This operator and similar ones should be a part of the effective Lagrangian. \vskip 15pt \centerline{ \epsfxsize=5.2in \epsfbox{moshfig1.eps}} \vspace{12pt} \centerline{Figure 1} \vspace{12pt} The situation is similar in the case of integrating out just the top (assuming $m_t\gg m_b$). The resulting theory is non-linearly realized \cite{C,FMM}. Some of the fermions transform linearly under SU(2), and the bottom transforms non-linearly, as a part of an incomplete multiplet. To build SU(2) invariants it is convenient to use the composite field $U\left(0\atop b\right)$, which transforms linearly as an SU(2) doublet \cite{F}. Expanding $U=\exp\left(i\frac{\zeta^a\tau^a}{v}\right)$ shows that replacing the complete doublet by $U\left(0\atop b\right)$ in the 't Hooft operator recovers the original operator with $b$ emitted, but also generates a series of operators involving $\zeta^a$ (the longitudinal gauge bosons). These additional operators are suppressed by powers of $v^{-1}$, and represent effects of integrating out the top. \section{Instanton Suppression in the Effective Theory:} \indent\indent An instanton solution to the low energy theory, with corresponding fermionic zero modes for the remaining two quark doublets, would contradict the full theory's predictions and therefore violate decoupling. We show now that the low energy theory does not admit instantons. Integrating out heavy fermions induces a series of higher dimensional terms in the effective Lagrangian; see for example \cite{FD,F}. It was suggested \cite{G} that some of these operators give rise to an infinite action (zero measure) for the original constrained instanton. However, the divergences come from the short distance behavior of these operators, where the effective action cannot be used. It is also conceivable that one can modify the original solution (by changing the constraints, for instance) to another approximate solution which accommodates the higher dimensional terms and has a finite action. This would represent threshold corrections to the instanton coming from integrating out the fermions. In the following we demonstrate that no such solution is possible. We assume, ex absurdo, that the constrained Lagrangian can be adjusted to enable finite (effective) action boundary conditions. We study the resulting asymptotic expression for $|x|\gg \rho$ and conclude that it cannot be patched to any configuration with a non-zero winding number, thus contradicting the assumption. Assume then that we have an instanton configuration (2.10) at short distances. We now try to get an asymptotic expression similar to (2.12).
Most new operators in the effective theory will be replaced by a delta function source in (2.11), but some will also contribute to the left hand side of equation (2.11). Consider for example the operator: \begin{equation} \Delta{\cal L}=\frac{1}{4\pi^2m_\sigma}\,\sigma{\rm Tr}\;(W_{\mu\nu}\;W_{\mu\nu})\,, \end{equation} coming from the diagram shown in Figure 2. \centerline{ \epsfxsize=1.2in \epsfbox{moshfig2.eps}} \centerline{Figure 2} \vspace{12pt} This operator cannot be omitted in the region described by equation (2.11), since it dominates the existing terms for $|x| < \frac{\rho}{\sqrt{\rho v}}$. In particular we get the modified equation: \begin{equation} \kappa^2(\Box-m^2_\sigma)\sigma+\frac{1}{4\pi^2m_{\sigma}}\,{\rm Tr}\,(W_{\mu\nu}W_{\mu\nu}) =2\pi^2\rho^2\,\delta(x)\;. \end{equation} All other higher dimensional operators in the equations of motion are suppressed by powers of $\frac{\rho^2}{|x|^2}$ or $(m_t|x|)^{-1}$. Therefore equation (4.2) holds for $|x|\gg m_t^{-1}$ (not for all $|x|\gg\rho$ as in section 2). Equation (4.2) is an equation of the form (2.8), which in general imposes restrictions (orthogonality conditions) on the source. To see these restrictions multiply (4.2) by $G(m_\sigma,|x|)$ and integrate from $r>m_t^{-1}$ to infinity. The left hand side picks up just boundary terms at $r$. Choosing $r<m_\sigma^{-1}$ and $r<m_\omega^{-1}$, the unbroken solution (2.10) still holds. Therefore: \begin{equation} \frac{1}{v}\, \int_r^\infty \!\!d^4x\, {\rm Tr}(W_{\mu\nu}\,W_{\mu\nu})\, G(m_\sigma,|x|)\sim O\left(v\;\frac{\rho^2}{r^2}\right)\,, \end{equation} using $G(m_\sigma,|x|)\sim \frac{1}{|x|^2}+O(\rho v)$, or: \begin{equation} W_{\mu\nu}\,(|x|=r)\ll O\left( \frac{\rho v}{r^2}\right)\,. \end{equation} This statement is modified by corrections to (2.10), (4.3) etc. However, all these corrections are of order $\rho v$. To first order in $\rho v$, (4.4) implies that $W_{\mu\nu}=0$ for $m_t^{-1}<r< m_\omega^{-1}$. This is inconsistent with the assumption that we have an instanton in the unbroken phase. The corrections mentioned above make $W_{\mu\nu}\sim O(\rho v)$. This would give a winding number of order $\rho v$. However, since $\rho v\ll 1$, this is still inconsistent with having an instanton configuration; small corrections cannot generate a winding number. Therefore we conclude that there are no possible approximate instanton solutions in the effective theory. \section{Conclusions} In this paper we have considered small instantons coupled to fermions lighter than the symmetry breaking scale. We have integrated out one heavy quark doublet. In this limit we have shown that there are no instantons in the low energy theory. The result came from considering the region where an instanton is patched to an asymptotic expression. Integrating out the fermions changes the asymptotic expression, making it impossible to patch it to an instanton. Even though the core of the instanton is not describable by the effective theory, our considerations relied only on regions where the effective theory is valid. This was possible due to the existence of an operator in the effective theory that is not suppressed by powers of the heavy fermion mass, and therefore can affect the region describable by the effective theory. \section*{Acknowledgements} The author wishes to thank W. Fischler for suggesting the problem, for his invaluable advice and criticism and for his comments on the manuscript. Useful conversations with J.
Distler, J. Feinberg, S. Paban and J.M. Rodriguez are gratefully acknowledged. The author also thanks T. Gould for a useful correspondence. \section*{Appendix} \def\theequation{A.\arabic{equation}} \setcounter{equation}{0} The standard model action can be obtained from the general theory of non-linear realizations \cite{C}; see for example \cite{F}. The bosonic Lagrangian is \begin{equation} {\cal L}_B=-\frac{1}{2} \,{\rm Tr}(\hat W_{\mu\nu}\,\hat W^{\mu\nu}+\hat B_{\mu\nu}\hat B^{\mu\nu})+\frac{1}{2}\;(D_\mu M^{\dagger}\,D^\mu\, M)+ V\,(M^{\dagger}M)\,, \end{equation} where $\hat W_\mu$ and $\hat B_\mu$ are SU(2) matrices \begin{eqnarray} \hat W_\mu &=& W_\mu^at^a \nonumber \\ \hat B_\mu &=& B_\mu t^3 \end{eqnarray} and $M=\tilde\sigma U$. The covariant derivatives are: \begin{eqnarray} D_\mu\tilde\sigma &=& \partial_\mu\tilde\sigma \nonumber \\ D_\mu\hat U& = & \partial_\mu\hat U+ig\hat W_\mu\hat U-ig'\hat U\hat B_\mu\;. \end{eqnarray} The top-bottom Lagrangian is: \begin{equation} {\cal L}_\psi=i\psi^{\dagger}\sigma^\mu D_\mu\psi + i\bar\psi^{\dagger}\sigma^\mu D_\mu\bar\psi-\left[\bar\psi^T\;\frac{{\cal M}_q}{v}\,M\,\psi+\mbox{c.c.}\right]\,. \end{equation} All the fermions used are left-handed Weyl fermions. The covariant derivatives are: \begin{eqnarray} D_\mu\psi &=&\left(\partial_\mu+ig W_\mu+\frac{2i}{3}\,g't_3\hat B_\mu\right)\psi\nonumber \\[6pt] D_\mu\bar\psi &=&\left(\partial_\mu- ig'\left(1+\frac{2t_3}{3}\right)B_\mu\right) \bar\psi \end{eqnarray} \clearpage
\section{Introduction} In general relativity, or more general gravitational theories admitting diffeomorphism invariance, the role played by the mass is quite different from that in other branches of physics. The weak equivalence principle makes it impossible to construct a well-defined ``local'' gravitational mass, since it is always possible to set the local gravitational energy to vanish by working in a local Lorentz frame. This conceptual obstacle forced us to focus upon globally defined conserved quantities, e.g., the Arnowitt-Deser-Misner (ADM) mass in an isolated system~\cite{Arnowitt:1959ah}. However, one can circumvent this difficulty when the spacetime admits a spherical symmetry. In this case, the gravitational degrees of freedom are localized because of the absence of gravitational waves, and it turns out to be extremely useful to define a ``quasilocal'' mass referring to compact and orientable surfaces, as first demonstrated by Misner and Sharp~\cite{Misner:1964je}. A number of local geometric properties of spacetime are encoded in the Misner-Sharp mass~\cite{Hayward:1994bu}, including the causal property of central singularities, trapped surfaces and asymptotic charges. It follows that the Misner-Sharp mass provides a fruitful venue for dexterously capturing the dynamical aspects of gravitational collapse. Inspired by recent advances of string theory, many people have tried to extend Einstein's gravity. A covariant gravitational theory constructed by Lovelock~\cite{Lovelock} is a natural extension of general relativity into $D(\ge 5)$ dimensions. The most appealing and characteristic feature of Lovelock gravity inherited from general relativity is that the field equations remain second order, even though they are composed of higher-order polynomials of curvature tensors. This traces back to the topological interpretation of each Lovelock term as a dimensionally continued Euler density, allowing the understanding of Lovelock terms in the context of the BRST cohomology~\cite{Cnockaert:2005jw}. Therefore, there appear no ghost degrees of freedom at the linearized level~\cite{Zwiebach:1985uq,Zumino:1985dp}, and Lovelock gravity is a classically well-posed gravitational theory (see \cite{Izumi:2014loa,Reall:2014sla} for a recent discussion opposing this belief). Apart from this theoretical aesthetic beauty, the quadratic Lovelock term, dubbed the ``Gauss-Bonnet'' term, arises as a low-energy effective action in heterotic string theory~\cite{Gross,Gross2,Metsaev}. This motivates us to explore quantum aspects of higher-curvature terms as well, in light of the AdS/CFT correspondence~\cite{Brigante:2008gz}. Since higher-curvature terms come into play where the gravitational force becomes very strong, black holes are the best test beds in which deviations from general relativity are clearly encoded. The complexity of the Lovelock field equations has restricted the analysis especially to spacetimes with a high degree of symmetry. Among other things, many works have focused upon the spacetime which is the warped product of a two-dimensional Lorentzian spacetime and an $n$-dimensional maximally symmetric space. The simplest solution is the spherically symmetric black hole found by Whitt~\cite{Whitt:1988ax}, as a generalization of the Schwarzschild solution in general relativity.
Thermodynamics~\cite{Myers:1988ze,Cai:2003kt} and gravitational instabilities~\cite{Takahashi:2009dz,Takahashi:2009xh,Takahashi:2010ye,Takahashi:2010gz,Takahashi:2011qda,Takahashi:2012np} of this type of black hole have been intensively studied. This class of metrics also contains the Tolman-Bondi inhomogeneous dust spacetime \cite{maeda2006b,Ohashi:2011zza,Ohashi:2012wfa} and the Vaidya-type radiating solution~\cite{Kobayashi:2005ch,Maeda:2005ci,Nozawa:2005uy,Cai:2008mh}, both of which describe gravitational collapse. The examinations of gravitational collapse have revealed that the global structure turns out to be quite different from that encountered in general relativity, and a peculiar type of massive singularity emerges in every odd dimension. In these analyses, the generalized Misner-Sharp quasilocal mass~\cite{Maeda:2007uu,Maeda:2011ii} plays an essential role, as in general relativity. In this sense, the Misner-Sharp quasilocal mass is more advantageous than the Brown-York quasilocal mass~\cite{by1993} constructed based upon the Hamiltonian formalism. In general relativity, the $n$-dimensional maximally symmetric space can be replaced by arbitrary Einstein spaces, since their Weyl curvature fails to contribute to Einstein's equations.\footnote{The replacement to the Einstein space has a significant impact upon the linear instability of black holes~\cite{Gibbons:2002pq}.} In Lovelock gravity, on the other hand, the Weyl tensor appears explicitly in the field equations and, therefore, a generic Einstein manifold fails to satisfy the vacuum field equations for the warped metric. The condition that this type of metric admit vacuum solutions imposes two conditions upon the Weyl curvature of the Einstein space. When one takes an Einstein space satisfying these conditions, the causal structures of the black hole differ considerably from those with maximally symmetric horizons, as argued in Gauss-Bonnet gravity \cite{Dotti:2005rc,Maeda:2010bu,Pons:2014oya} and in third-order Lovelock gravity \cite{Farhangkhah:2014zka}. This motivates our present attempt to explore the conditions for an Einstein horizon in general Lovelock gravity, by extending the analysis in \cite{Dotti:2005rc,Maeda:2010bu,Pons:2014oya,Farhangkhah:2014zka}\footnote{The solutions to special cases of Lovelock gravity with a more general base space were also studied in \cite{Dadhich:2015nua,Anabalon:2011bw,Dotti:2010bw,Oliva:2012ff}.}. In this paper, we generalize the previous studies \cite{Dotti:2005rc,Maeda:2010bu,Pons:2014oya,Farhangkhah:2014zka} to Lovelock gravity where the spacetime consists of the warped product of a two-dimensional Lorentzian metric and an $n$-dimensional Einstein space. We find that the Weyl tensor of the $n$-dimensional Einstein space must obey two conditions, and a dozen new Einstein spaces turn out to satisfy the Lovelock field equations. We extend the definition of the Misner-Sharp type quasilocal mass to the present context. It turns out that the quasilocal mass displays some desirable physical properties under suitable energy conditions, provided that some conditions on the Weyl curvature and the coupling coefficients of the Lovelock action are satisfied. Using the quasilocal mass, we further explore the properties of trapping horizons and their thermodynamics. The present paper proceeds as follows. In the next section, we give a brief review of Lovelock gravity and derive the field equations under the setup described above.
In Sec.~\ref{sec:QLM}, we define a quasilocal mass as a generalization of the Misner-Sharp mass. We explore a number of properties of dynamical black holes defined by the trapping horizons using the quasilocal mass in Sec.~\ref{sec:trapping}. Final remarks are described in Sec.~\ref{sec:conclusion}. In the Appendix, we give a variety of exact solutions for vacuum and electrovacuum cases. We follow the conventions of Wald's textbook~\cite{Wald} for curvature tensors. \section{Setup} \label{sec:setup} The action of Lovelock gravity in $D$ dimensions is~\cite{Lovelock} \begin{align} \label{action} S=\frac{1}2\int {\rm d} ^D x \sqrt{-g} \left( \sum_{m=1}^k\frac{1}{2^m}\frac{a_m}m \delta ^{\mu_1\mu_2 \cdots \mu_{2m-1}\mu_{2m}} _{\nu_1\nu_2 \cdots \nu_{2m-1}\nu_{2m}} R_{\mu_1\mu_2 }{}^{\nu_1\nu_2} \cdots R_{\mu_{2m-1}\mu_{2m} }{}^{\nu_{2m-1}\nu_{2m}} +2\Lambda \right)+S_{\rm mat}\,, \end{align} where $a_m$ are real constants and we set $a_1=1$ and $8\pi G=1$. $k$ is given by $k\equiv \lfloor(D-1)/2\rfloor$, where the symbol $\lfloor x\rfloor$ denotes the integer part of $x$.\footnote{ Note that some of the literature employs the convention $k=\lfloor D/2\rfloor$, different from ours. Since the $D/2$th term in even dimensions amounts to a topological invariant, it fails to contribute to the field equations. Hence, the two conventions make no physical difference. } $S_{\rm mat}$ is the action for the matter field, $\Lambda $ is a cosmological constant and $\delta$ denotes the totally antisymmetric product of Kronecker deltas normalized by \begin{align} \label{} \delta ^{\mu_1\mu_2 \cdots \mu_{m-1}\mu_{m}} _{\nu_1\nu_2 \cdots \nu_{m-1}\nu_{m}}= \delta ^{\mu_1}_{\nu_1}\delta ^{\mu_2}_{\nu_2} \cdots \delta^{\mu_{m-1}}_{\nu_{m-1}}\delta^{\mu_m}_{\nu_m}+{\rm cyclic} \,. \end{align} The gravitational field equations derived from the action (\ref{action}) read \begin{align} \label{EOM} \ma G_{\mu\nu}= T_{\mu\nu} \,, \end{align} where $T_{\mu\nu}=-2 \delta S_{\rm mat}/\delta g^{\mu\nu}$ describes the stress tensor of the matter fields. The Lovelock tensor $\ma G_{\mu\nu}$ is given by \begin{align} \label{} \ma G^\mu{}_\nu = -\sum_{m=1}^k \frac 1{2^{m+1}}\frac{a_m}{m} \delta ^{\mu \rho_1\rho_2 \cdots \rho_{2m-1}\rho_{2m}}_{ \nu \sigma_1\sigma_2 \cdots \sigma_{2m-1}\sigma_{2m}} R_{\rho_1\rho_2}{}^{\sigma_1 \sigma_2}\cdots R_{\rho_{2m-1}\rho_{2m}}{}^{\sigma_{2m-1} \sigma_{2m}}\,, \end{align} obeying the Bianchi identity $\nabla^\nu \ma G_{\mu\nu}=0$. A notable feature of Lovelock gravity is that the equations of motion involve no more than second derivatives of the metric. In this paper, we consider the $D=(n+2)$-dimensional spacetime $(\mathcal{M}^D,g_{\mu \nu})$ for which the metric takes the warped product form of the two-dimensional orbit spacetime $(M^2,g_{ab})$ and the $n$-dimensional Einstein space $(\mathcal{K}^n,\gamma_{ij})$. Namely, the local metric reads \begin{align} \label{metric} {\rm d} s^2=g_{ab}(y){\rm d} y^a{\rm d} y^b+r^2(y)\gamma_{ij}(x^k){\rm d} x^i{\rm d} x^j \,, \end{align} where $r$ is a scalar on $M^2$ corresponding to the warp factor. Indices $a, b,...$ run over $0,1$ and $i,j,...$ correspond to those of the Einstein space. The Ricci tensor of the Einstein space $(\mathcal{K}^n,\gamma_{ij})$ reads $R_{ij}[\gamma] =(n-1) \kappa \gamma_{ij}$, where $\kappa$ is a constant normalized to $\kappa=\pm 1, 0$. We assume that $(\mathcal{K}^n,\gamma_{ij})$ is compact with area $V_n^\kappa$, and that $(M^2, g_{ab})$ is a time-orientable Lorentzian manifold.
The Riemann tensor of (\ref{metric}) decomposes into \begin{align} \label{} R_{abcd}={}^{(2)}R_{abcd}\,, \qquad R_{aibj}=-r(D_aD_b r) \gamma_{ij} \,, \qquad R_{ijkl} =r^2 [R_{ijkl}[\gamma] -2(Dr)^2\gamma_{i[k}\gamma_{l]j} ] \,, \end{align} where $D_a$ is a covariant derivative with respect to $g_{ab}$ and $(Dr)^2 \equiv g^{ab}(D_a r)(D_br)$. The suffix ``2'' is attached to quantities of $M^2$, which are distinguished from those of $(\ma K^n, \gamma_{ij})$, the latter being represented by $[\gamma]$. One can express the Weyl tensor of the Einstein space as \begin{align} \label{} R_{ij}{}^{kl} [\gamma] =C_{ij}{}^{kl}[\gamma] +\kappa \delta^{kl}_{ij} \,. \end{align} When ($\ma K^n, \gamma_{ij}$) is maximally symmetric, the present setup reduces to the case analyzed in \cite{Maeda:2011ii}. Note that since the Einstein space is necessarily maximally symmetric for $n=3$, the nontriviality arises in $D\ge 6$ dimensions. \subsection{Field equations} In the following calculation, we frequently use the quantities $W(s)^{i}{}_j$ and $W(s)$, which are defined as \begin{equation} W(s)^{i}{}_j\equiv \begin{cases} \delta^i_j & s=0\\ \delta^{i i_1 i_2 \dots i_{2s-1}i_{2s}}_{j{j_1 j_2 \dots j_{2s-1}j_{2s}}}C_{i_1 i_2}{}^{j_1 j_2}[\gamma]\dots C_{i_{2s-1}i_{2s}} {}^{j_{2s-1}j_{2s}}[\gamma] & s\geq 1\,, \end{cases} \end{equation} and \begin{equation} W(s)\equiv \begin{cases} 1 & s=0\\ \delta^{ i_1 i_2 \dots i_{2s-1}i_{2s}}_{{j_1 j_2 \dots j_{2s-1}j_{2s}}}C_{i_1 i_2}{}^{j_1 j_2}[\gamma]\dots C_{i_{2s-1}i_{2s}}{}^{j_{2s-1}j_{2s}}[\gamma] & s\geq 1. \end{cases} \end{equation} $W(s)_{ij}$ are symmetric tensors on ($\ma K^n, \gamma_{ij}$). We also define \begin{align} \left( \begin{matrix} m \\ l\end{matrix}\right) \equiv {}_mC_{l} . \end{align} By a straightforward computation, the Lovelock tensor decomposes into \begin{align} \label{Gab} \ma G^a{}_b=& \sum_{m=1}^k\sum_{l=0}^{m-1} \frac{a_m 2^{l-m+1}}{r^{2m-2}} \Bi{m-1}{l} \left[ \delta^a{}_b \frac{D^2 r}r -\frac{D^a D_b r}r -\delta ^a{}_b (n-2m+1)\frac{\kappa-(Dr)^2}{2(l+1)r^2} \right] \nonumber \\ & \times \left(\prod_{p=0}^{2l}(n-2m+2+p)\right)(\kappa-(Dr)^2) ^l W(m-1-l) \nonumber \\ &-\sum_{m=1}^k \frac{1}{2^{m+1}}\frac{a_m}m \delta^a{}_b \frac{W(m)}{r^{2m}} +\Lambda \delta ^a{}_b \,, \\ \label{Gij} \ma G^i{}_j=& \sum_{m=1}^k \frac{a_m}{2^{m-1}}\frac{D^2 r}{r^{2m-1}}\left[ \sum_{l=0}^{m-1} 2^l \Bi{m-1}{l}\left(\prod_{p=0}^{2l }(n-2m+1+p)\right)(\kappa-(Dr)^2)^l W(m-1-l)^i{}_j \right]\nonumber \\ &-\sum_{m=1}^k \frac{a_m}{2^m}\frac{{}^{(2)} R}{r^{2(m-1)}} \left[ \sum_{l=0}^{m-1}\frac{2^l }{n-(2m-1)} \Bi{m-1}{l}\left(\prod_{p=0}^{2l} (n-(2m-1)+p)\right)(\kappa-(Dr)^2)^lW(m-1-l) ^i{}_j \right] \nonumber \\ &-\sum_{m=1}^k\frac{(m-1)a_m}{2^{m-2}}\frac{\delta^{ab}_{cd}(D_aD^cr)( D_b D^d r)}{r^{2(m-1)}}\left[(n-2m+2)\sum_{l=0}^{m-2}2^l \Bi{m-2}{l} \nonumber \right.\\ & \qquad \times \left. \left(\prod_{p=0}^{2l}(n-(2m-3)+p)\right) (\kappa-(Dr)^2)^lW({m-2-l})^i{}_j\right] \nonumber \\ &-\sum_{m=1}^k \frac{a_m}{2^{m+1}m r^{2m}} \left[\sum_{l=0}^m \frac{2^l }{n-2m} \Bi{m}{l} \left(\prod_{p=0}^{2l}(n-2m+p)\right)(\kappa-(Dr)^2)^l W(m-l)^i{}_j\right] +\Lambda \delta^i{}_j \,. \end{align} For a generic Einstein space $(\ma K^n, \gamma_{ij})$, the $W(m)$ are functions depending on the coordinates $x^i$, and the (trace-free part of the) symmetric tensor $W(m)_{ij}$ is nontrivial. In that case, the Lovelock tensor $\ma G^\mu {}_\nu$ involves a convoluted coordinate dependence on $x^i$, as well as the dependence on $y^a$.
In order to avoid these technical difficulties and make the discussion focused, we impose in this paper the following two conditions on $(\ma K^n, \gamma_{ij})$: \begin{subequations} \label{Wcond} \begin{align} W(m)^i{}_j&=\frac{n-2m}n \delta^i{}_j W(m) \,, \label{Wcond1}\\ W(m)&={\rm const. } \label{Wcond2} \end{align} \end{subequations} With these conditions, the $x^i$ dependence of $\ma G^\mu{}_\nu$ drops out except for the contribution stemming from the metric $\gamma_{ij}$. In \cite{Dotti:2005rc} a similar condition was imposed in the $m=2$ case. On account of the dimensionally dependent Lovelock identities~\cite{Lovelock2}, the constraint (\ref{Wcond1}) is automatically satisfied for $m\ge \lfloor (n+1)/2\rfloor=k$. Obviously, the conditions (\ref{Wcond}) restrict the permissible horizon topologies for static black holes. Appendix~\ref{app:ex} illustrates some explicit examples of Einstein spaces satisfying (\ref{Wcond}). The stress tensor compatible with these assumptions, therefore, reads \begin{align} \label{SETensor} T_{\mu\nu} {\rm d} x^\mu {\rm d} x^\nu = (\hat T_{ab}(y) -P(y) g_{ab}) {\rm d} y^a {\rm d} y^b +r^2 (y) p(y) \gamma_{ij} (x){\rm d} x^i {\rm d} x^j \,, \end{align} where \begin{align} \label{PhatT} P\equiv -\frac 12 T^a{}_a \,, \qquad g^{ab} \hat T_{ab}=0 \,. \end{align} The stress-energy tensor obeys the conservation law \begin{align} \label{SEcons} 0=\frac 1{r^n}D_b [r^n (\hat T^{ab}-P g^{ab})] -\frac{n}{r} (D^a r) p \,. \end{align} Under these settings, the Lovelock field equations are given by \begin{align} \label{Loveeq_tf} \hat T_{ab}=&-\sum_{m=1}^k\sum_{l=0}^{m-1} \frac{a_m 2^{l-m+1}}{r^{2m-1}} \Bi{m-1}{l} \left(\prod_{p=0}^{2l}(n-2m+2+p)\right)(\kappa-(Dr)^2) ^l W(m-1-l) \nonumber \\ & \times \left( D_a D_b r-\frac 12 D^2 r g_{ab} \right) \,, \\ \label{Loveeq_tr} P=&- \sum_{m=1}^k\sum_{l=0}^{m-1} \frac{a_m 2^{l-m}}{r^{2m-2}} \Bi{m-1}{l} \left[ \frac{D^2 r}r -(n-2m+1)\frac{\kappa-(Dr)^2}{(l+1)r^2} \right] \nonumber \\ & \times \left(\prod_{p=0}^{2l}(n-2m+2+p)\right)(\kappa-(Dr)^2) ^l W(m-1-l) +\sum_{m=1}^k \frac{1}{2^{m+1}}\frac{a_mW(m)}{m r^{2m}} -\Lambda \,, \\ \label{Loveeq_ij} p=& \sum_{m=1}^k \frac{a_m}{2^mnr^{2(m-1)}}\left[{2}\frac{D^2 r}{r}-\frac{{}^{(2)}R}{n-(2m-1)}\right] \sum_{l=0}^{m-1} 2^l \Bi{m-1}{l}\left(\prod_{p=0}^{2l +1}(n-2m+1+p)\right) (\kappa-(Dr)^2)^l W(m-1-l) \nonumber \\ &-\sum_{m=1}^k\frac{(m-1)a_m}{2^{m-2}n}\frac{\delta^{ab}_{cd}(D_aD^cr)( D_b D^d r)}{r^{2(m-1)}}\sum_{l=0}^{m-2}2^l \Bi{m-2}{l} \left(\prod_{p=0}^{{2l+2}}(n-2m+2+p)\right) (\kappa-(Dr)^2)^lW({m-2-l}) \nonumber \\ &-\sum_{m=1}^k \frac{a_m}{2^{m+1}m r^{2m}} \left[\sum_{l=0}^m \frac{2^l(n-2m+2l) }{n(n-2m)} \Bi{m}{l} \left(\prod_{p=0}^{2l}(n-2m+p)\right)(\kappa-(Dr)^2)^l W(m-l)\right] +\Lambda\,. \end{align} If $r$ is not constant, the angular part of field equation (\ref{Loveeq_ij}) follows from (\ref{Loveeq_tf}), (\ref{Loveeq_tr}) and (\ref{SEcons}). In the above we assumed (\ref{Wcond}) to simplify the system. As far as the vacuum solution is concerned, the condition (\ref{Wcond}) actually follows from the consistency with the field equations, as shown in \cite{Dotti:2005rc} for Gauss-Bonnet gravity. In the following sections, we shall discuss the general properties of the metric under suitable energy conditions, without resorting to the exact solutions. In Appendix \ref{app:matter}, we give some exact solutions of physical interest. 
\section{Quasilocal mass} \label{sec:QLM} In general relativity, a spacetime admitting spherical symmetry allows no freedom of gravitational radiation. This fact enables us to localize the gravitational energy and one can define the quasilocal mass~\cite{Misner:1964je} that plays an important role in the analysis of dynamics~\cite{Hayward:1994bu}. When $(\ma K^n , \gamma_{ij})$ is a maximally symmetric space, analogous quasilocal quantities have been generalized to Gauss-Bonnet~\cite{Maeda:2007uu} and to Lovelock gravities \cite{Maeda:2011ii}. These definitions have been further extended to the case of Einstein spaces in the Gauss-Bonnet gravity \cite{Maeda:2010bu} and in the third-order Lovelock gravity~\cite{Farhangkhah:2014zka}. Here we complete this series of works by studying the nonconstant curvature case in the full Lovelock gravity, which encompasses all the previous studies. Mimicking the quantity in Maeda's paper~\cite{Maeda:2010bu}, we propose the following definition of the quasilocal mass: \begin{align} M (y)\equiv V^\kappa_{n}&\Bigg[ \sum_{m=1}^k\frac{1}{2^{m+1}}\frac{a_m}{m}\frac{r^{n-2m+1}}{(n-2m+1)}W(m)-\frac{r^{n+1}}{(n+1)}\Lambda \notag \\ &+\sum_{m=1}^k \sum_{l =0}^{m-1}a_m2^{l-m}\frac{r^{n-2m+1}}{l +1}\left( \begin{matrix} m-1 \\ l \end{matrix}\right) \left(\prod_{p=0}^{2l }(n-(2m-2)+p)\right)(\kappa -(Dr)^2)^{l +1}W(m-1-l) \Bigg] \,. \label{MSmass} \end{align} Due to the assumption (\ref{Wcond2}), one may view the quasilocal mass as a scalar on ($M^2, g_{ab}$). It is constructed out of the areal radius $r$ and its first derivative, as well as the Weyl tensor of the Einstein space. When the space ($\ma K^n , \gamma_{ij}$) is maximally symmetric, the above definition reduces to the one given in \cite{Maeda:2011ii}. It also recovers the well-defined Misner-Sharp mass~\cite{Misner:1964je} in general relativity ($a_{m\ge 2}=0$), and its generalization in Gauss-Bonnet gravity ($a_{m\ge 3}=0$)~\cite{Maeda:2007uu,Maeda:2010bu}. Using (\ref{Gab}), one can easily verify that $M$ satisfies the variation formula \begin{align} \label{variation} D_aM =V^\kappa_{n}r^{n}( \mathcal{G}_{ab}D^br -\mathcal{G}^b{}_bD_ar )\,. \end{align} This formula takes exactly the same form as those analyzed in previous studies~\cite{Maeda:2007uu,Maeda:2011ii,Maeda:2010bu}. This relation is of crucial importance in the following discussion. \subsection{Locally conserved currents} The physical meaning of $M$ is less clear in the geometric definition (\ref{MSmass}). In this section, we shall demonstrate that $M$ can be rebuilt in terms of a locally conserved energy flux. To proceed, let us first define the Kodama vector~\cite{Kodama:1979vn} \begin{align} \label{Kodama} K^\mu \equiv -\epsilon^{\mu\nu}\nabla_\nu r \, , \end{align} where $\epsilon_{\mu\nu}=\epsilon_{ab}({\rm d} y^a)_\mu ({\rm d} y^b)_\nu $ and $\epsilon_{ab}$ is a volume element of ($M^2, g_{ab}$). The Kodama vector can be viewed as a vector field on $M^2$ since $K^i =0$. It fulfills the following crucial property: \begin{align} \label{Knorm} K^\mu K_\mu =-(\nabla r)^2 \,. \end{align} This means that the Kodama vector is timelike (spacelike) in the untrapped (trapped) region and specifies the preferred time direction. Using the Kodama vector, one can also define a Kodama current, \begin{align} \label{} J^\mu \equiv -\ma G^\mu{}_\nu K^\nu \,, \end{align} which is again a vector field on $M^2$. Using the Lovelock field equation (\ref{EOM}), one obtains $J^\mu =-T^\mu{}_\nu K^\nu$. Hence $J^\mu$ describes an energy flux.
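Before verifying the conservation properties of these currents, it is instructive to spell out the general-relativistic limit quoted above (a one-line check, assuming the normalization in which the $m=1$ term reproduces the Einstein--Hilbert one and setting $\Lambda=0$): keeping only $m=1$ in (\ref{MSmass}) and using $W(0)=1$ and $W(1)=0$, one finds \begin{align} \label{} M=\frac{n\, a_1}{2}V^\kappa_n r^{n-1}\left[\kappa-(Dr)^2\right] \,, \end{align} which is the higher-dimensional Misner-Sharp mass up to the overall normalization carried by $a_1$.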
Because of the properties $K^a D_a r=0$ and $\ma G_{ab}D^a K^b=0$, one sees that these vectors are divergence-free, \begin{align} \label{divKJ} \nabla_\mu K^\mu =0 \,, \qquad \nabla_\mu J^\mu =0 \,. \end{align} Using $D_a r=- \epsilon_{ab}K^b$, one can easily derive the following relations, \begin{align} \label{} K^a =- r^{-n}\epsilon^{ab}D_b (V/V_n^\kappa) \,, \qquad J^a = -r^{-n}\epsilon^{ab}D_b( M/V_n^\kappa) \,, \end{align} where \begin{align} \label{} V\equiv & \frac{V^\kappa_n}{n+1}r^{n+1} \end{align} is a weighted volume of $\ma K^n$. It follows that the vectors $K^\mu $ and $J^\mu$ are the Hamiltonian vector fields with the corresponding Hamiltonians $V$ and $M$, respectively. This expression makes the divergence-free property (\ref{divKJ}) rather manifest. It is also straightforward to see \begin{align} \label{volume} V=- \int _\Sigma K^\mu u_\mu {\rm d} \Sigma \,, \qquad M=- \int _\Sigma J^\mu u_\mu {\rm d} \Sigma \,, \end{align} where $\Sigma $ is a $(D-1)$-dimensional hypersurface without an interior boundary and $u_\mu$ is a future-pointing unit normal to $\Sigma$. This accomplishes our first aim of proving that $M$ is a quasilocal quantity associated with a locally conserved energy flux. The flux-based construction of the Misner-Sharp-type quasilocal mass illustrates its physical relevance more directly than the original geometric definition~(\ref{MSmass}) does. \subsection{Unified first law} The first law of thermodynamics is one of the fundamental laws in nature. Hence, the validity of the first law serves as a natural criterion for a well-defined mass. We can easily check from (\ref{variation}) that the following unified first law~\cite{hayward1998} holds, \begin{align} \label{U1st} {\rm d} M =A\psi_a{\rm d} x^a+P{\rm d} V\,, \end{align} where $P$ has been defined in (\ref{PhatT}), and \begin{align} \label{psiA} \psi^a \equiv &\, \hat T^a{} _bD^br\,, \qquad A \equiv V^\kappa _nr^n \,. \end{align} $A $ is the weighted area of the Einstein space and is related to the volume (\ref{volume}) as $D_a V=A D_a r$. As it turns out from the analysis of the next subsection, $\psi^a$ describes a momentum flux. Therefore, (\ref{U1st}) represents the physical circumstance that the energy balance is compensated by the work term $P{\rm d} V$ and the energy inflow provided by $\psi^a$. The unified first law allows us to interpret $M$ as the energy contained within the closed surface of areal radius $r$. \subsection{Birkhoff's theorem} Birkhoff's theorem plays a significant role when one analyzes the gravitational collapse of a spherical body in general relativity. One of the characteristic features of Lovelock gravity is that Birkhoff's theorem (and modified versions thereof) continues to hold, as discussed in \cite{Maeda:2007uu,Maeda:2011ii,Zegers:2005vx,Deser:2005gr,Ray:2015ava}. Let us consider the matter fields with $\hat T_{ab}=0$. Suppose that the first line of equation (\ref{Loveeq_tf}) is nonvanishing.\footnote{We shall not discuss in this paper the case where the first line of (\ref{Loveeq_tf}) identically vanishes. For such artfully chosen values of $a_m$ with a given Einstein space, there appears a solution for which the metric $g_{ab}$ is undetermined. For this class of metrics, we refer the reader to Refs.~\cite{Maeda:2007uu,Maeda:2011ii,Zegers:2005vx}.} This implies that \begin{align} \label{CKV} 0= D_a D_b r-\frac 12 D^2 r g_{ab} \,. \end{align} Assume that $D_a r$ does not vanish.
Then, $D_a r$ describes a conformal Killing field on $M^2$, which implies \begin{align} \label{} D_a K_b =\frac 12 D^2 r \epsilon_{ab} \,. \end{align} It follows that $K^a $ is a Killing vector on $M^2$. Thanks to the property $\nabla_\mu K_\nu=D_a K_b({\rm d} y^a)_\mu ({\rm d} y^b)_\nu$, $K^\mu$ describes a hypersurface-orthogonal Killing vector on $\ma M^D$, \begin{align} \label{} K_{[\mu }\nabla_\nu K_{\rho]} =0 \,, \qquad \nabla_{(\mu }K_{\nu)}=0 \,. \end{align} On account of the property (\ref{Knorm}), this means that the spacetime is static in the untrapped region. Hence, $\hat T_{ab}=0$ provides a sufficient condition for the validity of staticity (in the untrapped region). This also justifies the physical interpretation of $\psi^a$ as a flux current, since it vanishes in static spacetimes. In order to obtain the metric explicitly, let us concentrate on the vacuum spacetime ($P=p=0$) in what follows. Then, the variation formula (\ref{variation}) [or the unified first law (\ref{U1st})] immediately gives $M=\mu={\rm const}$. Let us introduce the coordinate $t$ by $K=\partial/\partial t$ and denote the norm of $K^\mu$ by $-f$, i.e., $K_\mu =-f \nabla_\mu t $. Under the condition $(Dr)^2\ne 0$ one can use $r $ as the coordinate on $M^2$ conjugate to $t$, and the general solution reads \begin{align} \label{staticsol} {\rm d} s_2^2 = -f(r){\rm d} t^2+\frac{{\rm d} r^2}{f(r)} \,, \end{align} where $f(r)$ satisfies \begin{align} \label{Mass_static} \mu = &V_n^\kappa \Biggl[\sum_{m=1}^k \frac{1}{2^{m+1}}\frac{a_m}m \frac{r^{n-2m+1}}{n-2m+1} W(m)-\frac{r^{n+1}}{n+1}\Lambda \nonumber \\ & +\sum_{m=1}^k \sum_{l=0}^{m-1}2^{l-m}a_m \frac{r^{n-2m+1}}{l+1}\Bi{m-1}{l} \left(\prod_{p=0}^{2l}(n-(2m-2)+p)\right) (\kappa-f(r))^{l+1}W(m-1-l)\Biggl]\,. \end{align} This is a simple Lovelock generalization of the Dotti-Gleiser solution~\cite{Dotti:2005rc} for Einstein-Gauss-Bonnet gravity. The case $r=r_0={\rm const.}$ also solves (\ref{CKV}). In this case, (\ref{Loveeq_ij}) implies that ${}^{(2)}R $ is constant; thus ($M^2 ,g_{ab}$) is a spacetime of constant curvature. The metric can therefore be written as \begin{align} \label{Nariai} {\rm d} s^2 =- (1 -\lambda x^2) {\rm d} t^2 +\frac{{\rm d} x^2}{1-\lambda x^2} +r_0^2 \gamma_{ij }{\rm d} x^i {\rm d} x^j \,, \end{align} where $r_0$ and $\lambda$ satisfy the following relations, \begin{align} \label{} \Lambda=& \sum_{m=1}^k\sum_{l=0}^{m-1} \frac{a_m 2^{l-m}}{r_0^{2m}} \Bi{m-1}{l} \frac{n-2m+1}{l+1} \kappa^{l+1} \left(\prod_{p=0}^{2l}(n-2m+2+p)\right)W(m-1-l) \nonumber \\& +\sum_{m=1}^k \frac{1}{2^{m+1}}\frac{a_mW(m)}{m r_0^{2m}} \,, \\ \Lambda=& \sum_{m=1}^k \frac{a_m }{2^{m-1}nr_0^{2(m-1)}} \frac{\lambda}{n-(2m-1)} \sum_{l=0}^{m-1} 2^l \Bi{m-1}{l}\left(\prod_{p=0}^{2l +1}(n-2m+1+p)\right) \kappa^l W(m-1-l) \nonumber \\ &+\sum_{m=1}^k \frac{a_m}{2^{m+1}m r_0^{2m}}\sum_{l=0}^m \frac{2^l(n-2m+2l) }{n(n-2m)} \Bi{m}{l} \left(\prod_{p=0}^{2l}(n-2m+p)\right)\kappa^l W(m-l)\,. \end{align} Equation (\ref{Nariai}) describes a Nariai-type metric ${\rm (A)dS}_2\times \ma K^n$. \subsection{Physical properties of quasilocal mass} In order to analyze the dynamics of the warped spacetime (\ref{metric}), it is advantageous to work in the double null coordinates \begin{align} {\rm d} s^2=-2e^{-f(u,v)}{\rm d} u{\rm d} v+r^2(u,v)\gamma_{ij}{\rm d} x^i{\rm d} x^j\,, \end{align} where the orientation is fixed to be $\epsilon_{uv}>0$.
Then we can see that the variation formula (\ref{variation}) may be cast into \begin{subequations} \label{variation_uv} \begin{align} \partial_u M&=\frac{1}{n}V^\kappa_ne^fr^{n+1}\left( T_{uv}\theta_{-}-T_{uu}\theta_{+}\right)\,, \\ \partial_v M&=\frac{1}{n}V^\kappa_ne^fr^{n+1}\left( T_{uv}\theta_{+}-T_{vv}\theta_{-}\right)\,, \end{align} \end{subequations} where $\theta_\pm$ describe the expansion rates for the null directions \begin{align} \theta_{+}&=n\frac{\partial_v r}{r}\,, \qquad \theta_{-}=n\frac{\partial_u r}{r} \,. \end{align} The variation formula (\ref{variation_uv}) involves neither the Lovelock coupling coefficients nor explicit information on the Weyl tensor of the Einstein space. This fact is advantageous for discussing the monotonicity property of the quasilocal mass as described below. We fix the spacetime orientation by declaring that the future-directed null vector $\partial/\partial v$ (resp. $\partial/\partial u$) is outgoing (resp. ingoing). Namely, $\theta_+>0$ and $\theta_-<0$ hold on an untrapped surface. Remark that each value of $\theta_\pm$ is not an invariant quantity because of the remaining freedom of rescaling $u\to U(u), v\to V(v)$. Instead, $e^f \theta_+\theta_-$ enjoys an invariant physical meaning characterizing the trapping nature. In order to extract physically reasonable results, we impose the energy conditions on the matter fields. The null energy condition for the matter field implies \begin{align} T_{uu}\geq 0 ,\ \ \ T_{vv}\geq 0\,, \end{align} while the dominant energy condition for the matter field implies \begin{align} T_{uu}\geq 0 ,\ \ \ T_{vv}\geq 0,\ \ \ T_{uv}\geq 0\,. \end{align} Note that only the information of the radial directions is encoded in these inequalities. Let us now establish that our quasilocal mass exhibits a monotonicity property, which is desirable for $M$ as a physically reasonable mass function. \begin{Prop}[Monotonicity] If the dominant energy condition holds, the quasilocal mass is nondecreasing along outgoing null or spacelike directions on an untrapped surface. \end{Prop} The proof follows immediately from (\ref{variation_uv}).\hfill $\Box$ Let us next move on to the positivity claim. To this end, let us first define the regular center. The central point is said to be a regular center if \begin{align} \kappa -\left( Dr \right)^2 \sim Cr^2 \end{align} holds in its neighborhood, where $C$ is a nonvanishing constant. \begin{Prop}[Positivity-I] If the dominant energy condition holds and the spacetime has a regular center which is surrounded by untrapped surfaces, the quasilocal mass is non-negative. \end{Prop} Suppose that $W(k)$ is nonvanishing. Around the regular center, the quasilocal mass behaves as \begin{align} M\simeq V^{\kappa}_n \frac{1}{2^{k+1}}\frac{a_k}{k}\frac{r^{n-2k+1}}{(n-2k+1)}W(k)\,. \label{M_center} \end{align} This implies \begin{align} & \partial_v M\simeq V^{\kappa}_n \frac{1}{2^{k+1}}\frac{a_k}{k}\frac{r^{n-2k+1}\theta_{+}}{n}W(k) \,, \label{Mv_center}\\ & \partial_u M\simeq V^{\kappa}_n \frac{1}{2^{k+1}}\frac{a_k}{k}\frac{r^{n-2k+1}\theta_{-}}{n}W(k)\,. \end{align} Combining the monotonicity property with (\ref{Mv_center}), the dominant energy condition requires \begin{align} a_kW(k)>0 \,. \end{align} This proves the positivity of the quasilocal mass around the regular center; the monotonicity property then establishes the claim, as we desired.\hfill $\Box$ It is worth noting that for the $\kappa =1$ case, the regular center is always surrounded by untrapped surfaces, while this is not the case for $\kappa =-1$.
Remark also that the Misner-Sharp mass behaves as $M\propto r^{n+1}$ around the regular center for the case with $\ma K^n$ being the maximally symmetric space, whereas $M\propto r^{n-2k+1}$ for the present case. Next, let us consider the case in which the spatial hypersurface admits a marginal surface as its inner boundary. On the marginal surface we have $(Dr)^2=0$; hence, the following version of the positivity holds. \begin{Prop}[Positivity-II] Suppose the dominant energy condition and $\Lambda \leq 0$. If the spacelike hypersurface admits a marginal surface as its inner boundary, then the quasilocal mass admits a positive lower bound, provided that the Lovelock coefficients and Weyl tensor satisfy the following conditions for all $m$, \begin{align} &a_m\left[ \sum_{l=0}^{m-1}\frac{2^{l+1}}{l+1}\Bi{m-1}{l}\left( \prod_{p=0}^{2l}(n-(2m-2)+p) \right)\kappa^{l+1}W(m-1-l)+\frac{W(m)}{m(n-2m+1)}\right] \geq 0\,. \label{aWcond} \end{align} \end{Prop} This is clear from the monotonicity and the definition of the quasilocal mass.\hfill $\Box$ We obtained a condition (\ref{aWcond}) under which the positivity of the mass holds. Inspired by string theory, we may physically fix some of the Lovelock coefficients. However, it appears that the sign of the Weyl term of the Einstein space is not controllable. It would be better if we had a clearer physical and mathematical understanding of (\ref{aWcond}). We leave this for future investigations. To conclude this section, let us make a brief comment on the asymptotic behavior of the quasilocal mass. If the Einstein space is a round sphere, the metric falls into the standard definition of asymptotic flatness, and it would be meaningful to ask whether the asymptotic value of the quasilocal mass converges to the ADM mass, as argued in \cite{Maeda:2007uu,Maeda:2011ii}. If the Einstein space is not maximally symmetric, the metric exhibits a slow falloff and does not allow asymptotically flat/AdS solutions in the standard sense. For this reason, we shall not attempt to discuss the asymptotics of the quasilocal mass. \section{Trapping horizons} \label{sec:trapping} The concept of the event horizon is not of practical use because the identification of its locus requires the knowledge of the entire future evolution of Einstein's equations. A more convenient way to characterize strong gravity locally is the trapping horizon, which was originally proposed by Hayward~\cite{hayward1994}. In this section, we address some properties of trapping horizons in the present settings. {\it Trapping horizons} are $(n+1)$-dimensional hypersurfaces foliated by $n$-dimensional marginal surfaces on which $\theta_+\theta_-=0$ is satisfied. Set $\theta_+=0$ on the marginal surface in what follows. Then the marginal surface is said to be {\it future} for $\theta_-<0$, {\it past} for $\theta_->0$, {\it outer} for $\partial_u \theta_+<0$ and {\it inner} for $\partial_u \theta_+>0$. By definition, the notion of trapping horizons is quasilocal and does not make any reference to the asymptotic structure. One may deduce intuitively that the future-outer trapping horizons are of the most relevance for a local description of dynamical black holes, since inside the trapping horizon both the outgoing and ingoing rays are converging. In the following discussion, we shall be mainly interested in (future-)outer trapping horizons.
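A simple illustration is provided by the static solution (\ref{staticsol}) (a sketch under the assumptions of that subsection): there $(Dr)^2=f(r)$, so the marginal surfaces sit at the roots of $f(r_h)=0$, the trapping horizon degenerates to the Killing horizon, and the outer condition corresponds to $f'(r_h)>0$.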
According to Proposition 12.2.4 of \cite{Wald}, trapped regions cannot be causally connected to null infinity, provided the null convergence condition and the cosmic censorship are valid. Therefore, the existence of trapped regions implies the existence of an event horizon under physically reasonable conditions. The properties of trapping horizons have been analyzed in detail for the Gauss-Bonnet gravity \cite{Nozawa:2007vq} and for the Lovelock gravity \cite{Maeda:2011ii} with the maximally symmetric horizons. In order for the trapping horizons to inherit the properties familiar from general relativity, we have to assume a certain inequality involving the Lovelock coefficients and the Weyl tensor of the Einstein space. The next proposition specifies the causal character of the trapping horizon. The proof is the same as in Ref. \cite{Nozawa:2007vq}. \begin{Prop}[Signature law] \label{prop:sign} Under the null energy condition the outer trapping horizon is nontimelike, provided that the following condition on the Weyl tensor and the Lovelock coefficients holds for all $m$, \begin{align} &a_m\left[ \sum_{l=0}^{m-1}2^{l+1}\Bi{m-1}{l}\left( \prod_{p=0}^{2l}(n-(2m-2)+p) \right)\kappa^{l}W(m-1-l)\right] \geq 0\,. \label{cond_sig} \end{align} \end{Prop} Let $\xi=\xi^v\partial_v+\xi ^u \partial_u$ be a generator of the outer trapping horizon at which $\theta_+=0 $ and $\partial_u \theta_+<0$. Since the trapping horizon is foliated by marginal surfaces, we have \begin{align} \label{theta_lie} \ma L_\xi \theta_+=\xi^v\partial_v \theta_++\xi^u \partial_u \theta_+=0\,. \end{align} Evaluating the ($v,v$) component of (\ref{Loveeq_tf}) at the trapping horizon, we get \begin{align} \label{} T_{vv}=-\sum_{m=1}^k\sum_{l=0}^{m-1}\frac{a_m 2^{l-m+1}}{nr^{2m-2}}\Bi{m-1}{l} \left(\prod_{p=0}^{2l}(n-2m+2+p)\right)\kappa ^l W(m-1-l) \partial_v\theta_+ \,. \end{align} The null energy condition and the inequality (\ref{cond_sig}) thus assure $\partial_v\theta_+ <0$. Hence (\ref{theta_lie}) implies $\xi^u\xi^v\le 0$. A timelike generator would instead satisfy $\xi^u\xi^v>0$, so the trapping horizon cannot be timelike, and we arrive at the claim.\hfill $\Box$ The most interesting property of the event horizon of a black hole is the area increasing theorem (Proposition 12.2.6 of \cite{Wald}). It turns out that a similar property holds for the trapping horizon. \begin{Prop}[Area law] \label{prop:area} Under the null energy condition and the conditions in Proposition \ref{prop:sign}, the area of the outer trapping horizon, $A(r)=V^{\kappa}_{n}r^n$, increases along its generator. \end{Prop} The proof directly follows from \begin{align} \mathcal{L}_{\xi}A&=nr^{n-1}V^{\kappa}_{n} \left( \xi^u\partial_ur+\xi^v\partial_vr\right) \notag \\ &=r^{n}V^{\kappa}_n \theta_{-}\xi^u>0\,, \end{align} where we have used $\xi^v>0$ for the nonspacelike (spacelike) trapping horizon to be future-pointing (outgoing), hence $\xi^u\le 0$ from the signature law. This completes the proof.\hfill $\Box$ \subsection{Dynamics of trapping horizon} In general relativity, the trapping horizons display laws analogous to ordinary black-hole thermodynamics even in dynamical circumstances~\cite{hayward1994}. Since the unified first law (\ref{U1st}) represents the energy balance, it can be used to deduce the thermodynamic first law for a trapping horizon.
One can recast (\ref{U1st}) into \begin{align} A\psi_a =&D_aM\notag \\ &+\frac{V^\kappa_n}{2}r^nD_ar\sum_{m=1}^k \sum_{l =0}^{m-1}\frac{a_m2^{l-m+1}}{r^{2m-2}}\left( \begin{matrix} m-1 \\ l \end{matrix}\right) \left( \frac{D^2r}{r}-{(n-2m+1)}\frac{(\kappa -(Dr)^2)}{(l +1)r^2}\right)\notag \\ &\qquad\qquad\qquad\qquad\times\left(\prod_{p=0}^{2l }(n-(2m-2)+p)\right)(\kappa -(Dr)^2)^{l}W(m-1-l) \notag \\ &-\frac{V^\kappa_n}{2}r^nD_ar\sum_{m=1}^k\frac{1}{2^{m}}\frac{a_m}{m}\frac{W(m)}{r^{2m}}+V^\kappa_nr^nD_ar\Lambda \,. \end{align} Let $\xi^a $ be a generator of the trapping horizon. Since $(Dr)^2=0$ holds along the trapping horizon, we get \begin{align} A\psi_a\xi^a=\kappa_{\rm TH}V^{\kappa}_n\xi^aD_a\Bigg[ \sum_{m=1}^k \sum_{l =0}^{m-1}a_m2^{l-m+1}\frac{r^{n-2m+2}}{n-2m+2}\left( \begin{matrix} m-1 \\ l \end{matrix}\right) \prod_{p=0}^{2l }(n-(2m-2)+p)\kappa^{l}W(m-1-l) \Bigg] \,, \label{Apsi} \end{align} where we have defined \begin{align} \kappa_{\rm TH}\equiv \left.\frac{1}{2}D^2r \right|_{r_h} \,. \label{kappa_TH} \end{align} One can interpret (\ref{kappa_TH}) as a surface gravity of a trapping horizon, since it fulfills~\cite{hayward1998} \begin{align} \label{} K^a D_{[a }K_{b]} =\kappa_{\rm TH} K_b \,, \end{align} where the equality is evaluated on the trapping horizon. Note that this equation resembles the equation defining the surface gravity of a Killing horizon~\cite{Wald}. It deserves emphasis that the surface gravity is not constant over the trapping horizon, as can be inferred from the Vaidya-type radiating solution (see Appendix \ref{app:matter}). The unified first law reads $\delta_\xi M=Ai_\xi \psi+P\delta_\xi V$; hence the $Ai_\xi \psi$ term should be identified with the $T\delta_\xi S$ term. Assuming that the temperature is related to $\kappa_{\rm TH} $ by $T=\kappa_{\rm TH}/(2\pi)$, we can identify the entropy of a trapping horizon as \begin{align} S=2\pi V^\kappa_n\Bigg[ \sum_{m=1}^k \sum_{l =0}^{m-1}a_m2^{l-m+1}\frac{r_h^{n-2m+2}}{n-2m+2}\left( \begin{matrix} m-1 \\ l \end{matrix}\right) \left(\prod_{p=0}^{2l }(n-(2m-2)+p)\right) \kappa^{l}W(m-1-l) \Bigg] \,. \end{align} Eq. (\ref{Apsi}) also justifies the physical interpretation of $\psi^a$ as a flux current, since the change of the trapping horizon entropy is responsible for the flux through the horizon. In the general relativistic case, the entropy is proportional to the area of the trapping horizon. Lovelock black holes therefore admit corrections arising from the higher-curvature terms~\cite{Myers:1988ze}. The highest term $l=m-1$ is also present for the maximally symmetric horizons, while the other terms represent the contributions specific to the Einstein horizon. Now that the entropy of a trapping horizon has been obtained, we move on to prove the entropy increasing law. This corresponds to the second law of black hole dynamics. \begin{Prop}[Entropy law] Under the null energy condition and the conditions in Proposition \ref{prop:sign}, the entropy of the outer trapping horizon increases along its generator. \end{Prop} The variation of the entropy along the generator gives \begin{align} \mathcal{L}_{\xi}S&= 2\pi V^\kappa_n\Bigg[ \sum_{m=1}^k \sum_{l =0}^{m-1}\frac{a_m}{n}2^{l-m+1}r_h^{n-2m+2}\left( \begin{matrix} m-1 \\ l \end{matrix}\right) \left(\prod_{p=0}^{2l }(n-(2m-2)+p)\right) \kappa^{l}W(m-1-l) \Bigg] \theta_{-}\xi^u\,.
\end{align} The proof follows immediately from the same argument as the area theorem.\hfill $\Box$ \subsection{Wald's entropy} In the previous subsection, we derived the entropy of a trapping horizon by requiring the first law of thermodynamics for the trapping horizon. Here we reproduce it by Wald's prescription for the Killing horizons~\cite{Wald:1993nt,iyerwald1994}. Suppose that the metric admits a nondegenerate, bifurcate Killing horizon $r=r_h$ with a bifurcation surface $B$. Wald's entropy is given by \begin{align} \label{} S_W&=-2\pi\int \left( \frac{\partial \mathcal{L}}{\partial R_{\mu \nu \rho \lambda}}\right) \epsilon_{\mu \nu}\epsilon_{\rho \lambda}{\rm d} V^\kappa_n \,, \end{align} where $\epsilon_{\mu\nu} $ is the binormal to $B$ given by (\ref{Kodama}). Evaluating this for the Lovelock Lagrangian, we find \begin{align} S_W &=-2\pi\int \sum_{m=1}^{k}\frac{1}{2^m}\frac{a_m}{m}\left( \frac{\partial\delta^{\mu_1 \mu_2 \dots \mu_{2m-1}\mu_{2m}}_{{\nu_1 \nu_2 \dots \nu_{2m-1}\nu_{2m}}}R_{\mu_1 \mu_2}{}^{\nu_1 \nu_2}\dots R_{\mu_{2m-1}\mu_{2m}}{}^{\nu_{2m-1}\nu_{2m}}}{\partial R_{\mu \nu \rho \lambda}}\right) \epsilon_{\mu \nu}\epsilon_{\rho \lambda}{\rm d} V^\kappa_n\notag \\ &=-2\pi\int \sum_{m=1}^{k}\frac{1}{2^m}a_m\left( \delta^{ac\mu_3 \mu_4 \dots \mu_{2m-1}\mu_{2m}}_{{bd\nu_3 \nu_4 \dots \nu_{2m-1}\nu_{2m}}}R_{\mu_3 \mu_4}{}^{\nu_3 \nu_4}\dots R_{\mu_{2m-1}\mu_{2m}}{}^{\nu_{2m-1}\nu_{2m}}\right) \epsilon_{a c}\epsilon^{b d}{\rm d} V^\kappa_n\notag \\ &=2\pi V^\kappa_n\Bigg[ \sum_{m=1}^k \sum_{\ell =0}^{m-1}a_m2^{\ell-m+1}\frac{r_h^{n-2m+2}}{n-2m+2}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell}W(m-1-\ell) \Bigg] . \end{align} Therefore, we can see that the entropy we defined from the quasilocal mass coincides with Wald's entropy. We have derived the expression of the entropy for the Killing horizons and have seen that it coincides with the stationary limit of the trapping horizon entropy. One can alternatively utilize the Kodama vector instead of the generator of the Killing horizon to derive the entropy directly for the trapping horizon, as demonstrated in \cite{Hayward:1998ee}. \begin{comment} \subsection{Heat capacity} The quasilocal mass at the Killing horizon is \begin{align} M_{LL}{}_h=V^k_{n}&\Bigg[ \sum_{m=1}^k\frac{1}{2^{m+1}}\frac{a_m}{m}\frac{r_h^{n-2m+1}}{(n-2m+1)}W(m)-\frac{r_h^{n+1}}{(n+1)}\Lambda \notag \\ &+\sum_{m=1}^k \sum_{\ell =0}^{m-1}a_m2^{\ell-m}\frac{r_h^{n-2m+1}}{\ell +1}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell +1}W(m-1-\ell) \Bigg] , \end{align} where $r_h$ is defined by $(Dr)^2=0$. Therefore, $dM_{LL}{}_h/dr_h$ is evaluated as \begin{align} \frac{dM_{LL}{}_h}{dr_h}=V^k_{n}&\Bigg[ \sum_{m=1}^k\frac{1}{2^{m+1}}\frac{a_m}{m}r_h^{n-2m}W(m)-r_h^{n}\Lambda \notag \\ &+\sum_{m=1}^k \sum_{\ell =0}^{m-1}a_m2^{\ell-m}(n-2m+1)\frac{r_h^{n-2m}}{\ell +1}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell +1}W(m-1-\ell) \Bigg] .
\end{align} The temperature, $T$, of the black hole is evaluated as \begin{align} T&=\frac{1}{2}D^2r\\ &=\frac{\sum_{m=1}^k\frac{1}{2^{m+1}}\frac{a_m}{m}\frac{W(m)}{r_h^{2m}} -\Lambda+\sum_{m=1}^k \sum_{\ell =0}^{m-1}\frac{a_m2^{\ell-m}}{r_h^{2m}(\ell +1)}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell +1}W(m-1-\ell) }{ \sum_{m=1}^k \sum_{\ell =0}^{m-1}\frac{a_m2^{\ell-m+1}}{r_h^{2m-1}}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell +1}W(m-1-\ell) } \end{align} The heat capacity, $C$, is defined by \begin{align} C\equiv \frac{dM}{dT}=\frac{\frac{dM}{dr_h}}{\frac{dT}{dr_h}} \end{align} \subsection{$P-V$ criticality} In order to discuss thermodynamical properties, especially the $P-V$ criticality, let us define the pressure as follows: \begin{align} P=-\Lambda . \end{align} The equation of state is \begin{align} P&=\sum_{m=1}^k \sum_{\ell =0}^{m-1}\frac{a_m2^{\ell-m+1}}{r_h{}^{2m}}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \left(r_hT-\frac{\kappa (n-(2m-1))}{2(\ell +1)}\right)\prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell}W(m-1-\ell) \notag \\ &-\sum_{m=1}^k\frac{1}{2^{m+1}}\frac{a_m}{m}\frac{W(m)}{r_h{}^{2m}} \end{align} Here we introduce the dimensionless quantities as follows: \begin{align} r_h&=va_k{}^{\frac{1}{2(k-1)}}\\ a_m&=\tilde{a}_{m}a_k{}^{\frac{m-1}{(k-1)}}\\ P&=pa_k{}^{\frac{-1}{(k-1)}}\\ T&=\frac{t}{n}a_k{}^{\frac{-1}{2(k-1)}} \end{align} By using these quantities, we can re-express the equation of state as \begin{align} p&=\sum_{m=1}^k \sum_{\ell =0}^{m-1}\tilde{a}_m2^{\ell-m+1}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \left(\frac{t}{n}\frac{1}{v{}^{2m-1}}-\frac{\kappa (n-(2m-1))}{2(\ell +1)v{}^{2m}}\right)\prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell}W(m-1-\ell) \notag \\ &-\sum_{m=1}^k\frac{1}{2^{m+1}}\frac{\tilde{a}_m}{m}\frac{W(m)}{v{}^{2m}} \end{align} The critical point occurs when the following conditions are satisfied: \begin{align} \frac{\partial p}{\partial v}&=0,\\ \frac{\partial^2 p}{\partial v^2}&=0.
\end{align} The derivatives $\frac{\partial p}{\partial v}$ and $\frac{\partial^2 p}{\partial v^2}$ are \begin{align} \frac{\partial p}{\partial v}&=\sum_{m=1}^k \sum_{\ell =0}^{m-1}\tilde{a}_m2^{\ell-m+1}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \left(-\frac{t}{n}\frac{(2m-1)}{v{}^{2m}}+m\frac{\kappa (n-(2m-1))}{(\ell +1)v{}^{2m+1}}\right)\prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell}W(m-1-\ell) \notag \\ &+\sum_{m=1}^k\frac{\tilde{a}_m}{2^m}\frac{W(m)}{v{}^{2m+1}} \\ \frac{\partial^2 p}{\partial v^2}&=\sum_{m=1}^k \sum_{\ell =0}^{m-1}\tilde{a}_m2^{\ell-m+1}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \left(\frac{t}{n}\frac{2m(2m-1)}{v{}^{2m+1}}-m(2m+1)\frac{\kappa (n-(2m-1))}{(\ell +1)v{}^{2m+2}}\right)\prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell}W(m-1-\ell) \notag \\ &-\sum_{m=1}^k(2m+1)\frac{\tilde{a}_m}{2^m}\frac{W(m)}{v{}^{2m+2}} \end{align} Therefore the critical temperature $t_c$ is evaluated as \begin{align} t_c&=\frac{n\sum_{m=1}^k \sum_{\ell =0}^{m-1}\tilde{a}_m2^{\ell-m+1}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \left(m\frac{\kappa (n-(2m-1))}{(\ell +1)v_c{}^{2m+1}}\right)\prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell}W(m-1-\ell)+\sum_{m=1}^k\frac{\tilde{a}_m}{2^m}\frac{W(m)}{v_c{}^{2m+1}}} {\sum_{m=1}^k \sum_{\ell =0}^{m-1}\tilde{a}_m2^{\ell-m+1}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \left( \frac{(2m-1)}{v_c{}^{2m}}\right)\prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell}W(m-1-\ell)} \end{align} The critical volume $v_c$ is a solution of the following polynomial equation: \begin{align} 0&=\sum_{m=1}^k \sum_{\ell =0}^{m-1}\tilde{a}_m2^{\ell-m+1}\left( \begin{matrix} m-1 \\ \ell \end{matrix}\right) \left(\frac{t_c}{n}\frac{2m(2m-1)}{v_c{}^{2m+1}}-m(2m+1)\frac{\kappa (n-(2m-1))}{(\ell +1)v_c{}^{2m+2}}\right)\prod_{p=0}^{2\ell }(n-(2m-2)+p)\kappa^{\ell}W(m-1-\ell) \notag \\ &-\sum_{m=1}^k(2m+1)\frac{\tilde{a}_m}{2^m}\frac{W(m)}{v_c{}^{2m+2}} \end{align} \subsubsection{Example: Dotti-Gleiser solution} We first consider the Dotti-Gleiser solution case.
Then the equation of state is \begin{align} p=&\tilde{a}_1 \left(\frac{t}{v}-\frac{\kappa n(n-1)}{2v{}^{2}}\right) +\tilde{a}_2\left(\frac{t}{v^3}-\frac{\kappa n(n-3)}{4v{}^{4}}\right)(n-1)(n-2)\kappa -\frac{1}{16}\tilde{a}_2\frac{W(2)}{v{}^{4}} \end{align} Then $\frac{\partial p}{\partial v}$ and $\frac{\partial^2 p}{\partial v^2}$ are \begin{align} \frac{\partial p}{\partial v}&=\tilde{a}_1 \left(-\frac{t}{v^2}+\frac{\kappa n(n-1)}{v{}^{3}}\right) +\tilde{a}_2\left(-3\frac{t}{v^4}+\frac{\kappa n(n-3)}{v{}^{5}}\right)(n-1)(n-2)\kappa +\frac{1}{4}\tilde{a}_2\frac{W(2)}{v{}^{5}} \\ \frac{\partial^2 p}{\partial v^2}&=\tilde{a}_1 \left(2\frac{t}{v^3}-3\frac{\kappa n(n-1)}{v{}^{4}}\right) +\tilde{a}_2\left(12\frac{t}{v^5}-5\frac{\kappa n(n-3)}{v{}^{6}}\right)(n-1)(n-2)\kappa -\frac{5}{4}\tilde{a}_2\frac{W(2)}{v{}^{6}} \end{align} Therefore $t_c$ and $v_c$ are evaluated as \begin{align} t_c =\frac{\left( \tilde{a}_1 \kappa n(n-1)v_c{}^2+\tilde{a}_2 \kappa n(n-3)(n-1)(n-2)\kappa +\frac{1}{4}\tilde{a}_2W(2)\right) }{\left(\tilde{a}_1{v_c{}^3} +3(n-1)(n-2)\kappa\tilde{a}_2{v_c}\right)} \end{align} \begin{align} & \left(3\tilde{a}_1\frac{\kappa n(n-1)}{v_c{}^{4}}+5\tilde{a}_2\frac{\kappa n(n-1)(n-2)(n-3)\kappa }{v_c{}^{6}}\right)+\frac{5}{4}\tilde{a}_2\frac{W(2)}{v_c{}^{6}} \notag \\ &= 2\frac{\left( \tilde{a}_1 \kappa n(n-1)v_c{}^2+\tilde{a}_2 \kappa n(n-3)(n-1)(n-2)\kappa +\frac{1}{4}\tilde{a}_2W(2)\right) }{\left(\tilde{a}_1{v_c{}^3} +3(n-1)(n-2)\kappa\tilde{a}_2{v_c}\right)}\left( \frac{\tilde{a}_1}{v_c{}^3} + 6\frac{\tilde{a}_2}{v_c{}^5}(n-1)(n-2)\kappa \right) \end{align} \end{comment} \section{Final remarks} \label{sec:conclusion} In this paper we explored various properties of spacetimes that are the warped product of a two-dimensional Lorentzian spacetime and an $n$-dimensional Einstein space. Assuming the form of the stress-energy tensor to be (\ref{SETensor}), we revealed that the Weyl curvature of the Einstein space must obey certain conditions (\ref{Wcond}). This assumption comes not only from simplicity, but also from the requirement that the metric (\ref{metric}) admits a vacuum solution. Some nontrivial examples are given in Appendix~\ref{app:ex}. We found that all the isotropy irreducible spaces fulfill this property. Our study considerably enlarges the solution space of Lovelock gravity. One immediate consequence of replacing the $n$-dimensional maximally symmetric subspace by the Einstein space is that the metric shows fall-off behavior different from the standard one. This means that the $M-r_h$ diagram for the static black hole is much more complicated than that in \cite{Whitt:1988ax}. A possible future work in this direction is to examine the $P-V$ criticality of a black hole with Einstein horizons and to expose the thermodynamic phase structure. We then proceeded to define a quasilocal mass and explored its physical properties, by extending the previous works~\cite{Maeda:2007uu,Maeda:2011ii}. The rederivation of the quasilocal mass in terms of the Kodama flux is desirable for its physical interpretation. Under certain conditions on the Lovelock coefficients and the Weyl curvature of the Einstein space, it turns out that the quasilocal mass shares the same behavior as that in general relativity \cite{Hayward:1994bu}. This implies that the Misner-Sharp-type quasilocal mass continues to be useful also in Lovelock gravity and can be utilized to obtain a coherent picture of spacetime dynamics as exemplified by gravitational collapse.
We hope to come back to this point for a deeper mathematical and physical understanding of the conditions (\ref{aWcond}) and (\ref{cond_sig}). Our formulation of Lovelock solutions with the warped $n$-dimensional Einstein space is very robust and has plenty of potential applications. We expect that the geometrodynamics approach to the Hamiltonian formulation of Lovelock black holes~\cite{Kunstatter:2012kx} can be extended to the case with Einstein horizons. It is also interesting to consider the effects of a nonlinear Maxwell field~\cite{Maeda:2008ha} and higher-rank $p$-form fields~\cite{Bardoux:2010sq}, which would display an intriguing thermodynamic phase structure through the interplay with the Weyl curvature of Einstein spaces. One may also explore the generalization of C functions~\cite{Anber:2008js} and the maximal entropy principle~\cite{Cao:2014fka} to the present context. \section*{Acknowledgements} We are grateful to Hideki Maeda and Marcello Ortaggio for useful comments. S.O. was supported by a JSPS Grant-in-Aid for Scientific Research No. 25-9997. This work was partly supported by JSPS and INFN.
\section{Introduction} \label{sec:intro} Gamma-ray bursts \citep[GRBs;][]{1973ApJ...182L..85K} are sudden ($\sim$ seconds) flashes of gamma-rays with energy $\sim100~\mathrm{keV}$ that arrive at the Earth several times a day \citep[e.g.,][]{1995ARA&A..33..415F}. GRBs are classified into two categories by their duration: long GRBs last longer than $2$ seconds and usually consist of many pulses, with a total gamma-ray energy of about $10^{51}~\mathrm{erg}$ \citep{2006ARA&A..44..507W} after correction for the relativistic beaming effect, whereas short GRBs last less than $2$ seconds and consist of a single pulse with a beaming-corrected total energy of about $10^{50}~\mathrm{erg}$ \citep{2015ApJ...815..102F}. Short GRBs are speculated to be produced by mergers of two compact objects, either two neutron stars (NSs) or NS - black hole (BH) binaries. In the merger of a binary neutron star (BNS) system, the tidal force of the more massive NS disrupts the other, and the debris of the less massive NS forms a massive accretion disk. This speculation was beautifully confirmed by the observation of an association between a short GRB (GRB 170817A) and the gravitational wave burst detected by LIGO and VIRGO \citep{2017ApJ...850L..35A}. Long GRBs, on the other hand, had long been surmised to be associated with the core collapse of a massive star. This conjecture was established by the discovery of a GRB in April 1998 (GRB980425), which was connected with the unusual supernova 1998bw \citep{1998Natur.395..670G}. The successive studies of supernovae (SNe) accompanied by GRBs suggest that long GRBs are associated with SNe of type Ic that are $30 - 50$ times more energetic than normal SNe, the so-called hypernovae \citep[HNe;][]{1998Natur.395..672I,2013ARA&A..51..457N}. Such extreme explosion energy of HNe cannot be explained by the conventional theory of SNe. This is one of the major motivations of our study. Meanwhile \citet{1993ApJ...405..273W} proposed the "collapsar model" of GRBs, in which the entire mass in the core cannot fall down directly to a newly born black hole (BH) or neutron star (NS) but instead forms an accretion disk, if the specific angular momentum of the core is higher than a critical value of $2\sqrt{3}GM/c\sim1.5\times 10^{16}\,(M/\mathrm{M}_\odot) ~\mathrm{cm}^{2}\,\mathrm{s}^{-1}$, where $M$ is the mass of the central compact object. The accretion disk is powered for a long period of time by the collapsing star and radiates thermal emission via viscous dissipation \citep{shakura+sunyaev:1973AA}. For hyper-critical accretion rates, $\dot{M} \gg L_\mathrm{Edd}/c^{2}$ where $L_\mathrm{Edd}$ is the Eddington luminosity \citep{1979rpa..book.....R}, the optical depth becomes too high for photons to escape, and therefore neutrino cooling takes over from radiative cooling \citep{1992ApJ...395L..83N}. This regime is what we refer to as a neutrino driven accretion flow (NDAF). \citet{1999ApJ...524..262M} performed hydrodynamic simulations of the core-collapse. The resultant disk turns out to be optically thick to neutrinos and evolves into an NDAF disk. They showed that the neutrino annihilation at the rotation axis of the disk produces a fireball that launches bipolar, mildly relativistic jets. They also discussed that magnetohydrodynamical (MHD) processes might produce bipolar jets in a similar manner to the neutrino annihilation, if magnetic energy dissipation took place in and above the disk. Since the kinetic luminosity of the jets is a few $10^{51}~\mathrm{erg}\,\mathrm{s}^{-1}$, they speculated that the jets can produce GRBs.
The pioneering magneto-hydrodynamic (MHD) simulations of magnetically driven jets from accretion disks were performed by \citet{1985Natur.317..699U} and \citet{1985PASJ...37...31S,1986PASJ...38..631S}. They found that the jets are accelerated by torsional Alfv\'en waves propagating along magnetic field lines anchored to the disks \citep[the so-called "sweeping magnetic twist mechanism"; see also][]{2001Sci...291...84M}. This mechanism requires a strong poloidal magnetic field structure above the disk \citep[also known as the "beads on wires mechanism";][]{1982MNRAS.199..883B}. However, \citet{1990ApJ...350..295S} postulated another kind of magnetically driven jet, in which toroidal magnetic fields are dominant in the jet \citep[see also][]{1990PASJ...42..793F}. Such toroidal magnetic fields can be generated inside the accretion disk as a result of the magneto-rotational instability \citep[MRI;][]{1991ApJ...376..214B}. In other words, the presence of magnetic fields in accretion disks is not only the source of the turbulence and viscosity causing structure deformation but also the driver of the formation of mega-parsec-scale structures such as astrophysical jets, radio lobes, and cocoons, the largest structures in the Universe \citep{1996A&ARv...7....1C,1997ARA&A..35..607Z, 2014SSRv..183..405H,2016A&ARv..24...10T}. Through these examples, one obtains the notion of the magnetic fields in the Universe as a stimulus or catalyst of the Universe's structure formation. This idea was demonstrated for the first time by \citet{2002PASJ...54..121K}. Later, \citet{2004ApJ...605..307K} found the formation of a self-collimated magnetic field structure emanating from the magnetized accretion disk \citep[the so-called "magnetic tower";][]{1996MNRAS.279..389L}. For the collapsar model, the emergence of a magnetic tower from the NDAF is promising \citep{2007ApJ...669..546U} because a large-scale toroidal magnetic field of $B=10^{15}~\mathrm{G}$ or beyond can be generated in a rapidly rotating core-collapsing star \citep{2003ApJ...584..954A,2015Natur.528..376M}. According to \citet{2013ARA&A..51..457N}, there are two branches of supernovae whose progenitors are main-sequence stars with masses of $20 - 25~\mathrm{M}_\odot$, depending on their core angular momentum: one is an energetic, bright "hypernova" branch for a fast-rotating core, and the other is a faint, low-energy "failed supernova" branch for a slowly rotating core. It has been suggested that the difference between the former and the latter is caused by the MHD processes mentioned above. In both short and long GRBs, the central engine is an NDAF disk. \citet{1999ApJ...518..356P} investigated the energy extraction from both NDAFs and rotating BHs \citep[see also][]{2002ApJ...579..706D,2013ApJ...766...31K}. Under the assumption that the black hole spin is moderately fast and the magnetic pressure near the horizon is limited by the inner disk pressure of NDAFs, they discussed that the Blandford-Znajek (BZ) mechanism \citep{1977MNRAS.179..433B} exceeds the energy deposition rate expected from neutrino pair-annihilation above the NDAF. However, neutrino pair-annihilation is not the only mechanism for extracting energy from the NDAF disks and, more importantly, MHD processes other than the BZ mechanism may play an important role. For example, the presence of magnetic fields in the NDAF disks could also explain repeatable short-duration variability in long GRBs in the same manner as the strong variability of gamma-rays in blazars \citep{2020MNRAS.493.2229C}.
Further supporting evidence that the magnetic fields in the accretion disk and its jets trigger electron acceleration may be seen in the gamma-ray burst emission triggered by the NS - NS collision \citep{Abbott:2017it}, which was accompanied by simultaneous gravitational wave emission. Such gamma-ray emission was predicted via wakefield acceleration by \citet{Takahashi2000} \citep[see also][]{2002PhRvL..89p1101C}. Not to mention that the system of a black hole and a massive accretion disk shows a huge amount of energy emission via a variety of channels, such as radio-waves, infrared and optical emissions, X-rays and gamma-rays, ultra high energy cosmic rays (UHECR), neutrinos, and even gravitational waves. Finally, even if the appropriate system of a black hole and an accretion disk is given, the greatest uncertainty is the acceleration mechanism of the high energy particles that are responsible for the production of high energy emissions in the jets. We will propose an alternative mechanism for the origin of high-energy emissions. Recently, \citet{ebisuzaki+tasjima2019} have proposed a model of acceleration of charged particles to very high energies, including energies above $10^{20}~\mathrm{eV}$ for protons and nuclei, and $10^{12-15}~\mathrm{eV}$ for electrons, by electro-magnetic wave-particle interaction. If episodic eruptive accretion generates Alfv\'enic wave pulses along the magnetic field in the jets, such Alfv\'enic wave pulses act as a driver of the collective accelerating ponderomotive force whose direction is parallel to the motion of the particles. This ponderomotive force drives the wakes. Because the wakes propagate at the same speed as the particles, the so-called wakefield acceleration has a robust built-in coherence in the acceleration system itself. In other words, the accelerating particles are surfing along with the wakes without too much energy loss due to synchrotron emission during the process. This naturally reinforces the blazar mechanism for the emission of gamma-ray photons. The wakefield acceleration therefore has an advantage over the diffusive shock acceleration \citep{Drury_1983}. \begin{figure} \epsscale{0.85} \plotone{f1.eps} \caption{Schematic illustration of a neutrino driven accretion flow (NDAF) and a magnetically driven jet. The jet is driven by a self-collimated magnetic field structure emanating from a magnetized accretion disk \citep{2004ApJ...605..307K,Kato:2006fu}. Electro-magnetic (EM) wave pulses are generated by magnetohydrodynamic instabilities in the magnetized disk and propagate along the jet in the presence of magnetic fields in the jet plasma \citep{1973bppp.book.....I,1981ASSL...82.....A,1997plas.conf.....T}, even though the frequency of the EM wave pulses $\omega$ is smaller than the plasma frequency $\omega_\mathrm{p}$ in the vicinity of NDAFs. At large distances, where the frequency of the EM wave pulses becomes larger than the plasma frequency ($\omega > \omega_\mathrm{p}$), the energy of the EM wave pulses is transferred to charged particles as a result of the wakefield acceleration \citep{1979PhRvL..43..267T,ebisuzaki+tasjima2019}.} \label{fig:schematic} \end{figure} In this study we extend the model of the accretion disk presented by \citet{ebisuzaki+tasjima2019} \citep[see also][]{Ebisuzaki:2014es,2018MNRAS.479.2534M} to an NDAF. A basic concept of our model is shown as a schematic illustration in Figure~\ref{fig:schematic}.
We estimate the energy flux of both the electro-magnetic wave pulses and the neutrino emission from the NDAFs. In Section \ref{sec:method}, we describe in more detail how we derive the structure of NDAFs for the reader's convenience, although the basic equations used here are essentially the same as in the previous study. We present our results in Section \ref{sec:results}. In Section \ref{sec:summary_and_discussion}, we summarize the observational signatures of the wakefield acceleration in relation to GRBs with some discussion, and we conclude in Section \ref{sec:conclusion}. \section{Method} \label{sec:method} \subsection{Neutrino driven accretion flow disks}\label{sec:method:disk_model} We assume that an axisymmetric steady-state accretion disk is in Keplerian rotation, with the radial ($\varpi$-) distribution of the orbital angular velocity given by $\Omega_\mathrm{K}(\varpi)\equiv \sqrt{GM/\varpi^{3}}$ under the Newtonian gravitational potential, where $M$ is the mass of a central BH and $G$ is the gravitational constant. Under hydrostatic equilibrium in the vertical ($z$-) direction, the vertical structure is a Gaussian profile with the pressure scale height $H(\varpi)$, where $\varpi$ is the distance from the gravity center on the mid-plane $z=0$. The basic equations are therefore formulated by using vertically integrated hydrodynamic variables of the disk, such as the column density $\Sigma(\varpi)\equiv\int_{-\infty}^{\infty}\rho(\varpi,z) dz$, where $\rho(\varpi,z)$ is the density distribution in the $\varpi$-$z$ plane. In the one-zone approximation of the vertical structure of the disk, the radial density distribution at $z=0$ is expressed as $\rho_\mathrm{0}(\varpi) = \Sigma(\varpi)/2H(\varpi)$. For a constant mass accretion rate $\dot{M}$ through the disk in the steady-state solution, the mass conservation equation in the radial direction of the disk yields: \begin{equation} \dot{M}=-2\pi \varpi\Sigma(\varpi) v_\varpi(\varpi)=\mathrm{const.}, \label{conservation_of_mass} \end{equation} where $v_\varpi$ is the radial velocity of the gas in the disk, which is negative for inflow. Here we consider extremely high mass accretion rates, $\dot{M} \geq 0.1~\mathrm{M}_\odot\,\mathrm{s}^{-1}$. From angular momentum conservation in the steady-state disk, we obtain a formula for the angular momentum transport rate: \begin{equation} \dot{M} \varpi^{2} \Omega_\mathrm{K}(\varpi)=2\pi \varpi^{2}{\cal S}_{\varpi\varphi} + \mathrm{const.}, \label{conservation_of_angular_momentum} \end{equation} where ${\cal S}_{\varpi\varphi} = \alpha\Sigma(\varpi) {\cal C}_\mathrm{s}(\varpi)^{2}$ is the viscous stress given in the $\alpha$-disk prescription \citep{shakura+sunyaev:1973AA}, where ${\cal C}_\mathrm{s}(\varpi)$ is the sound velocity. We set a constant viscosity parameter of $\alpha = 0.1$. Under the one-zone approximation in the isothermal condition, the hydrostatic balance in the vertical direction is reduced to $H(\varpi)={\cal C}_\mathrm{s}(\varpi)/\Omega_\mathrm{K}(\varpi)$. In the case of a torque-free boundary condition ${\cal S}_{\varpi\varphi}=0$ at the inner edge of the disk $\varpi=\varpi_\mathrm{in}$, Equation\,\ref{conservation_of_angular_momentum} becomes \begin{equation} \dot{M} \varpi^{2}_\mathrm{in} \Omega_\mathrm{K} (\varpi_\mathrm{in}) = \mathrm{const.} \equiv {\cal L}_\mathrm{in} \label{conservation_of_angular_momentum_at_rin} \end{equation} where ${\cal L}_\mathrm{in}$ is the angular momentum gain of the central BH.
However, since we are not interested in the spin evolution of the black hole, we simply set the angular momentum gain to zero, ${\cal L}_\mathrm{in} = 0$. From Equation \ref{conservation_of_angular_momentum}, we have \begin{equation} \Sigma(\varpi) = \frac{\dot{M}\Omega_\mathrm{K}(\varpi)}{2\pi\alpha{\cal C}_\mathrm{s}^{2}(\varpi)}. \label{conservation_of_angular_momentum2} \end{equation} In the standard accretion disk model \citep{shakura+sunyaev:1973AA}, the viscous heating rate $Q_\mathrm{vis}$ is determined as \begin{equation} Q_\mathrm{vis}(\varpi) = \frac{3\dot{M}}{4\pi}\Omega_\mathrm{K}^{2}(\varpi). \end{equation} For a steady-state accretion flow, the viscous heating and neutrino cooling must be balanced at every radius, $Q_\mathrm{vis}(\varpi)=Q_\nu(\varpi)$, where $Q_\nu(\varpi)$ is the neutrino energy cooling rate. Since the neutrinos can escape from both the upper and lower sides of the disk, the emergent neutrino energy flux from each surface is written as: \begin{equation} {\cal F}_{\nu}(\varpi) = Q_{\nu}(\varpi) / 2 = \frac{3\dot{M}}{8\pi}\Omega_\mathrm{K}^{2}(\varpi). \label{eqn:radiation_flux} \end{equation} The total energy density on the mid-plane for the surface density $\Sigma(\varpi)$ is determined by the relation \citep{shakura+sunyaev:1973AA}, \begin{equation} \epsilon_\mathrm{0}(\varpi)=\frac{3}{4}\frac{{\cal F}_{\nu}(\varpi)}{c}\bar{\kappa}_{\nu}(\varpi)\Sigma(\varpi) = \frac{9\dot{M}}{32\pi c}\bar{\kappa}_{\nu}(\varpi)\Sigma(\varpi)\Omega_\mathrm{K}^{2}(\varpi) \end{equation} where $c$ is the speed of light and $\bar{\kappa}_{\nu}(\varpi)$ is the Rosseland mean opacity for neutrinos. In the neutrino driven accretion flow, the sound speed is expressed as \begin{equation} {\cal C}_\mathrm{s}(\varpi) = \sqrt{\frac{\epsilon_\mathrm{0}(\varpi)}{3\rho_\mathrm{0}(\varpi)}} \label{eqn:sound_speed} \end{equation} where $\rho_\mathrm{0}(\varpi)$ is the gas density on the mid-plane in the disk. Substituting Equation \ref{eqn:sound_speed} into the relation of the hydrostatic balance in the vertical direction, we find that the pressure scale height becomes \begin{equation} H = \frac{3\dot{M}}{16\pi c}\bar{\kappa}_\nu(\varpi). \label{eqn:scaleheight} \end{equation} Using the above equations, the properties of a neutrino driven accretion flow are summarized as follows: \begin{equation} \rho_\mathrm{0}(\varpi) = \frac{1024\pi^{2}c^{3}}{27\alpha\dot{M}^{2}}\bar{\kappa}^{-3}_\nu(\varpi)\Omega^{-1}_\mathrm{K}(\varpi). \label{eqn:ro0} \end{equation} \begin{equation} \epsilon_\mathrm{0}(\varpi) = \frac{4c}{\alpha}\bar{\kappa}^{-1}_{\nu}(\varpi)\Omega_\mathrm{K}(\varpi). \label{eqn:epsilon0} \end{equation} The pressure of the disk $p_\mathrm{0}(\varpi) = \rho_\mathrm{0} {\cal C}_\mathrm{s}^{2}(\varpi)$ becomes \begin{equation} p_\mathrm{0}(\varpi) = \frac{4c}{3\alpha}\bar{\kappa}^{-1}_{\nu}(\varpi)\Omega_\mathrm{K}(\varpi). \label{eqn:p0} \end{equation} The magnetic field strength $B_\mathrm{0}(\varpi)$ on the mid-plane is determined by using the ratio of the gas pressure to the magnetic pressure, $\beta\equiv p_\mathrm{0}(\varpi)/p_\mathrm{0,mag}(\varpi)$, where $p_\mathrm{0,mag}(\varpi) = B_\mathrm{0}^{2}(\varpi)/8\pi$, \begin{equation} B_\mathrm{0}(\varpi) = \sqrt{\frac{8\pi p_\mathrm{0}(\varpi)}{\beta}} = \sqrt{\frac{32\pi c}{3\alpha\beta}}\bar{\kappa}^{-1/2}_{\nu}(\varpi)\Omega^{1/2}_\mathrm{K}(\varpi) \label{eqn:B0} \end{equation} where we assume $\beta = 10$ in this study. Note that $B_\mathrm{0}(\varpi)$ is dominated by the toroidal magnetic field inside the disk.
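For the reader's convenience, we record the intermediate step behind Equation \ref{eqn:ro0} (a one-line combination of the relations above; no new physics is introduced). Using $\rho_\mathrm{0}=\Sigma/2H$, ${\cal C}_\mathrm{s}=H\Omega_\mathrm{K}$, and Equation \ref{conservation_of_angular_momentum2}, we have \begin{equation} \rho_\mathrm{0}(\varpi)=\frac{\Sigma(\varpi)}{2H(\varpi)}=\frac{\dot{M}}{4\pi\alpha\,\Omega_\mathrm{K}(\varpi)H^{3}(\varpi)}, \end{equation} into which Equation \ref{eqn:scaleheight} is substituted to yield Equation \ref{eqn:ro0}.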
For an optically thick neutrino driven accretion flow, the relation between the total energy density and temperature is approximated as $\epsilon_\mathrm{0}(\varpi) = (11/4) a T_\mathrm{0}^{4}(\varpi) + (7/8) a T_\mathrm{0}^{4}(\varpi) = (29/8) a T_\mathrm{0}^{4}(\varpi)$, where $a$ is the radiation constant \citep{2002ApJ...579..706D}. Therefore, \begin{equation} T_\mathrm{0}(\varpi) = \left(\frac{32c}{29\alpha a}\bar{\kappa}^{-1}_\nu(\varpi)\Omega_\mathrm{K}(\varpi)\right)^{1/4}. \label{eqn:T} \end{equation} where the Rosseland mean opacity $\bar{\kappa}_{\nu}(\varpi)$ is determined by the neutrino-nucleon scattering process, which is given by \begin{equation} \bar{\kappa}_{\nu}(\varpi) = \kappa_{\nu0}\left(\frac{k_\mathrm{B}T_\mathrm{0}(\varpi)}{m_\mathrm{e}c^{2}}\right)^{2} \label{eqn:opacity} \end{equation} where $\kappa_{\nu0} = 5.22\times 10^{-20}\,\mathrm{cm^{2} g^{-1}}$ for $k_\mathrm{B}T_\mathrm{0}(\varpi)\gg m_\mathrm{e} c^{2}$, $k_\mathrm{B}$ is the Boltzmann constant, and $m_\mathrm{e}$ is the electron mass \citep{1964PhRv..136.1164B, 2002ApJ...579..706D}. Here the opacity depends on the temperature, in contrast to the accretion disk model of \citet{ebisuzaki+tasjima2019}, in which the opacity is dominated by the Thomson scattering process. Substituting Equation \ref{eqn:opacity} into Equation \ref{eqn:T}, the temperature is given by \begin{align} T_\mathrm{0}(\varpi) & = \left(\frac{32 m^{2}_\mathrm{e}c^{5}}{29\alpha a k_\mathrm{B}^{2}\kappa_\mathrm{\nu0}}\right)^{1/6}\Omega^{1/6}_\mathrm{K}(\varpi)\notag \\ & = 3.58\times 10^{11} \left(\frac{\alpha}{0.1}\right)^{-1/6} \left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-1/6} \left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-1/4}\,\hbox{[K]}. \label{eqn:Tdisk} \end{align} where $r_\mathrm{s}\equiv 2GM/c^{2}$ is the Schwarzschild radius. The opacity $\bar{\kappa}_{\nu}(\varpi)$ is therefore expressed as \begin{align} \bar{\kappa}_{\nu}(\varpi) & = \left(\frac{32\kappa_{\nu0}^{2} k_\mathrm{B}^{4}}{29\alpha a m^{4}_\mathrm{e} c^{7}}\right)^{1/3}\Omega^{1/3}_\mathrm{K}(\varpi)\notag\\ & = 1.91\times 10^{-16} \left(\frac{\alpha}{0.1}\right)^{-1/3} \left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-1/3} \left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-1/2}\,\hbox{[$\mathrm{cm}^{2}\,\mathrm{g}^{-1}$]}. \label{eqn:kappa} \end{align} Substituting Equation \ref{eqn:kappa} into Equations \ref{eqn:scaleheight},\ref{eqn:ro0},\ref{eqn:epsilon0}, and \ref{eqn:B0}, we have \begin{align} H & = \left(\frac{3\dot{M}}{16\pi c}\right)\left(\frac{32\kappa^{2}_{\nu0}k^{4}_\mathrm{B}}{29\alpha a m^{4}_\mathrm{e}c^{7}}\right)^{1/3}\Omega^{1/3}_\mathrm{K}(\varpi)\notag\\ & = 7.55\times 10^{5} \left(\frac{\alpha}{0.1}\right)^{-1/3}\left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right)\left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-1/3}\left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-1/2}\,\hbox{[$\mathrm{cm}$]}. \label{eqn:H} \end{align} \begin{align} \rho_\mathrm{0}(\varpi) & = \left(\frac{928\pi^{2} a m^{4}_\mathrm{e} c^{10}}{27\dot{M}^{2}\kappa^{2}_\mathrm{\nu0}k^{4}_\mathrm{B}}\right)\Omega^{-2}_\mathrm{K}(\varpi)\notag\\ & = 5.13\times 10^{10}\left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right)^{-2}\left(\frac{M}{\mathrm{M}_{\odot}}\right)^{2}\left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{3}\,\hbox{[$\mathrm{g}\,\mathrm{cm}^{-3}$]}.
\label{eqn:ro} \end{align} \begin{align} \epsilon_\mathrm{0}(\varpi) & = \left(\frac{58 a m^{4}_\mathrm{e} c^{10}}{\alpha^{2}\kappa^{2}_{\nu0}k^{4}_\mathrm{B}}\right)^{1/3}\Omega^{2/3}_\mathrm{K}(\varpi)\notag\\ & = 4.52\times 10^{32}\left(\frac{\alpha}{0.1}\right)^{-2/3}\left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-2/3}\left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-1}\,\hbox{[$\mathrm{erg}\,\mathrm{cm}^{-3}$]}. \end{align} \begin{align} B_\mathrm{0}(\varpi) & = \left(\frac{8\pi}{3\beta}\right)^{1/2}\left(\frac{58 a m^{4}_\mathrm{e} c^{10}}{\alpha^{2}\kappa^{2}_{\nu0}k^{4}_\mathrm{B}}\right)^{1/6}\Omega^{1/3}_\mathrm{K}(\varpi)\notag\\ & = 1.95\times 10^{16}\left(\frac{\beta}{10}\right)^{-1/2}\left(\frac{\alpha}{0.1}\right)^{-1/3}\left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-1/3}\left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-1/2}\,\hbox{[$\mathrm{G}$]}. \label{eqn:B} \end{align} where $\dot{\mathrm{M}}_\odot\equiv \mathrm{M}_\odot / \mathrm{s}$. Finally, the radial velocity $v_\varpi(\varpi)$ becomes \begin{align} v_{\varpi}(\varpi) & = -\frac{\dot{M}}{2\pi \varpi\Sigma(\varpi)}\notag\\ & = -\left(\frac{9\dot{M}^{2}\alpha}{256\pi^{2} c^{2}\varpi}\right)\left(\frac{32\kappa^{2}_{\nu0}k^{4}_\mathrm{B}}{29 \alpha a m^{4}_\mathrm{e} c^{7}}\right)^{2/3}\Omega^{5/3}_\mathrm{K}(\varpi)\notag\\ & = -1.38\times 10^{10} \left(\frac{\alpha}{0.1}\right)^{1/3} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right)^{2} \left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-8/3}\left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-7/2}\,\hbox{[$\mathrm{cm}\,s^{-1}$]}. \end{align} The radial profiles of the NDAF disk properties are shown in Figure\,\ref{fig:diskmodels}. \begin{figure} \epsscale{0.85} \plotone{f2.eps} \caption{Analytical solutions of an accretion disk under the condition of local radiative equilibrium by neutrino emission with mass accretion rates of $\dot{M}/\dot{\mathrm{M}}_{\odot}=0.1$, $1.0$, and $4.0$. Temperature ($T_\mathrm{0}$) and magnetic field ($B_\mathrm{0}$) are independent of $\dot{M}$. $B_\mathrm{0}$ reaches $10^{16}\,\mathrm{G}$ at the inner edge of the disk. The properties of the accretion disk are almost consistent with \citet{2013ApJ...766...31K} for $\dot{M}/\dot{\mathrm{M}}_{\odot}=1$.} \label{fig:diskmodels} \end{figure} \subsection{Burst emissions of the electro-magnetic pulses} We consider that the wavelength of the emitted electro-magnetic (EM) pulses is of the order of the size of the density fluctuations generated in the disk. When the generation of the density fluctuations is regulated by the distance between two Alfv\'en singularities for the unstable non-axisymmetric mode of the magneto-rotational instability \citep[MRI;][]{1995ApJ...445..767M}, the wavelength of the EM wave pulses becomes \begin{equation} \lambda(\varpi) = \frac{\cal C_\mathrm{A}(\varpi)}{\cal C_\mathrm{s}(\varpi)}\frac{\Omega_\mathrm{K}(\varpi)}{\cal R(\varpi)}H(\varpi) \label{eqn:lambda_def} \end{equation} where ${\cal C_\mathrm{A}(\varpi)}=B_\mathrm{0}(\varpi)/\sqrt{4\pi\rho_\mathrm{0}(\varpi)}$ is the Alfv\'en speed on the mid-plane in the disk and the shear rate ${\cal R}(\varpi) = -(\varpi/2)\left[d\Omega_\mathrm{K}(\varpi)/d\varpi\right]$ becomes \begin{equation} {\cal R(\varpi)} = \frac{3}{4}\Omega_\mathrm{K}(\varpi).
\label{eqn:A} \end{equation} By using Equations \ref{eqn:ro} and \ref{eqn:B}, the Alfv\'en speed ${\cal C_\mathrm{A}(\varpi)}$ becomes \begin{align} {\cal C_\mathrm{A}(\varpi)} & = 2.42\times 10^{10} \left(\frac{\beta}{10}\right)^{-1/2}\left(\frac{\alpha}{0.1}\right)^{-1/3} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right) \left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-4/3}\left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-2}\,\hbox{[$\mathrm{cm}\,\mathrm{s}^{-1}$]}. \label{eqn:ca} \end{align} Using ${\cal C}_\mathrm{s}(\varpi) = H(\varpi)\Omega_\mathrm{K}(\varpi)$ and substituting Equations \ref{eqn:H}, \ref{eqn:A}, and \ref{eqn:ca} into Equation \ref{eqn:lambda_def}, \begin{align} \lambda(\varpi) & = \frac{4}{3} {\cal C}_\mathrm{A}(\varpi) \Omega^{-1}_\mathrm{K}(\varpi)\notag\\ & = 4.50\times 10^{5} \left(\frac{\beta}{10}\right)^{-1/2}\left(\frac{\alpha}{0.1}\right)^{-1/3} \left(\frac{\dot{\mathrm{M}}}{\dot{\mathrm{M}}_{\odot}}\right) \left(\frac{M}{ \mathrm{M}_{\odot}}\right)^{-1/3}\left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-1/2}\,\hbox{[$\mathrm{cm}$]}. \label{eqn:lambda} \end{align} The timescale of emergence of an EM wave pulse on the disk $\tau_\mathrm{wave}$ is given by the half-thickness of the disk divided by the local Alfv\'en speed. By using Equations \ref{eqn:ca} and \ref{eqn:H}, \begin{align} \tau_\mathrm{wave} & = \frac{H(\varpi)}{\cal C_\mathrm{A}(\varpi)} = \sqrt{\frac{\beta}{2}}\,\Omega^{-1}_\mathrm{K}(\varpi)\notag\\ & = 3.11\times 10^{-5} \left(\frac{\beta}{10}\right)^{1/2} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right)^{-1} \left(\frac{M}{\mathrm{M}_{\odot}}\right) \left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{3/2}\,\hbox{[$\mathrm{s}$]}. \label{eqn:tau_wave} \end{align} The power of an EM wave pulse ${\cal P}_\mathrm{wave}(\varpi)$ propagating along the jet is estimated by the magnetic energy emerging from the disk in the vertical direction divided by $\tau_\mathrm{wave}$, and therefore \begin{equation} {\cal P}_\mathrm{wave}(\varpi)=\frac{B^{2}_\mathrm{0}(\varpi)}{8\pi} \frac{H(\varpi) d{\cal A}(\varpi)}{\tau_\mathrm{wave}} \label{eqn:Pwave} \end{equation} where $d{\cal A}(\varpi)$ is the unit surface area on the disk. Substituting Equations~\ref{eqn:H}, \ref{eqn:B}, and \ref{eqn:ca} into ${\cal P}_\mathrm{wave}(\varpi)$ and dividing by $d{\cal A}(\varpi)$, the energy flux of EM wave pulses ${\cal F}_\mathrm{wave}(\varpi)$ is expressed as \begin{align} {\cal F}_\mathrm{wave}(\varpi) & = \frac{{\cal P}_\mathrm{wave}(\varpi)}{d{\cal A}(\varpi)} = \frac{{\cal C_\mathrm{A}(\varpi)} B^{2}_\mathrm{0}(\varpi)}{8\pi} = \frac{\dot{M}}{\pi\alpha\sqrt{8\beta^{3}}}\Omega^{2}_\mathrm{K}(\varpi) \notag\\ & = 3.65\times 10^{41} \left(\frac{\beta}{10}\right)^{-3/2} \left(\frac{\alpha}{0.1}\right)^{-1} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right) \left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-2} \left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-3}\,\hbox{[$\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$]}.
\label{eqn:Fwave} \end{align} The total wave luminosity from the entire disk is calculated by integrating ${\cal F}_\mathrm{wave} d{\cal A}(\varpi)$ over the radius: \begin{align} L_\mathrm{wave} & = \int_{\varpi_\mathrm{in}}^{\infty} 2 {\cal F}_\mathrm{wave}(\varpi) 2\pi \varpi d\varpi = \frac{\dot{M}}{\alpha}\left(\frac{2}{\beta^{3}}\right)^{1/2}\left(\frac{GM}{\varpi_\mathrm{in}}\right) = \left(\frac{1}{18\alpha^{2}\beta^{3}}\right)^{1/2}\dot{M}c^{2}\notag\\ & = 1.33\times 10^{53} \left(\frac{\beta}{10}\right)^{-3/2} \left(\frac{\alpha}{0.1}\right)^{-1} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right)\,\hbox{[$\mathrm{erg}\,\mathrm{s}^{-1}$]}. \label{eqn:Lw} \end{align} Here we assume that the inner edge of the disk is truncated at the innermost stable circular orbit (ISCO)\footnote{Although we use the Newtonian gravitational potential, it is convenient to set it as the inner boundary of the disk around a non-rotating BH.}, thus $\varpi_\mathrm{in} = 3 \mathrm{r}_\mathrm{s}$. Note that the factor of $2$ accounts for the upper and lower sides of the disk. Likewise, from Equation \ref{eqn:radiation_flux}, the neutrino energy flux ${\cal F}_\nu$ becomes \begin{align} {\cal F}_{\nu}(\varpi) & = 1.22\times 10^{42} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right) \left(\frac{M}{\mathrm{M}_{\odot}}\right) \left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-3}\,\hbox{[$\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$]}, \label{eqn:Frad} \end{align} and the neutrino luminosity $L_{\nu}$ is calculated by integrating ${\cal F}_\nu$ over the entire disk: \begin{align} L_\nu & = \int_{\varpi_\mathrm{in}}^{\infty} 2 {\cal F}_\nu(\varpi) 2\pi \varpi d\varpi = \frac{3\dot{M}}{2}\frac{GM}{\varpi_\mathrm{in}} = \frac{1}{4}\dot{M}c^{2}\notag\\ & = 4.47\times 10^{53} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right)\,\hbox{[$\mathrm{erg}\,\mathrm{s}^{-1}$]}. \label{eqn:Lnu} \end{align} The ratio of the wave luminosity to the neutrino luminosity $L_\mathrm{wave}/L_\nu$ becomes \begin{equation} \frac{L_\mathrm{wave}}{L_{\nu}} = \left(\frac{8}{9\alpha^{2}\beta^{3}}\right)^{1/2}= 0.29 \left(\frac{\alpha}{0.1}\right)^{-1}\left(\frac{\beta}{10}\right)^{-3/2} \end{equation} which is consistent with the value reported as $\mathcal{O}(1)$ in \citet{ebisuzaki+tasjima2019}, suggesting that the wave luminosity can be the primary energy source of GRBs, SNe, and HNe. \subsection{Propagation of the electro-magnetic wave pulses along the jet} The magnetic fields contained within the progenitor are twisted and amplified via the MRI or dynamo action in the NDAF disk. Such a strong toroidal magnetic field component eventually emerges from the surface of the disk via magnetic buoyancy and is converted into a large-scale vertical magnetic field, which is known as a magnetic tower \citep{1996MNRAS.279..389L,2004ApJ...605..307K,2006ApJ...647.1192U}. In addition, such explosive magnetic flux ejections from the magnetized accretion disks are expected to occur repeatedly \citep{1990ApJ...350..295S}. Once large-amplitude Alfv\'enic wave pulses (which turn into EM wave pulses) have been launched from the NDAF, they propagate along the vertical magnetic field confined within a funnel shape in the jet as wave packets at nearly the speed of light (see Figure~\ref{fig:schematic}).
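Before following the pulses along the jet, the closed forms of Equations \ref{eqn:Lw} and \ref{eqn:Lnu} can be cross-checked with a few lines of code. The following Python snippet is an illustrative check (not part of the derivation); it uses only the fiducial parameters quoted above:
\begin{verbatim}
import math

# Fiducial parameters quoted in the text
alpha, beta = 0.1, 10.0          # viscosity parameter and plasma beta
c = 2.998e10                     # speed of light [cm/s]
Msun = 1.989e33                  # solar mass [g]
Mdot = 1.0 * Msun                # accretion rate of 1 Msun/s [g/s]

# Equation (Lw): total wave luminosity from the disk
L_wave = math.sqrt(1.0 / (18.0 * alpha**2 * beta**3)) * Mdot * c**2
# Equation (Lnu): total neutrino luminosity
L_nu = 0.25 * Mdot * c**2
# Their ratio, sqrt(8 / (9 alpha^2 beta^3))
ratio = math.sqrt(8.0 / (9.0 * alpha**2 * beta**3))

print(f"L_wave      = {L_wave:.2e} erg/s")  # ~1.33e53, as quoted
print(f"L_nu        = {L_nu:.2e} erg/s")    # ~4.47e53, as quoted
print(f"L_wave/L_nu = {ratio:.2f}")         # ~0.29, as quoted
\end{verbatim}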
Assuming that EM wave pulses do not interact with the surroundings outside the jet, the amplitude of the electric field $E_\mathrm{0}(\varpi)$ within each propagating pulse is expressed as: \begin{equation} E_\mathrm{0}(\varpi) = \sqrt{\frac{4\pi{\cal F}_\mathrm{wave}(\varpi)}{c}}. \end{equation} From Equation \ref{eqn:Fwave}, it becomes \begin{equation} E_\mathrm{0}(\varpi) = 1.24\times 10^{16}\left(\frac{\beta}{10}\right)^{-3/4} \left(\frac{\alpha}{0.1}\right)^{-1/2} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right)^{1/2} \left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-1} \left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-3/2}\,\hbox{[$\mathrm{dyn\,esu^{-1}}$]}. \label{eqn:Ew} \end{equation} By analogy with an important parameter in intense laser-plasma interactions, we employ the wakefield strength parameter \citep{1988ApPhL..53.2146S}: \begin{equation} a_\mathrm{0}(\varpi) = e A_\mathrm{0}/m_\mathrm{e}c^{2} \end{equation} where $A_\mathrm{0}\equiv c E_\mathrm{0}(\varpi)/\omega$ is the amplitude of the vector potential at the base of the jet and $\omega = 2\pi c/\lambda(\varpi)$ is the angular frequency of the EM wave pulse in the jet. Note that we assume the propagation speed of the EM wave pulse to be the speed of light. This assumption holds in most cases, including jets from neutrino driven accretion flows. The parameter $a_\mathrm{0}(\varpi)$ is basically the normalized vector potential amplitude of the EM wave pulse at the base of the jet. By substituting Equation \ref{eqn:Ew}, \begin{equation} a_\mathrm{0} (\varpi) = 5.19\times 10^{17} \left(\frac{\beta}{10}\right)^{-5/4} \left(\frac{\alpha}{0.1}\right)^{-4/3} \left(\frac{\dot{M}}{\dot{\mathrm{M}}_{\odot}}\right)^{3/2} \left(\frac{M}{\mathrm{M}_{\odot}}\right)^{-4/3} \left(\frac{\varpi}{\mathrm{r}_\mathrm{s}}\right)^{-2}. \label{eqn:azero} \end{equation} The physical parameters within the jet as a function of the distance from the disk $z$ determine how the electro-magnetic (EM) wave pulses propagate through the jet. However, the structure of the jet emanating from the NDAF disk and penetrating through the progenitor ambient medium is not easy to characterize. Therefore, for simplicity we assume that the so-called jet collimation profile is determined by the normalized distance $z/\varpi_\mathrm{0}$ raised to the power $\phi$, and therefore the radius of the jet can be expressed as \begin{equation} R(\varpi_\mathrm{0},z)= \varpi_\mathrm{0} \left[1 + \left(z / \varpi_\mathrm{0}\right)^{\phi}\right] \end{equation} where $\varpi_\mathrm{0}$ is the radius at the base of the jet. Here we choose $\phi=1/2$ and $\phi=1$, namely, a parabolic shape for a collimated jet and a conical shape for an uncollimated wind (with an opening angle of $45$ degrees), respectively. Assuming the magnetic flux within the area ${\cal A}(z) = \pi R^{2}(\varpi_\mathrm{0},z)$ is conserved along the jet, $B_\mathrm{0}(\varpi_\mathrm{in}){\cal A}(0) = B(z) {\cal A}(z)=\mathrm{const.}$, the magnetic field strength along the jet, $B(z)$, can therefore be expressed as \begin{align} B(z) & = B_\mathrm{0}{\cal A}(0)/{\cal A}(z).
\label{eqn:Bz} \end{align} The plasma number density $n_\mathrm{p}(z)$ in the jet is estimated by assuming the ratio between the total kinetic luminosity and the total neutrino luminosity is constant along the jet, \begin{equation} L_\mathrm{kinetic}=n_\mathrm{p}(z)\mu m_\mathrm{p} c^{3}\Gamma^{2}{\cal A}(z) = \xi L_{\nu} \label{eqn:L_kinetic} \end{equation} where the mean molecular weight is $\mu = 2.34$, we set a constant jet bulk Lorentz factor of $\Gamma=400$ \citep[which is slightly higher than the averaged value of][]{2018A&A...609A.112G}, and we choose $\xi=0.1$. From the charge neutrality in the jet, the electron number density $n_\mathrm{e}(z)$ can be expressed as \begin{equation} n_\mathrm{e}(z) = n_\mathrm{p}(z) = \frac{\xi L_{\nu}}{\mu m_\mathrm{p} c^{3}\Gamma^{2}{\cal A}(z)}. \end{equation} Since, from Equation~\ref{eqn:azero}, the wakefield strength parameter in the jet $a(z)$ becomes much greater than unity, $a(z)\gg 1$, trapped electrons in the EM wave pulses become ultra-relativistic and therefore the Lorentz factor of electrons becomes $\gamma_\mathrm{e}(z) \approx a(z)$ \citep{2017NCimR..40...33T} where $\gamma_\mathrm{e}(z) = 1/\sqrt{1- (|\vec{v}_\mathrm{e}| / c)^{2}}$ is the Lorentz factor of trapped electrons with the velocity of electrons $\vec{v}_\mathrm{e}$ in the wake. By assuming the base of the jet is located at $\varpi_\mathrm{0} = \varpi_\mathrm{in}$ and the energy flux of EM wave pulses is conserved along the jet, ${\cal A}(z) a^{2}(z) = {\cal A}(0) a^{2}_\mathrm{0}(\varpi_\mathrm{0}) = \mathrm{const.}$, the wakefield strength parameter can be expressed as \begin{equation} \gamma_\mathrm{e}(z) \approx a(z) = a_\mathrm{0}(\varpi_\mathrm{0}) \sqrt{{\cal A}(0)/{\cal A}(z)}. \end{equation} Since $\vec{v}_\mathrm{e}$ is mostly perpendicular to the bulk velocity of the jet, the plasma frequency is expressed as \begin{equation} \omega_\mathrm{p}(z) = \sqrt{4\pi n_\mathrm{e}(z) e^{2}/m_\mathrm{e}\gamma(z)} \label{eqn:omegap} \end{equation} where $\gamma(z)=\gamma_\mathrm{e}(z)\Gamma$ is the Lorentz factor of the combined velocities in the jet. Note that the cyclotron frequency in the jet is $\omega_\mathrm{c}(z)=\sqrt{e B(z)/m_\mathrm{e}c\gamma(z)}$. Finally, the vertical structure of the jet properties such as the electron number density, the vertical magnetic field strength, the Lorentz factor of electrons, and the EM wave frequency of the jet in comparison with both the plasma frequency and the cyclotron frequency are shown in Figure\,\ref{fig:jet}. \begin{figure} \epsscale{0.85} \plotone{f3.eps} \caption{The vertical structure of the jet properties for $\phi=0.5$ (black) and $\phi=1$ (red). The electron number density, the vertical magnetic field strength, the wakefield strength parameter ($\approx$ the Lorentz factor of electrons), and the angular frequency of electro-magnetic (EM) wave pulses of the jet ($\omega$) in comparison with both the cyclotron frequency ($\omega_{\mathrm{c}}$) and the plasma frequency ($\omega_{\mathrm{p}}$) are shown from top to bottom panels, respectively. The mass of a black hole is $M = 3 \mathrm{M}_\odot$ and the ratio between the total kinetic luminosity of the jet ($L_\mathrm{kinetic}$) and the total neutrino luminosity ($L_\mathrm{\nu}$) is assumed to be $\xi\equiv L_\mathrm{kinetic}/L_\mathrm{\nu}=0.1$.
The region of $\omega_\mathrm{p}/\omega > 1$ is the evanescent region, although the presence of the magnetic field makes some of the wave modes non-evanescent, whereas the region of $\omega_\mathrm{p}/\omega < 1$ is the propagation region for EM wave pulses.} \label{fig:jet} \end{figure} \section{Results} \label{sec:results} \subsection{Neutrino spectrum of the NDAF disks} Assuming the disk is a blackbody source for neutrinos at each radius, we take the neutrino energy flux per unit energy interval per unit solid angle to be \begin{equation} {\cal B}_{\nu}(\varepsilon_{\nu},T_\nu(\varpi)) = \frac{4\varepsilon^{3}_{\nu}/h^{3}c^{2}}{\exp{\left[\left(\varepsilon_{\nu} - \mu_\nu\right)/k_\mathrm{B}T_\nu(\varpi)\right]} + 1}, \end{equation} where $\varepsilon_{\nu}$ is the energy of neutrino, $T_{\nu}(\varpi)$ is the effective temperature of neutrino at each radius, and $\mu_\nu$ is the chemical potential of neutrinos. We derive $T_{\nu}(\varpi)$ by using the relation ${\cal F}_{\nu}(\varpi) = (7/8)a T^{4}_{\nu}(\varpi)$. Assuming that the chemical potential of neutrinos can be ignored, $\mu_{\nu}=0$ \citep{2007ApJ...662.1156K}, we compute the emergent neutrino luminosity $L_\nu(\varepsilon_{\nu}) = \int 4\pi^{2}{\cal B}_\nu(\varepsilon_{\nu},T_\nu(\varpi))\,\varpi \,d\varpi$ as a function of the neutrino energy and plot the spectra as shown in Figure \ref{fig:neutrino_spectrum}. The peak value of the neutrino energy is $18.3~\mathrm{MeV}$, which originates from the inner region of NDAF disks ($\varpi\lsim 10~r_\mathrm{s}$). This is higher than the peak neutrino energy expected from a standard supernova with a hot neutron star \citep{1987ApJ...318..288M}. Since the neutrino cross-section in water increases with neutrino energy \citep[e.g.,][]{2003PhLB..564...42S}, water Cherenkov detectors, like Super-Kamiokande (SK), could have a better chance of detecting neutrinos from NDAFs. \begin{figure} \epsscale{0.85} \plotone{f4.eps} \caption{Neutrino spectrum $L_{\nu}(\varepsilon_\nu)$ of an NDAF with the mass accretion rate of $\dot{M}=1.0\,\mathrm{M}_\odot/\mathrm{s}$. The peak value of neutrino energy $\epsilon_{\nu\mathrm{, peak}}$ is $18.3~\mathrm{MeV}$, which originates from the inner region of NDAF disks ($\varpi\lsim 10~r_\mathrm{s}$), and this is higher than the previously estimated values.} \label{fig:neutrino_spectrum} \end{figure} \subsection{Explosion energy by waves and accreted mass onto a black hole} We have estimated the explosion energy released as EM wave pulses $E_\mathrm{exp}$ by integrating Equation \ref{eqn:Lw} over time during the growth of a black hole \begin{equation} E_\mathrm{exp} = \int_{0}^{\tau} L_\mathrm{wave} dt = \frac{c^{2}}{6\alpha}\left(\frac{2}{\beta^{3}}\right)^{1/2}\int_{0}^{\tau}\dot{M} dt = \frac{c^{2}}{6\alpha}\left(\frac{2}{\beta^{3}}\right)^{1/2} M_\mathrm{acc} \end{equation} where $M_\mathrm{acc}$ is the accreted mass. Figure~\ref{fig:wave_power} shows the relation between the explosion energy by EM wave pulses $E_\mathrm{exp}$ and the accumulated mass $M_\mathrm{acc}$. It is important to note that the explosion energy reaches as high as $10^{53}~\mathrm{erg}$ if the accreted mass becomes $1\,\mathrm{M}_\odot$, and therefore our model is relevant for HNe. \begin{figure} \epsscale{0.85} \plotone{f5.eps} \caption{Explosion energy emitted by EM wave pulses from NDAF and the accreted mass onto a black hole ($\alpha = 0.1$ and $\beta = 10$). Two shaded bands represent typical explosion energies for supernovae (SNe) and hypernovae (HNe) \citep{2005hedl.book...81N,2013ARA&A..51..457N}.
If the accreted mass becomes one solar mass, the explosion energy could exceed $10^{53}\,\mathrm{erg}$, which is strong enough to reverse the collapsing cores of massive stars into an explosion; our model is therefore relevant for HNe, as indicated by a red arrow.} \label{fig:wave_power} \end{figure} \subsection{Acceleration of high energy particles in the jet} The ponderomotive force exerted on electrons by EM wave pulses generates a longitudinal polarization of the electron distribution, which creates wakefields as illustrated in Figure\,\ref{fig:schematic}. Charged particles are accelerated by the wakefield $E_\mathrm{TD} = m_\mathrm{e} \omega_\mathrm{p}(z) c / e$ \citep[the so-called Tajima-Dawson field:][]{1979PhRvL..43..267T}, and therefore the wakefield force $F_\mathrm{w}$ exerted on a charged particle in the non-relativistic regime is expressed as $F_\mathrm{w} = {\cal Z} e E_\mathrm{TD} = {\cal Z} m_\mathrm{e} \omega_\mathrm{p}(z) c$ where ${\cal Z}$ is the charge number. In the relativistic regime, namely under the intense wakefield $a(z)\approx\gamma(z)\gg 1$ and high bulk Lorentz factor $\Gamma \ge 1$, the wakefield force $F_\mathrm{w}$ becomes \citep{2017NCimR..40...33T} \begin{equation} F_\mathrm{w} = {\cal Z}\Gamma m_\mathrm{e} a(z) \omega_\mathrm{p}(z) c. \label{eqn:F_w} \end{equation} Note that the accelerating force is the same for electrons (positrons) and protons. The maximum energy ${\cal W}_\mathrm{max}$ gained by the accelerated charged particles is determined by integrating the work done by $F_\mathrm{w}$ over the acceleration distance $\Delta z_\mathrm{w}$ \begin{equation} {\cal W}_\mathrm{max} = \int_{z_\mathrm{w}}^{z_\mathrm{w}+\Delta z_\mathrm{w}} F_\mathrm{w} dz = {\cal Z} \Gamma m_\mathrm{e} c \int_{z_\mathrm{w}}^{z_\mathrm{w}+\Delta z_\mathrm{w}} \omega_\mathrm{p}(z) a(z) dz. \label{eqn:wmax} \end{equation} Here we assume that $z_\mathrm{w}$ is the acceleration site at which the plasma changes from the overdense condition ($\omega < \omega_\mathrm{p}$) to the underdense condition ($\omega > \omega_\mathrm{p}$). As shown in Figure \ref{fig:jet}, the acceleration site $z_\mathrm{w}$ depends on the vertical structure of the jet, more precisely the density stratification in the jet. The energy gained by an electron (or a positron) is most likely limited by radiation losses such as synchrotron and Bremsstrahlung emission, because both the magnetic field and the charged particles in the jet bend their trajectories. Thus, after the onset of the wakefield acceleration at $z \gsim z_\mathrm{w}$, the synchrotron emissions by an accelerated electron (positron) result in observational signatures of gamma-ray emission. On the other hand, the energy gained by a proton is determined by the coherent accelerating length available over the jet environment. In laser wakefield accelerations for $a_\mathrm{0} \gg 1$, the maximum coherent accelerating length is determined by half of the pump depletion length $l_\mathrm{pd} = 2\sqrt{2} c \left(\omega^{2} / \omega^{3}_\mathrm{p}\right) a_\mathrm{0}$ where $\omega$ and $\omega_\mathrm{p}$ are the frequency of the laser beam and the plasma frequency, respectively \citep{Esarey:2009ks}. This is an ideal condition for the maximum acceleration length, and therefore we remind the readers that the effective acceleration length could be shorter in reality.
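The two key quantities just introduced, the relativistic wakefield force of Equation \ref{eqn:F_w} and the pump depletion length $l_\mathrm{pd}$, are simple to evaluate. The following Python sketch is purely illustrative; the numerical values fed to the functions are arbitrary placeholders rather than the jet-model values:
\begin{verbatim}
import math

m_e = 9.109e-28      # electron mass [g]
c = 2.998e10         # speed of light [cm/s]

def wakefield_force(Z, Gamma, a, omega_p):
    """Relativistic wakefield force, Eq. (F_w):
    F_w = Z * Gamma * m_e * a * omega_p * c."""
    return Z * Gamma * m_e * a * omega_p * c

def pump_depletion_length(omega, omega_p, a0):
    """Pump depletion length, l_pd = 2*sqrt(2)*c*(omega^2/omega_p^3)*a0."""
    return 2.0 * math.sqrt(2.0) * c * omega**2 / omega_p**3 * a0

# Arbitrary placeholder numbers, for illustration only:
Z, Gamma, a, omega_p, omega = 1, 400.0, 1.0e6, 1.0e6, 1.0e7
F = wakefield_force(Z, Gamma, a, omega_p)
dz_w = 0.5 * pump_depletion_length(omega, omega_p, a)  # acceleration distance
print(f"F_w ~ {F:.2e} dyn, Delta z_w ~ {dz_w:.2e} cm")
\end{verbatim}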
In our study, we consider the maximum coherent accelerating length as the acceleration distance in the jets, namely $\Delta z_\mathrm{w} = l_\mathrm{pd}/2 = \sqrt{2} c \left[\omega_\mathrm{0}^{2} / \omega^{3}_\mathrm{p}(z_\mathrm{w})\right] a(z_\mathrm{w})$ where $\omega_\mathrm{0} = 2\pi c/\lambda(\varpi_\mathrm{0})$ is the frequency of EM wave pulses at the base of the jet. Since the integral of $\omega_\mathrm{p}(z) a(z)$ in Equation~\ref{eqn:wmax} is complicated, we shall reduce $\omega_\mathrm{p}(z)$ and $a(z)$ to the following forms: for $z\gg \varpi_\mathrm{0}$, $R(\varpi_\mathrm{0},z) \approx \varpi_\mathrm{0}\left(z/\varpi_\mathrm{0}\right)^{\phi}$ leads to $\omega_\mathrm{p}(z) \approx \omega_\mathrm{p}(0) \varpi_\mathrm{0}^{\phi/2} z^{-\phi/2}$ and $a(z) \approx a_\mathrm{0}(\varpi_\mathrm{0}) \varpi_\mathrm{0}^{\phi} z^{-\phi}$. After some algebra, Equation\,\ref{eqn:wmax} becomes \begin{align} {\cal W}_\mathrm{max} & \approx {\cal Z} \Gamma m_\mathrm{e} \omega_\mathrm{p00} a_\mathrm{00} c \dot{M}^{3/4} \Omega^{2/3}_\mathrm{K}(\varpi_\mathrm{0}) \varpi_\mathrm{0}^{3\phi/2 - 1} \int_{z_\mathrm{w}}^{z_\mathrm{w}+\Delta z_\mathrm{w}} z^{-3\phi/2} dz,\notag\\ & = {\cal W}_\mathrm{0} \phi_\mathrm{0}^{-1} \left[\left(z_\mathrm{w} + \Delta z_\mathrm{w}\right)^{\phi_\mathrm{0}} - z_\mathrm{w}^{\phi_\mathrm{0}}\right] \dot{M}^{3/4} \Omega^{2/3}_\mathrm{K}(\varpi_\mathrm{0}) \varpi_\mathrm{0}^{-\phi_\mathrm{0}} \label{eqn:wmax3} \end{align} where $\phi_\mathrm{0}=1-\frac{3}{2}\phi$, ${\cal W}_\mathrm{0} = {\cal Z}\Gamma m_\mathrm{e} \omega_\mathrm{p00} a_\mathrm{00} c$, and the acceleration site and the acceleration distance are \begin{equation} z_\mathrm{w} =\left[\left(\frac{\omega_\mathrm{p00}}{\omega_\mathrm{00}}\right)\left(\frac{6}{c^{2}}\right)^{\frac{\phi-1}{2}}\dot{M}^{1/4}\left(GM\right)^{\frac{3\phi-4}{6}}\right]^{2/\phi}, \label{eqn:zw} \end{equation} \begin{equation} \Delta z_\mathrm{w} = \left(\frac{1}{3}\right)^{3/2} \left(\frac{c^{4}}{4}\right) \left(\frac{\omega_\mathrm{00}}{\omega^{2}_\mathrm{p00}} a_\mathrm{00}\right)\dot{M}^{2}\left(GM\right)^{-1/3}, \label{eqn:dzw} \end{equation} and the constants are \begin{equation} a_\mathrm{00} = \frac{e \kappa^{2/3}_{\nu\mathrm{0}} k^{4/3}_\mathrm{B}}{2^{7/12} 29^{1/3} \pi^{2} m^{7/3}_\mathrm{e} c^{35/6} \alpha^{5/6} \beta^{5/4} a^{1/3}}, \end{equation} \begin{equation} \omega_\mathrm{p00} = \frac{2^{31/24} 29^{1/6} \pi \sqrt{\xi L_\nu e} \alpha^{5/12} \beta^{5/8} c^{17/12} a^{1/6} m^{2/3}_\mathrm{e}}{\sqrt{\mu m_\mathrm{p} \Gamma^{3}} \kappa^{1/3}_{\nu\mathrm{0}} k^{2/3}_\mathrm{B}}, \end{equation} \begin{equation} \omega_\mathrm{00} = \frac{2^{5/6} 29^{1/3} \pi^{2} \alpha^{1/3} \beta^{1/2} a^{1/3} m^{4/3}_\mathrm{e} c^{13/3}}{\kappa^{2/3}_{\nu\mathrm{0}} k^{4/3}_\mathrm{B}}. \end{equation} If the accelerated ions were carbon or oxygen nuclei, the energy gain of such ions would be ${\cal Z}$ times that of a proton. By substituting Equations \ref{eqn:lambda} and \ref{eqn:azero} and setting $\varpi_\mathrm{0} = \varpi_\mathrm{in}$, Equation~\ref{eqn:wmax3} becomes \begin{equation} {\cal W}_\mathrm{max} = {\cal W}_\mathrm{0} \phi_\mathrm{0}^{-1} \left[\left(z_\mathrm{w} + \Delta z_\mathrm{w}\right)^{\phi_\mathrm{0}} - z_\mathrm{w}^{\phi_\mathrm{0}}\right] \left(\frac{c^2}{6}\right)^{\phi_\mathrm{0}+1} \dot{M}^{3/4} \left(GM\right)^{-(\phi_\mathrm{0}+2/3)}. \label{eqn:wmax4} \end{equation} Note that the maximum energy gain ${\cal W}_\mathrm{max}$ becomes exactly independent of the mass of the central object $M$ if $\phi=1$.
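The reduction leading to Equation \ref{eqn:wmax3} hinges on the elementary integral of $z^{-3\phi/2}$; this step is easy to verify numerically. A minimal Python sketch (the integration bounds below are arbitrary placeholders):
\begin{verbatim}
from scipy.integrate import quad

def closed_form(z_w, dz_w, phi):
    """Closed form of the integral of z**(-3*phi/2) over
    [z_w, z_w + dz_w]: [(z_w+dz_w)**phi0 - z_w**phi0]/phi0,
    with phi0 = 1 - 3*phi/2 (as in Eq. wmax3)."""
    phi0 = 1.0 - 1.5 * phi
    return ((z_w + dz_w)**phi0 - z_w**phi0) / phi0

for phi in (0.5, 1.0):
    z_w, dz_w = 1.0e3, 5.0e2   # arbitrary placeholder bounds
    numeric, _ = quad(lambda z: z**(-1.5 * phi), z_w, z_w + dz_w)
    print(f"phi={phi}: numeric={numeric:.6e}, "
          f"closed form={closed_form(z_w, dz_w, phi):.6e}")
\end{verbatim}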
On the other hand, if $\phi=1/2$, it is less dependent on $M$ because $\Delta z_\mathrm{w}$ is independent of $\phi$. From Equation\,\ref{eqn:Lnu}, the mass accretion rate is expressed by $\dot{M} = 4 {\cal L}_\nu / c^{2}$, where ${\cal L}_\nu$ is the neutrino luminosity constrained by future observations, and therefore, substituting it into Equations\,\ref{eqn:zw}, \ref{eqn:dzw}, and \ref{eqn:wmax4}, we plot the relation between observables such as neutrino luminosity ${\cal L}_{\nu}$ and BH mass ${\cal M}$ for a given ${\cal W}_\mathrm{max}$ in Figure~\ref{fig:max_energy}. This shows that the neutrino luminosity necessary for a maximum energy gain of $< 10^{24}~\mathrm{eV}$ is well below the neutrino luminosity from the NDAF disks of $L_\nu = 4.47\times 10^{53}\,\mathrm{erg}\,\mathrm{s}^{-1}$. Therefore the energy gained by protons through the wakefield acceleration seems to be sufficient to account for both the extremely high energy cosmic rays (EHECRs) and the super-EHECRs of $10^{22 - 23}~\mathrm{eV}$ \citep{Takahashi2000}. In addition, the acceleration time required to reach the maximum proton energies of $< 10^{24}~\mathrm{eV}$ is not more than $\approx 1~\mathrm{s}$, and therefore the wakefield acceleration seems to be efficient for generating the super-EHECRs as well. \begin{figure} \epsscale{0.85} \plotone{f6.eps} \caption{Maximum energy of accelerated protons ${\cal W}_\mathrm{max} = 10^{18}$, $10^{20}$, $10^{22}$, and $10^{24}~\mathrm{eV}$ for $\phi=1/2$ (white) and $\phi=1$ (red) in the jets from the NDAFs in relation to the mass of the central objects and neutrino luminosity. Color-scale indicates the acceleration time duration defined as $\Delta z_\mathrm{w}/c$, which is independent of $\phi$. A black dotted line shows the neutrino luminosity of NDAFs with the mass accretion rate of $\dot{M}=1.0\,\mathrm{M}_\odot/\mathrm{s}$, indicating that the energy of protons gained by the wakefield acceleration is sufficient for a source of the extremely high energy cosmic rays (EHECRs) and the super-EHECRs of $10^{22 - 23}~\mathrm{eV}$ \citep{Takahashi2000}. The acceleration time required for the maximum energy of protons is not more than $\approx 10^{5}~\mathrm{s}$ and therefore the wakefield acceleration is efficient for generating the super-EHECRs as well.} \label{fig:max_energy} \end{figure} \section{Summary and Discussion}\label{sec:summary_and_discussion} In this paper, we have derived the properties of NDAF disks analytically and estimated the energy extracted via electro-magnetic (EM) wave pulses from the NDAF disks for the first time. We have found that the energy emitted by EM wave pulses becomes more than $10^{53}~\mathrm{erg}$ in the form of Poynting flux if the accreted mass reaches $\sim 1\,\mathrm{M}_\odot$. Such energy is sufficient to reverse the collapsing cores of massive stars into an explosion. Since NDAF disks can only be formed in collapsars under the condition that the specific angular momentum of the progenitor satisfies $j \gsim 1.5\times 10^{16}\,(M/\mathrm{M}_\odot)\,\mathrm{cm^{2}\,s^{-1}}$, NDAF disks can explain the bifurcation between HNe and faint SNe \citep{2013ARA&A..51..457N}. Therefore the explosion driven by EM wave pulses from NDAF disks is a plausible model for HNe.
The wave energy penetrating through the collapsing cores after the explosion seems to be enough to generate high-energy particles up to the super-extremely high energy cosmic rays (super-EHECRs) of $10^{22-23}\,\mathrm{eV}$ as a result of the wakefield acceleration for ions in the jets. Meanwhile, accelerated electrons and positrons can be the source of gamma-ray emissions as well as non-thermal emissions in GRBs \citep{Takahashi2000}. The neutrino spectra of NDAF disks extend up to $100\,\mathrm{MeV}$, with a peak neutrino energy of $18.3~\mathrm{MeV}$, which is an order of magnitude larger than the threshold energy of neutrinos in the previous study for evaluating the detection efficiency of the water Cherenkov detectors of Super-Kamiokande (SK) \citep{1998ApJ...496..216T,2003APh....18..551N}; at this energy the cross-section for inverse beta decay reaches $\gsim 10^{-41}~\mathrm{cm}^{2}$ \citep{2011JPhCS.309a2028S}. Therefore, neutrino signals from NDAF disks could be a primary target for SK. If SK could achieve timing capabilities equivalent to the Keplerian rotation period at the inner-edge of NDAF disks, $\delta t = 2\pi/\Omega_\mathrm{K}(\varpi_\mathrm{0}) \sim 4.0\times 10^{-4}\,\left(M/\mathrm{M}_\odot\right)\,\mathrm{s}$, the detection of neutrino intensity modulation as a result of deformations of NDAF disks would constrain the origin of the neutrino signals as well as the properties of the central BHs. \subsection{Dependency on the jet collimation profile} The collimation profile of the jets from NDAFs is unknown. But it can be extrapolated from the knowledge of jets from active galactic nuclei (AGNs), such as the powerful radio jet in M87. The collimation profile in the M87 jet is known to be parabolic within a few hundred Schwarzschild radii in the vicinity of a supermassive black hole \citep{2017PASJ...69...71H}. This is why we use a parabolic shape in the first place. But we also use a conical shape because accretion-disk winds, which are an alternative way to drive outflows, have been observed in AGNs \citep{Tombesi:2015cx}. By assuming the jet collimation profile lies between $\phi = 1/2$ and $\phi=1$, we can investigate a wider variety of the density stratification in the jet from NDAFs. The onset of the acceleration process between $\phi=1/2$ and $\phi=1$ is quite different, because we assume that the acceleration starts at the same density, magnetic field strength, and wakefield strength parameter according to the condition of $\omega_\mathrm{p}(z_\mathrm{w})/\omega_\mathrm{0}=1$, but at a different location depending on the jet collimation profile as shown in Figure~\ref{fig:jet}. Actually, the acceleration site for $\phi=1$ is $z_{\mathrm{w},\phi=1} = 8.4\times 10^{5}~\mathrm{r_{s}}$ whereas that of $\phi=1/2$ is $z_{\mathrm{w},\phi=1/2} = 2.3\times 10^{11}~\mathrm{r_{s}}$. As a result, the generation of high energy charged particles by the wakefield acceleration must be delayed by the propagation times $\delta t_{\phi=1} = z_{\mathrm{w},\phi=1}/c \approx 8.3~\mathrm{s}$ and $\delta t_{\phi=1/2} = z_{\mathrm{w},\phi=1/2}/c \approx 27~\mathrm{days}$, respectively. The steeper the density profile, the shorter the delay time becomes. If gamma-rays are emitted from accelerated high energy electrons and positrons in the jets, the delay time of those gamma-ray emissions can discriminate the density stratification along the jet.
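These delays are simply the light-crossing times of the acceleration sites. A short Python check (using $r_\mathrm{s}$ for $M=\mathrm{M}_\odot$, consistent with the numbers quoted above) reproduces them to within rounding:
\begin{verbatim}
r_s = 2.95e5          # Schwarzschild radius for M = 1 Msun [cm]
c = 2.998e10          # speed of light [cm/s]

# Acceleration sites quoted in the text, in units of r_s
z_w = {"phi=1": 8.4e5 * r_s, "phi=1/2": 2.3e11 * r_s}
for label, z in z_w.items():
    dt = z / c        # light-crossing delay
    print(f"{label}: z_w = {z:.2e} cm, "
          f"delay = {dt:.2e} s ({dt / 86400:.1f} days)")
# -> ~8.3 s for phi=1 and ~26 days for phi=1/2, as quoted
\end{verbatim}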
Actually, the \textit{Fermi} Gamma-ray Burst Monitor detected a gamma-ray burst (GRB 170817A) with a time delay of $\sim 1.7~\mathrm{s}$ with respect to the merger time of the gravitational wave event (GW170817) \citep{Abbott:2017it}. If this event is driven by a jet from a NDAF disk, such a short time interval is consistent with our models because the time interval between the gamma-ray emission and the gravitational wave emission is expected to be in the range between $\delta t_{\mathrm{obs},\phi=1} = \delta t_{\phi=1}\,(1 - \sqrt{\Gamma^2-1} \cos{\theta} / \Gamma) \approx 3.2\times 10^{-2}~\mathrm{s}$ and $\delta t_{\mathrm{obs},\phi=1/2} = \delta t_{\phi=1/2}\,(1 - \sqrt{\Gamma^2-1} \cos{\theta} / \Gamma) \approx 2.5~\mathrm{hours}$ if the jet were directed at angle $\theta = 5~\mathrm{degrees}$ with respect to the line of sight to the observer located far away from the source. Therefore, the density gradient is also consistent with the range of the jet collimation profile we used here. However, the opening angle of $45$ degrees for $\phi=1$ in the jets from NDAFs seems to be far from reality according to many other observations of GRBs \citep{2018A&A...609A.112G}, in which a typical opening angle is assumed to be $5$ degrees. Therefore, we may take into account hybrid collimation profiles, such as a two-component jet model \citep[the so-called spine-sheath jet model:][]{2003ApJ...594L..23V,2005NCimC..28..439P}. First and foremost, we need to incorporate a model based on numerical experiments of the magnetically driven jets penetrating through the progenitor ambient medium and propagating to larger distances for more detailed investigations. \subsection{Maximum energy of accelerated protons} In this study we have estimated the maximum energy of accelerated protons by integrating the work done by the wakefield force generated by a strong EM wave pulse over the maximum acceleration distance at the condition of $\omega_\mathrm{p}(z_\mathrm{w})/\omega_\mathrm{0} \lsim 1$. This condition is based on the idealized assumption for the wakefield accelerations, such as a wakefield generated behind a short intense EM pulse. In astrophysical wakefield acceleration, the EM wave pulse could have a broader and more complicated pulse structure. If such a composite large amplitude Alfv\'enic wave pulse (which has turned into an EM wave pulse) occurs, it is expected to produce an incessant alternation between acceleration and non-acceleration within the excited wakefield. For this reason, the spectrum of the number of charged particles accelerated to energy $\varepsilon$ follows a power law $\varepsilon^{-p}$ with an index $p=2$ \citep{1991AIPC..230...27M}. In the wakefield acceleration in the jet, the magnetic field acts as a guide for the propagating EM wave pulses, which excite the wake in the plasma behind the pulses. According to the dispersion relation of waves in a plasma with no magnetic field (taken for simplicity), $\omega^{2} = \omega^{2}_\mathrm{p} + k^{2} c^{2}$, the group velocity of EM wave pulses, $v_\mathrm{g} = d\omega/dk = c^{2} / (\omega/k)$, is equal to the phase velocity of the wake, $v_\mathrm{p,w} = v_\mathrm{g} = c\sqrt{1-(\omega_\mathrm{p}/\omega)^{2}}$; thus the Lorentz factor of the wake becomes $\gamma_\mathrm{w} = 1/\sqrt{1- (v_\mathrm{p,w}/c)^{2}} = \omega / \omega_\mathrm{p}$, where $\omega$ is the frequency of an EM wave pulse.
Once the wake phase velocity approaches nearly the speed of light beyond $z > z_\mathrm{w}$ (namely $\gamma_\mathrm{w} > 1$) along the jet, the electrons cannot compensate for the immense wakes produced by the EM wave pulses because the response of electrons is limited by the speed of light. In other words, the EM wave pulses can "run away" from the instability due to the interaction between wave and plasma, and they can continue to "run through" the plasma over a large distance, creating large-scale coherent wakes in the jet. Therefore the electric wakefields persist over the pump depletion length behind the EM wave pulses. This is the reason why we consider that the wake has stability and rigidity under the condition of $\omega_\mathrm{p}/\omega <1$. Unlike the Fermi acceleration, the wakefield acceleration does not require multiple reflections by a magnetic mirror, which cause asynchronicity and serious synchrotron radiation losses not only for electrons and positrons but also for protons with $\gsim 10^{20}~\mathrm{eV}$. Since the wakefield force, not the magnetic bending force, is responsible for accelerating charged particles, the wakefield acceleration is linear and synchronous with the propagating EM wave pulses and is more advantageous than the Fermi acceleration for producing the super-EHECRs of $10^{22 - 23}~\mathrm{eV}$, especially in the strongly magnetized jets from NDAFs. Another point is that accelerated high energy protons above the break energy, $\sim 10^{16}~\mathrm{eV}$, lose their energy by pion production through photo-meson interactions with energetic photons of $\sim 1~\mathrm{MeV}$ in the jet \citep{1997PhRvL..78.2292W, 2001LNP...576.....L}. Such energetic photons could be generated by the synchrotron radiation originating from accelerated electrons and positrons, which are generated by the wakefield acceleration in the jet in less than picoseconds (the lowest range of the color scale in Figure~\ref{fig:max_energy}). If the energy density of gamma-ray photons in the jet is larger than $10^{11}~\mathrm{erg/cm}^{3}$, the timescale of pion production is less than microseconds, which is comparable to the acceleration time for energetic protons of $< 10^{20}~\mathrm{eV}$ (see Figure~\ref{fig:max_energy}). This argument suggests that the simultaneous production of high energy protons and the reduction of their energy via pion production must be taken into account when evaluating the maximum energy of protons under more realistic conditions. Note that the high energy protons accelerated by the wakefield acceleration could be an alternative source of $\sim 10^{14}~\mathrm{eV}$ neutrinos as previously expected by \citet{1997PhRvL..78.2292W} in the context of a fireball model for GRBs \citep{1994AIPC..307..543P}. Our model thus involves a pinpointed spatial source of the emitter as well as its temporal structure, from which we could diagnose the conditions of the emitter \citep{2019arXiv190805993C}. This will be tested by estimating the flux of both gamma-ray emissions and high-energy neutrinos and comparing them with the observations of IceCube \citep{2017ApJ...843..112A} in the context of our model in the near future. \section{Conclusion}\label{sec:conclusion} We have demonstrated the wakefield acceleration in the jets from NDAF disks as a model of gamma-ray bursts. The wakefield acceleration predicts various observational signatures which could be detected in the future.
The time-variability of $\lsim 100~\mathrm{MeV}$ neutrino emissions from the NDAF disks may discriminate the nature of an EM wave pulse which is responsible for driving the wakefield in the jets. The tracing of gamma-ray emissions from high energy electrons and positrons and the subsequent burst of $\sim 10^{14}~\mathrm{eV}$ neutrinos may disclose the onset of wakefield acceleration in the jets. The detection of the extremely high energy cosmic rays (EHECRs) of $10^{21 -22}~\mathrm{eV}$ and the super-EHECRs of $10^{22 -23}~\mathrm{eV}$ within several hours after both gamma-ray emissions and neutrino bursts could be a smoking gun for the astrophysical wakefield acceleration. Because of the collective nature of high energy astrophysics and particle acceleration in ultra-relativistic regimes, the wakefield acceleration will be a key player in multi-messenger astronomy. \begin{acknowledgments} This work is supported in part by the Norman Rostoker Fund. The authors would like to thank Prof. R. Matsumoto, Prof. H. Sobel, Prof. S. Nagataki, and Prof. T. Totani for fruitful discussions and helpful comments. \end{acknowledgments}
\section{Introduction} \label{sec:intro} Undulatory locomotion, the self-propulsion of an organism via the passage of deformation waves along its body, is ubiquitous in nature \cite{gray1953undulatory,cohen2010swimming}. Flagellated microorganisms swim in fluids \cite{gray1955propulsion, chwang1971note, lighthill1976flagellar, keller1976swimming,purcell1977life, higdon1979hydrodynamic}, snakes slither on land \cite{gray1946mechanism, guo2008limbless, Hu23062009,alben2013} and sandfish lizards (\textit{Scincus scincus}) undulate in granular substrates \cite{baumgartner2008investigating, maladen2009undulatory, ding2012mechanics}. Yet the underlying physics differ: from viscous forces \cite{lauga2009hydrodynamics} in fluids to frictional forces \cite{maladen2009undulatory} in terrestrial media. The investigation of these undulatory mechanisms in different environments advances our understanding of various biological processes \cite{cohen2010swimming, fauci2006biofluidmechanics} and provides insights into the effective design of biomimetic robots \cite{williams2014self,maladen2011undulatory}. The swimming of microorganisms in Newtonian fluids, where viscous forces dominate inertial effects, is governed by the Stokes equations \cite{lauga2009hydrodynamics}. Despite the linearity of the governing equation, locomotion problems typically introduce geometric nonlinearity, making the problem less tractable \cite{sauzade11}. For slender bodies such as flagella and cilia, Gray and Hancock \cite{gray1955propulsion} exploited their slenderness to develop a local drag model, called resistive force theory (RFT), which has been shown useful in modeling flagellar locomotion and the design of synthetic micro-swimmers \cite{lauga2009hydrodynamics,pak2014theoretical}. In this local theory, hydrodynamic interactions between different parts of the body are neglected and the viscous force acting on a part of the body depends only on the local velocity relative to the fluid. Using RFT, Lighthill showed that, for an undulating filament of infinite length, the sawtooth waveform is the optimal beating pattern maximizing hydrodynamic efficiency \cite{lighthill1976flagellar}. Locomotion in granular media (GM) is relatively less well understood due to their complex rheological features \cite{zhang2014effective, goldman2014colloquium}. The frictional nature of the particles generates a yield stress, a threshold above which the grains flow in response to external forcing \cite{goldman2014colloquium}. Different from viscous fluids, the resistance experienced by a moving intruder originates from the inhomogeneous and anisotropic response of the granular force chains, which are narrow areas of strained grains surrounded by the unstrained bulk of medium \cite{albert1999slow}. At low locomotion speed, where the granular matter is in a quasi-static regime, the effect of inertia is negligible compared to frictional and gravitational forces from granular media \cite{ding2012mechanics}, which is similar to the situation in a low Reynolds-number fluid. In this regime, studies measuring the drag force of an intruder moving through a GM reveal that the drag force is independent of the speed of the intruder, but increases with the depth of the GM and is proportional to the size of the intruder \cite{albert1999slow,hill2005scaling,schroter2007phase,zhou2007simul,seguin2011dense}. Recently, Maladen \textit{et al}.\ \cite{maladen2009undulatory} studied the subsurface locomotion of sandfish in dry granular substrates.
While the crawling and burying motion of a sandfish is driven by its limbs, an undulatory gait is employed for subsurface locomotion without the use of limbs. Using high speed x-ray imaging, the subsurface undulating pattern of the sandfish body was found to be well described by a sinusoidal waveform. A major challenge in the quantitative analysis of locomotion in granular materials is a lack of validated force models like the Stokes equation in viscous fluids \cite{zhang2014effective, goldman2014colloquium}. But inspired by the success of RFT for locomotion in viscous fluids, Maladen \textit{et al}.\ \cite{maladen2009undulatory} developed an empirical RFT in dry granular substrates for slender bodies (Sec.~\ref{subsec:RFT}), which was shown effective in modeling the undulatory subsurface locomotion of sandfish \cite{maladen2009undulatory}. The proposed force model thus enables theoretical studies to address some fundamental questions on locomotion in granular media. In this paper we employ the proposed RFT to investigate the swimming characteristics of a slender filament of finite and infinite length undulating in a granular medium and compare the results with those in viscous fluids. In particular, previous analysis using the granular RFT considered only force balance in one direction \cite{maladen2009undulatory} and hence a swimmer could only follow a straight swimming trajectory in this simplified scenario. Here we extend the results by considering full three-dimensional force and torque balances, resulting in more complex kinematics such as pitching, drifting and reorientation. The swimming performance in relation to these complex kinematics is also discussed. The paper is organized as follows. We formulate the problem and review the recently proposed RFT in granular media in Sec.~\ref{sec:form}. Swimmers of infinite length are first considered (Sec.~\ref{sec:inf}): we determine that the optimal waveform maximizing swimming efficiency, similar to results in viscous fluids, is a sawtooth (Sec.~\ref{subsec:opt}); we then study the swimming characteristics of sawtooth and sinusoidal swimmers in granular media and compare the results with swimming in viscous fluids (Sec.~\ref{subsec:sawNsine}). Next we consider swimmers of finite length (Sec.~\ref{sec:fin}) and characterize the effects of drifting and pitching in terms of propulsion speed and efficiency, before concluding the paper with remarks in Sec.~\ref{sec:discussion}. \vspace{1.75in} \section{Mathematical Formulation} \label{sec:form} \subsection{Kinematics} \label{subsec:kinematic} We consider an inextensible cylindrical filament of length $L$ and radius $r$ such that $r \ll L$, and assume that it passes a periodic waveform down along the body to propel itself in granular substrates. Following Spagnolie and Lauga \cite{spagnolie2010optimal}, the waveform is defined as $\mathbf{X}(s) =[X(s),Y(s),0]^{\mathsf{T}}$, where $s \in [0,L]$ is the arc length from the tip. The periodicity of the waveform can then be described as \begin{align}\label{eq:periodicity_condition} X(s+\Lambda)=X(s)+\lambda, \quad Y(s+\Lambda)=Y(s), \end{align} where $\lambda$ is the wave length and $\Lambda$ the corresponding arc length along the body. $N$ is the number of waves passed along the filament. Note that $L=N\Lambda$ and $\lambda =\alpha \Lambda$, where $0<\alpha<1$ is due to the bending of the body \cite{spagnolie2010optimal}.
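To make the relation $\lambda=\alpha\Lambda$ concrete, $\alpha$ can be computed from the arc length of one wavelength. The following Python sketch (illustrative only) does this for a sinusoidal waveform $Y=b\sin kX$ of dimensionless amplitude $\epsilon = kb = 1$; the waveform itself is defined in Sec.~\ref{subsec:waveform}:
\begin{verbatim}
import math
from scipy.integrate import quad

eps = 1.0                      # dimensionless amplitude, epsilon = k*b
lam = 2.0 * math.pi            # take k = 1, so lambda = 2*pi

# Arc length of Y = (eps/k) sin(kX) over one wavelength:
# Lambda = integral of sqrt(1 + eps^2 cos^2 X) dX
Lambda, _ = quad(lambda X: math.sqrt(1.0 + (eps * math.cos(X))**2),
                 0.0, lam)

alpha = lam / Lambda           # 0 < alpha < 1 due to body bending
print(f"alpha = {alpha:.4f}")  # ~0.82 for eps = 1
\end{verbatim}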
\begin{figure*}[!htb] \centering \includegraphics[scale=1]{schematic.pdf} \caption{\label{fig:schematic}Illustration of an undulating slender filament and the resistive force theory in granular media. The body propagates a prescribed waveform to propel itself. Each element $\,\text{d} s$ experiences a drag force $\,\text{d} \mathbf{F} = \mathbf{f} \,\text{d} s$. The basis vectors $\{\mathbf{e}_{x}, \mathbf{e}_{y}\}$ and the position vectors of its head $\mathbf{x}(0,t)$ and a material point $\mathbf{x}(s,t)$ on the body in the lab frame are shown ($\mathbf{e}_{z} = \mathbf{e}_{x}\times \mathbf{e}_{y}$). The angle between the local velocity $\mathbf{u}$ and unit tangent vector $\mathbf{t}$ is $\psi(s,t)$.} \end{figure*} Initially, the filament is oriented along the $x$-axis of the lab frame with its head at $\mathbf{x}_0$. At time $t$, the filament is passing the waveform at a phase velocity $\mathbf{V}$ (with constant phase speed $V$) along the waveform's centerline, which is oriented at an angle $\theta(t)$ to the $x$-axis (Fig. \ref{fig:schematic}). In a reference frame moving with the wave phase velocity $\mathbf{V}$, a material point on the filament is moving tangentially along the body with speed $c = V/\alpha$, and hence the period of the waveform is $T = \lambda/V = \Lambda/c$. By defining the position vector of a material point at location $s$ and time $t$ in the lab frame as $\mathbf{x}(s,t)$, we obtain \begin{align}\label{eq:positionvec.} \mathbf{x}(s,t)-\mathbf{x}(0,t)= \mathbf{\Theta}(t)\cdot\mathbf{R}(s,t), \end{align} where \begin{align}\label{eq:rotationmatrix} \mathbf{\Theta}(t)=\begin{bmatrix} \cos\theta(t)&-\sin\theta(t)&0\\ \sin\theta(t) &\cos\theta(t)&0\\ 0&0&1 \\ \end{bmatrix} \end{align} is the rotation matrix, and $\mathbf{R}(s,t)=\mathbf{X}(s,t)-\mathbf{X}(0,t)$, and note that $\mathbf{X}(s,t)= \mathbf{X}(s-ct)$. Then, the velocity of each material point in the lab frame would be \begin{align}\label{eq:velocity_relation} \mathbf{u}(s,t)=\dot{\mathbf{x}}(0,t)+ \dot{\theta} \mathbf{\Theta}\cdot \mathbf{R}^\perp+ \mathbf{\Theta} \cdot\dot{\mathbf{R}}, \end{align} where $\mathbf{R}^\perp=\mathbf{e}_z \times \mathbf{R}$, and dot denotes time derivative. The unit tangent vector in the direction of increasing $s$ is \begin{align}\label{eq:tangent_vec} \mathbf{t}= \mathbf{x}_s=\mathbf{\Theta}\cdot\mathbf{X}_s(s,t), \end{align} where the subscript $s$ denotes the derivative with respect to $s$. The angle between the local velocity vector $\mathbf{u}$ and the local unit tangent vector $\mathbf{t}$ is $\psi$: \begin{align} \cos\psi = \hat{\bu}\cdot\mathbf{t}, \quad \hat{\bu} = \frac{\mathbf{u}}{\| \mathbf{u}\|} \cdot \end{align} Now, to define the waveform we specify the tangent angle made with the centerline of the waveform \begin{align}\label{eq:waveform} \varphi(s,t)=\arctan\frac{Y_s}{X_s} \cdot \end{align} Note that we have the following geometric relations: \begin{align} \mathbf{R}&=\int_0^s \mathbf{X}_s \,\text{d} s, \quad \dot{\mathbf{R}} = \int_0^s \dot{\varphi}\mathbf{X}^\perp_s\,\text{d} s,\label{eq:R}\\ \mathbf{t} &= \mathbf{\Theta}\cdot\mathbf{R}_s=\mathbf{\Theta}\cdot\mathbf{X}_s, \end{align} where $\mathbf{X}^\perp_s = \mathbf{e}_z\times \mathbf{X}_s$, and \begin{align} \alpha = \frac{\lambda }{\Lambda} = \frac{1}{\Lambda} \int_0^\Lambda \cos\varphi \,\text{d} s.
\end{align} The inextensibility assumption requires that $\partial [\mathbf{x}_s\cdot\mathbf{x}_s]/\partial t =0$, and the arc-length parameterization of the swimming filament naturally satisfies this constraint. The tangent angle is specified as a composition of different Fourier modes: \begin{align}\label{eq:Fourier_psi} \varphi(s,t)=\sum\limits_{n=1}^{n^*} \left\{a_n \cos\left[\frac{2\pi n}{\Lambda}\left(s-ct\right)\right]+b_n \sin\left[\frac{2\pi n}{\Lambda}\left(s-ct\right)\right]\right\}, \end{align} where \begin{align}\label{eq:F_coef} a_n&= \frac{2}{\Lambda}\int_0^\Lambda \varphi(s,0) \cos\left[\frac{2\pi n s}{\Lambda}\right] \,\text{d} s,\\ b_n&= \frac{2}{\Lambda}\int_0^\Lambda \varphi(s,0) \sin\left[\frac{2\pi n s}{\Lambda}\right] \,\text{d} s, \quad n=1, 2, 3, ... \end{align} \subsection{Resistive force theory} \label{subsec:RFT} In low Reynolds number swimming of a slender filament in a Newtonian fluid, the resistive forces are linearly dependent on the local velocity. The force per unit length exerted by the fluid on the swimmer body at location $s$ and time $t$ is given by \begin{align} \mathbf{f}(s,t) = -K_T \mathbf{u} \cdot\mathbf{t}\bt -K_N(\mathbf{u}-\mathbf{u}\cdot\mathbf{t}\bt), \end{align} where $K_N$ and $K_T$ are, respectively, the normal and tangential resistive coefficients. The self-propulsion of elongated filaments is possible because of drag anisotropy ($K_N \neq K_T $). A detailed discussion on this property can be found in the review paper by Lauga and Powers \cite{lauga2009hydrodynamics}. Recent experimental studies of direct force and motion measurements on undulatory microswimmers in viscous fluids find excellent agreement with RFT predictions \cite{friedrich2010high,schulman2014dynamic}. The ratio $r_{K}= K_N/K_T$ varies with the slenderness ($L/r$) of the body. In the limit of an infinitely slender body, $L/r \rightarrow \infty$, $r_{K} \rightarrow 2$, which is the value adopted in this study. For undulatory locomotion in dry granular media, we only consider the slow motion regime where grain-grain and grain-swimmer frictional forces dominate material inertial forces \cite{maladen2009undulatory}. The motion of the swimmer is confined to the horizontal plane such that the change of resistance due to depth is irrelevant. In this regime the granular particles behave like a dense frictional fluid where the material is constantly stirred by the moving swimmer \cite{zhang2014effective}. The frictional force acting tangentially everywhere on the surface of a small cylindrical element is characterized by $C_{F}$, which is referred to as the flow resistance coefficient \cite{maladen2009undulatory}. The other contribution to the resistive forces is the in-plane drag-induced normal force, which is characterized by $C_{S}$. Note that $C_{S}$ is a constant because the drag is independent of the velocity magnitude. The normal resistive coefficient $C_\perp$ depends on the orientation ($\psi$) of the element with respect to the direction of motion (Fig. \ref{fig:schematic}).
In other words, the resistive force exerted by the granular material on the swimmer per unit length is \begin{align}\label{eq:RFTGM} \mathbf{f}(s,t)= -C_\parallel \hat{\bu}\cdot\mathbf{t}\bt-C_\perp(\hat{\bu}-\hat{\bu}\cdot\mathbf{t}\bt), \end{align} where \begin{gather}\label{eq:RFTco} C_\parallel = 2rC_F,\\ C_\perp(\psi) = 2rC_F+ \frac{2rC_S \sin \beta_0}{\sin \psi} = C_\parallel \left(1+\frac{C_S\sin\beta_0}{C_F\sin\psi}\right), \end{gather} $\tan \beta_{0}=\cot \gamma_{0} \sin \psi$ and $\gamma_0$ is a constant related to the internal slip angle of the granular media \cite{maladen2009undulatory}. Although a complete physical picture of the dependence of $C_\perp$ on the orientation $\psi$ remains elusive, the application of the granular RFT proves to be effective. Several studies have applied the granular RFT to study the locomotion of sand-swimming animals and artificial swimmers and found good agreement with experiments and numerical simulations \cite{maladen2011undulatory,zhang2014effective}. A detailed discussion about the effectiveness of granular RFT on modelling sand-swimming can be found in a review article by Zhang and Goldman \cite{zhang2014effective}. An important parameter characterizing the response of dry GM to intrusion is the volume fraction $\phi$, which is defined as the ratio of the total volume of the particles to the occupied volume. The level of compaction affects drag response as closely packed (high $\phi$) GM expands to flow while loosely packed (low $\phi$) material consolidates \cite{maladen2009undulatory}. The drag parameters $C_{S}, C_{F}$ and $\gamma_{0}$ depend on the volume fraction of the GM. In our study, we refer to the GM with $\phi=0.58$ as loosely packed (LP) whereas $\phi=0.62$ as closely packed (CP). The numerical values of the drag parameters are adopted from the paper by Maladen \textit{et al}. \cite{maladen2009undulatory}, where the forces at a fixed depth of $7.62$ cm were measured by towing a cylinder of stainless steel. Without external forcing, the self-propelled filament satisfies force-free and torque-free conditions: \begin{align} \mathbf{F}& =\int_0^L \mathbf{f}(s,t) \,\text{d} s=\textbf{0}, \label{eq:Fbalance}\\ \mathbf{T} &=\int_0^L [\mathbf{x}(s,t)-\mathbf{x}(0,t)] \times \mathbf{f}(s,t)\,\text{d} s= \textbf{0}.\label{eq:Tbalance} \end{align} The granular RFT exhibits the symmetry property that $\mathbf{u} \to -\mathbf{u}$ results in $\mathbf{f} \to -\mathbf{f}$. Combining this symmetry with the kinematics of the undulatory locomotion (see Sec.~\ref{subsec:kinematic}), one can show that the velocities $-\dot{\mathbf{x}}(0,t)$ and $-\dot{\theta}$ are solutions to the instantaneous motion under a reversal of the actuation direction ($c\to -c$) provided that $\dot{\mathbf{x}}(0,t)$ and $\dot{\theta}$ are solutions to the original problem (without reversal of the actuation). This symmetry is of course present in viscous RFT and this commonality, as we shall show, leads to qualitatively similar swimming behaviors. \subsection{Swimming efficiency} \label{subsec:efficiency} The instantaneous swimming speed of the filament is given by $\dot{\mathbf{x}}(0,t)$, and the mean swimming velocity is defined as $\mathbf{U} = \left<\dot{\mathbf{x}}(0,t)\right> = U_{x}\mathbf{e}_{x}+U_{y} \mathbf{e}_{y}$ with the magnitude $U=\lVert \mathbf{U}\rVert$. The angle brackets $\left<...\right>$ denote time-averaging over one period $T$.
The efficiency of the undulatory locomotion for a given deformation wave is defined by the ratio of the power required to drag the straightened filament through the surrounding substance to the power spent to propel the undulating body at the same velocity \cite{lighthill1975mathematica}. Hence, the efficiency for undulatory swimming of slender filaments in viscous fluid ($\eta_f$) and granular substance ($\eta_g$), respectively, are \begin{align} \label{eq:effi} \eta_f = \frac{K_T L U^2}{P}, \quad \eta_g = \frac{C_\parallel L U}{P}, \end{align} where \begin{align} P = \left<\int_0^L \mathbf{f}(s,t)\cdot\mathbf{u}(s,t)\,\text{d} s\right> \cdot \end{align} The optimal swimming can then be interpreted as either swimming with the maximum speed at a given power or swimming with the minimum power at a given speed. \subsection{Waveforms} \label{subsec:waveform} We consider two typical planar waveforms that have been well studied in Newtonian swimming: the sinusoidal waveform, and the sawtooth waveform (Fig.~\ref{fig:wave}). The sinusoidal waveform can be described by its Cartesian coordinates: \begin{align}\label{eq:SineCart} Y = b \sin k(X+X_{0}), \end{align} where $k = 2\pi/\lambda$ is the wave number, $kX_{0}$ is the initial phase angle of the waveform, and $b$ the wave amplitude. The dimensionless wave amplitude is defined as $\epsilon = kb$. The sawtooth waveform, which consists of straight links with a bending angle $\beta$ ($\varphi = \pm \beta/2$), can be described as \begin{align} \label{eq:sawtoothEq} Y = \frac{2b}{\pi} \arcsin[\sin k(X+X_0)]. \end{align} The dimensionless amplitude $\epsilon = k b= (\pi/2)\tan(\beta/2)$. \begin{figure*}[!htb] \centering \includegraphics[scale=1]{wave.pdf} \caption{\label{fig:wave} Undulating filaments with a single wave ($N=1$). (a) sinusoid, $kX_{0}=0$; (b) sawtooth, $kX_{0}=0$.} \end{figure*} \section{Bodies of infinite length} \label{sec:inf} For bodies of infinite length ($L\to\infty$), the swimming motion is steady and unidirectional, and hence $\dot{\theta}(t)=0$. Without loss of generality, we assume the filament propagates the deformation wave in the positive $x$-direction. Then the velocity of a material point on the body can be written as \begin{align}\label{eq:INFV} \mathbf{u} = -U \mathbf{e}_x +V\mathbf{e}_x-c\mathbf{t}, \end{align} where $U$ is the swimming speed \cite{lighthill1975mathematica}. For an infinite swimmer, the unidirectional swimming velocity for a given waveform can be obtained from only the force balance in the $x$-direction, $\mathbf{F}\cdot\mathbf{e}_x=0$, over a single wavelength, \begin{align} \int_0^\Lambda\left(\frac{C_S \sin\beta_0}{\sin\psi}+C_F\right)\hat{\bu}\cdot\mathbf{e}_x\,\text{d} s -\int_0^\Lambda\frac{C_S \sin\beta_0}{\sin\psi}(\hat{\bu}\cdot\mathbf{t})\mathbf{t}\cdot\mathbf{e}_x\,\text{d} s=0.\label{eq:Fx0} \end{align} The above integral equation can be solved for $U$ numerically for a given waveform in general, but is analytically tractable in certain asymptotic regimes, which we discuss below. \subsection{Optimal shape: numerical results} \label{subsec:opt} A natural question for swimming organisms is how their swimming gaits evolve under the pressure of natural selection \cite{childress1981mechanics}, since being able to swim does not necessarily mean one does it efficiently. The understanding of optimal swimming may reveal nature's design principles and guide the engineering of robots capable of efficient self-propulsion.
As a response, the optimal strategies of several Newtonian swimming configurations have been studied. Becker \textit{et al}. \cite{becker2003self} determined the optimal strategy of Purcell's three-link swimmer under constant forcing and minimum mechanical work. Tam and Hosoi \cite{TamPRL} improved the swimming speed and efficiency of the optimal strategy of Purcell's three-link swimmer by allowing simultaneous rather than sequential movement of both hinges (kinematic optimization). Using viscous RFT, Lighthill showed that the optimal flagellar shape has constant angle between the local tangent to the flagellum and the swimming direction \cite{lighthill1976flagellar}. In 2D, the sawtooth profile with a tangent angle $\varphi \approx \pm 40^\circ$ (bending angle $\beta \approx 80^{\circ}$) was found to optimize the swimming efficiency of an infinite length swimming filament. Alternatively, this solution can be obtained through a variational approach \cite{spagnolie2010optimal}. In 3D, Lighthill's solution leads to an optimal shape of a rotating helix. More recently, Spagnolie and Lauga studied the optimal shapes for both finite and infinite elastic flagellum by incorporating physical constraints such as bending and sliding costs \cite{spagnolie2010optimal}. Inspired by the investigations of optimal strategies for Newtonian swimming, we study the optimal shape for infinite swimmers in granular substrates using resistive force theory. For bodies of infinite length, the optimal shape is time, scale and phase invariant \cite{spagnolie2010optimal}. Therefore, we take $\Lambda=L=1$ and consider the optimization for $t=0$. In other words, the local tangent angle for the optimization problem would be \begin{align} \varphi(s,t=0)=\sum\limits_{n=1}^{n^*} a_n \cos(2\pi ns). \end{align} We consider the optimal filament shape by maximizing the swimming efficiency $\eta$ defined in Sec.~\ref{subsec:efficiency}. Once the local tangent angle is obtained, the shape itself can be recovered by integration. The numerical methods used in this optimization can be found in the Appendix. \begin{figure*}[!htb] \centering \includegraphics[scale=1]{opt.pdf} \caption{\label{fig:opt}Optimal shapes in terms of swimming efficiency for an infinite filament in a granular substrate (LP, CP) and Newtonian fluid. The spatial coordinates are scaled to the same wave length. For loosely packed granular material, the optimal shape is almost the same as the analytical result of Lighthill's in Newtonian fluid. } \end{figure*} The optimal shapes found by maximizing the swimming efficiency are presented in Fig.~\ref{fig:opt} for a LP granular substrate (red dashed line), a CP granular substrate (blue dash-dot line), and a viscous Newtonian fluid (black solid line) as a comparison. First, it is interesting that the optimal shape stays as sawtooth despite the nonlinearity in the resistive force model of granular substrates. The optimal bending angles for LP and CP granular media are, respectively, $\beta \approx 80^{\circ}$ and $\beta \approx 87^{\circ}$. The associated efficiencies of the optimal shapes are around $0.56$ for LP and $0.51$ for CP granular substrates, which are much greater than that of Newtonian swimming. In spite of the difference in the surrounding media, the optimal bending angle for granular substrates and viscous Newtonian fluids lie within the same range; in particular, the optimal sawtooth in LP closely resembles that in Newtonian fluids. 
We argue that it is not surprising that the sawtooth waveform is optimal in both the viscous RFT and the nonlinear granular RFT. Given an angle that maximizes the efficiency of a local element. Without any penalty, the globally optimal shape would be the one that is locally optimal everywhere along the body. As a result, a local resistive force model should exhibit an optimal shape of a certain sawtooth waveform. Using this argument, we can simply drop the integration (or assume it is a sawtooth) in Eq. (\ref{eq:Fx0}) and consider the local optimality. The local optimal angle obtained is indeed the same as that found using numerical global optimization (see Sec.~\ref{subsec:sawNsine}). The existence of a locally optimal tangent angle $\varphi$ originates from the physical picture introduced by the drag-based propulsion model \cite{lauga2009hydrodynamics} (Fig.~\ref{fig:schematic}). Let $\mathbf{u}_{d}=u_{d}\mathbf{e}_{y}$ be the transverse deformation velocity of an infinite swimming filament. Then a propulsive force, which is perpendicular to the direction of the deformation velocity, generated by this deformation can be given by $\mathbf{f}_{\textrm{prop}} = -(C_{\perp}(\psi)-C_{\parallel}) \sin\varphi\cos\varphi \mathbf{e}_{x}$. Therefore, the propulsive force arising from a local deformation of the filament scales with its orientation as $\sin\varphi\cos\varphi/\sqrt{\tan^{2}\gamma_{0}+\cos^{2}\varphi}$, the maximum of which is achieved when $\varphi \approx 64^{\circ}$. However, as the tangent angle increases, the power consumption of the swimming filament increases. As a result, the swimmer tends to reduce the tangent angle to decrease the energy expenditure while maintaining a relatively high propulsive force. It is the interplay of these two factors that determines the optimal tangent angle. \subsection{Sawtooth and sinusoid} \label{subsec:sawNsine} The swimming speed of an infinite sawtooth in viscous fluids can be expressed as \begin{align}\label{eq:sawNewtoninf} \frac{U}{V} = \frac{1-\cos\beta}{3-\cos\beta} \cdot \end{align} For a sawtooth profile in granular substrates, although an explicit analytical solution cannot be extracted, an implicit algebraic equation for the swimming speed $U$ can be obtained since the local resistive forces do not vary along the body: \begin{align}\label{eq:SawtoothINF} \left(\frac{C_S \sin\beta_0}{\sin\psi}+C_F\right)\hat{\bu}\cdot\mathbf{e}_x -\frac{C_S \sin\beta_0}{\sin\psi}(\hat{\bu}\cdot\mathbf{t})\mathbf{t}\cdot\mathbf{e}_x=0, \end{align} where $\mathbf{t}\cdot\mathbf{e}_x = \cos(\beta/2)$. We then solve Eq.~(\ref{eq:SawtoothINF}) numerically (see Appendix) with the same convergence criterion as in the optimization (Sec.~\ref{subsec:opt}). For a sinusoidal wave in granular media, a simplification like Eq.~\ref{eq:SawtoothINF} is not available and we therefore directly solve Eq.~(\ref{eq:Fx0}) with the numerical method outlined in the Appendix. For small amplitude sawtooth waveforms ($\epsilon \ll 1$), or small bending angle $\beta$, we obtain an asymptotic solution of the swimming speed $U$. Note that the swimming speed is invariant under a phase shift of $\pi$, which is equivalent to a sign change in the amplitude: $\epsilon \rightarrow -\epsilon$. 
Assuming a regular expansion in $\epsilon$, this symmetry argument leads to a quadratic scaling of the swimming speed in the wave amplitude \cite{pak2014theoretical} \begin{align} \frac{U}{V}\sim \frac{4\cos\gamma_0C_S}{\pi^2C_F}\epsilon^2 \cdot \end{align} When the bending angle is large, another asymptotic limit can be obtained. The swimming speed $U/V$ approaches a constant as $\beta \to \pi$ and analytically we find that \begin{align} \frac{U}{V} \sim \frac{C_{S}}{C_{S}+C_{F}\tan\gamma_{0}} \cdot \end{align} One can also show that this large amplitude asymptotic limit for a sawtooth equals that of a sinusoidal wave. For small amplitude sinusoidal waveforms, however, the nonlinearity of the shape and the resistive forces results in a non-uniform integral and a slowly converging asymptotic series. To leading order, the swimming speed $U/V$ scales as $\epsilon^{2}/\ln(1/\lvert\epsilon\rvert) $, which does not agree well with the numerical results even for $\epsilon< 0.1 $ as the higher order terms being truncated are not significantly smaller. We present the small and large amplitude asymptotic solutions for the granular swimming of a sawtooth profile in Fig.~\ref{fig:inf}(a). The asymptotic solutions agree well with the numerical solutions even for wave amplitudes close to one. Fig.~\ref{fig:inf}(b) shows the efficiency of swimming as a function of the bending angle for an infinite sawtooth in both granular media and viscous fluids. For swimming efficiency, a global maximum in bending angle exists for both viscous and granular swimming. Note that the optimal angles obtained here are equal to those obtained via the global optimization (Sec.~\ref{subsec:opt}). \begin{figure*}[!htb] \centering \includegraphics[scale=1]{inf.pdf} \caption{\label{fig:inf}(a): Swimming speed of infinite sawtooth waveforms as a function of amplitude $\epsilon$ (or bending angle $\beta$) in granular material and Newtonian fluids. The dashed lines indicate the small and large amplitude asymptotic solutions. (b): Efficiency of infinite sawtooth waveforms as a function of amplitude $\epsilon$ (or bending angle $\beta$) in granular material and Newtonian fluids.} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[scale=1]{sawnsine.pdf} \caption{\label{fig:sawNsine}A comparison of the swimming speed (a) and efficiency (b), as a function of wave amplitude $\epsilon$ for sawtooth and sinusoidal waveforms in granular substrates and Newtonian fluids.} \end{figure*} In Fig.~\ref{fig:sawNsine}, we compare the swimming speed and efficiency of sawtooth and sinusoidal waveforms in both GM and Newtonian fluids as a function of the wave amplitude $\epsilon$. In both GM and Newtonian fluids, the swimming speed of a sawtooth is only slightly different from that of a sinusoid with the same dimensionless amplitude. This small difference indicates that the effects of the local curvature variations are not significant in both the granular and viscous RFT. Although the sawtooth is found to be the mathematically optimal shape, the undulatory gait of a sandfish resembles a smooth sinusoidal waveform \cite{maladen2009undulatory}. The slight difference in swimming performance between the two waveforms presented in this section might justify the adoption of a sinusoidal waveform instead of the mathematically optimal sawtooth waveform, since the kinks in the sawtooth may involve other energetic costs associated with bending and the deformation of the internal structure of the body \cite{spagnolie2010optimal}. 
\section{Bodies of finite length} \label{sec:fin} The infinite swimmer model only enforces a force balance in one direction and hence a swimmer is confined to swim only unidirectionally without any rotation. In reality, however, a swimmer has a finite size and more complex swimming kinematics, including transverse motion relative to the wave propagation direction and rotation. Previous studies employed slender body theory to investigate the swimming motion of finite filaments in a viscous Newtonian fluid and their swimming performance in relation to number of wavelengths and filament length \cite{pironneau1974optimal, higdon1979hydrodynamic, spagnolie2010optimal,koehler2012pitching}. In this section, we investigate the swimming characteristics of finite-length sinusoidal swimmers in a granular medium and compare with their Newtonian counterparts. The numerical methods implemented to solve the equations of motion of a finite length swimmer are given in the Appendix. \subsection{Geometries} \label{subsec:geom} \begin{figure*}[!htb] \centering \includegraphics[scale=1]{oddeven.pdf} \caption{\label{fig:oddEven}Shapes of swimming finite length single wave ($N=1$) sinusoidal filaments for different wave amplitude $\epsilon$. (a): the odd sine configuration, with $kX_{0}=0$, (b): the even cosine configuration, with $kX_{0}=\pi/2$. The waveforms are rescaled to the same wave length for better comparison.} \end{figure*} For an undulating sinusoidal filament, the initial shape of the swimmer is determined by the number of waves $N$, the wave amplitude $\epsilon$, and the initial phase angle $kX_{0}$ (Eq.~(\ref{eq:SineCart})). The two specific categories of shapes that possess odd or even symmetry for a single wave sinusoidal swimmer are shown in Fig.~\ref{fig:oddEven}. A swimmer in an odd configuration is the one that has point symmetry about the midpoint of the filament as seen in Fig.~\ref{fig:oddEven}(a), while an even configuration is the one that possesses mirror symmetry about the vertical line through the midpoint as in Fig.~\ref{fig:oddEven}(b). In our paper, the shapes shown in Fig. \ref{fig:oddEven}(a) are referred to as odd sine swimmers, while even cosine swimmers are those shown in Fig. \ref{fig:oddEven}(b). Note that an even sine swimmer would be the one that has the number of waves $N \in \{1/2, 3/2,5/2, ...\}$ and a phase angle $kX_{0} \in \{ 0,\pm \pi, \pm2\pi, ...\}$; an even cosine swimmer is the one that has the number of waves $N \in \{1, 2, 3, ... \}$ and a phase angle $kX_{0}\in \{\pm\pi/2, \pm3\pi/2, ... \}$. \subsection{Pitching, drifting and reorientation} Unlike the swimming of an infinite length undulatory swimmer whose motion is steady and unidirectional, the locomotion of a finite filament may also experience net motion normal to the initial direction wave propagation direction, also referred to as drifting, and unsteady rotational motion, known as pitching. Here we characterize in GM the re-orientation of a finite swimmer that results in drifting, and the dependence of swimming performance on pitching motion, previously reported to diminish performance in viscous Newtonian media \cite{spagnolie2010optimal,koehler2012pitching}. \begin{figure*}[!htb] \centering \includegraphics[scale=1]{traj.pdf} \caption{\label{fig:traj}Trajectory of the head $\mathbf{x}(0,t)$ (black solid lines) and trajectory of the swimmer centroid (dotted lines) for swimming finite sinusoidal filaments with $N=1$ and $\epsilon =1$ that possess odd/even symmetry at $t=0$ in loosely packed GM. 
The filament swims towards the left when the wave propagates to the right. If the configuration possesses even symmetry it does not undergo a net reorientation.} \end{figure*} For an even symmetry filament in viscous fluids, Koehler \textit{et al}. \cite{koehler2012pitching} showed that the velocity of the center of mass is along the centerline of the waveform, hence the net drifting is zero. This argument relies on the kinematic reversibility of Stokes flow: reflection about the vertical line is equivalent to a time reversal (or reversing the direction of the actuation), so the instantaneous swimming is identical to the mirror reflection of its time-reversal, and the linearity requires the reverse of velocity due to time-reversal, thus one can show that the transverse component of the velocity is zero. As a result, the net displacement in one period for a filament starts with the even configuration is along the initial waveform centerline. Although the granular RFT is nonlinear, the aforementioned symmetry property ($\mathbf{u} \to -\mathbf{u} \Rightarrow \mathbf{f} \to -\mathbf{f}$, see Sec.~\ref{subsec:RFT}) means that the same argument for an even symmetry swimmer can be made in GM. Therefore, zero net transverse motion is achieved if the swimmer starts with an even symmetry, which is also corroborated by the numerical simulation. Fig.~\ref{fig:traj} shows the head trajectories of two swimming sinusoidal filaments with the same wave amplitude ($\epsilon=1$), one starts with even symmetry while the other starts with odd symmetry. The net displacement of the even cosine swimmer is in the negative $x$-direction, which is the opposite direction of the wave propagation at $t=0$. The odd sine swimmer, however, appears to be drifting upwards to the positive $y$-direction through time. \begin{figure*}[!htb] \centering \includegraphics[scale=1]{theta.pdf} \caption{\label{fig:theta}Parametric plots for the magnitude of the reorientation angle $\lvert\left< \theta \right>-\theta_0\rvert$ for a single wave ($N=1$) sinusoid in (a) loosely packed GM, (b) closely packed GM and (c) Newtonian fluids. (d): Plots of $\lvert\left< \theta \right>-\theta_0\rvert$ against the wave amplitude $\epsilon$ for the odd sine configuration in GM and Newtonian fluids. $\lvert\left< \theta \right>-\theta_0\rvert$ is periodic with a period of $\pi$.} \end{figure*} The swimming behavior presented in Fig. \ref{fig:traj} can be understood by examining the periodic instantaneous motion of the swimmer. In the moving frame, or the Lagrangian frame, the instantaneous motion of the swimmer can be viewed as being pulled through a waveform-shaped tube \cite{koehler2012pitching}. This motion, in turn, causes rotation and translation of the Lagrangian frame. The instantaneous rotation of the Lagrangian frame is described by $\theta(t)$, which is periodic due to the periodicity of the wave propagation. The average of $\theta(t)$ over one period, denoted as $\left< \theta\right>$, describes the average swimming direction. This angle $\left< \theta \right>$ is the same in every period which results in a straight line trajectory on average. If a filament, starts with an odd (even) configuration at $t =0$ (if aligned with the $x$-axis then $\theta_0=0$), it would possess even (odd) symmetry at $t = T/4$. Thus the filament alternates between even symmetry and odd symmetry after successive time steps of $T/4$. 
In this viewpoint, $\left<\theta\right>-\theta_0$ characterizes the amount of time $t_{1}$ required for the filament to reorient itself such that it reaches an even symmetry. After that, the swimmer would move in the direction of the waveform centerline at $t=t_{1}$. For a fixed number of waves $N$ and amplitude $\epsilon$, the odd configuration requires the largest amount of time ($T/4$) to reach an even symmetry, therefore has the largest angle of reorientation. Note that the angle of reorientation should be distinguished from pitching of the swimmer, which is the instantaneous rotation of the swimmer about its waveform centerline. In Fig.~\ref{fig:theta}, we present parametric plots of absolute value of the angle of reorientation $\lvert\left< \theta \right>-\theta_0\rvert$ by varying the wave phase angle $kX_{0}$ and the amplitude $\epsilon$ in both GM and viscous fluids. The number of waves is fixed as $N=1$, which approximates the shape of an undulating sandfish body \cite{maladen2009undulatory}. Note that a phase shift of $\pi$ would result in a reversal of the direction of the transverse motion, hence the sign of $\left<\theta\right>-\theta_0$. In both GM and Newtonian fluids, the maximum in $\lvert\left< \theta \right>-\theta_0\rvert$ is obtained when the filament possesses an odd symmetry at $t=0$, i.e., $kX_{0} \in\{ 0, \pi, 2\pi, ...\}$. For shapes that possess even symmetry, namely, $kX_{0} \in\{ \pi/2, 3\pi/2, ...\}$, zero transverse motion is observed. Within our parameter range, a maximum in $\lvert\left< \theta \right>-\theta_0\rvert$ is achieved around an intermediate value of the amplitude for a given phase angle. As an example, the variation of $\lvert\left< \theta \right>-\theta_0\rvert$ with the amplitude $\epsilon$ for the odd configuration is shown in Fig.~\ref{fig:theta}(d). The largest amount of reorientation of an odd swimmer is achieved when $\epsilon \approx 1-1.2$ in GM while $\epsilon \approx 2.2$ in viscous fluids. We also note that the angle of reorientation decreases with the increasing of wave amplitude in the large amplitude region ($\epsilon>2$). \begin{figure}[!tb] \centering \includegraphics[scale=1]{tm.pdf} \caption{\label{fig:tm} Maximum instantaneous pitching angle $\theta_{\textrm{mp}}$ as a function of the wave amplitude $\epsilon$ for single wave ($N=1$) sinusoidal swimmers in GM and Newtonian fluids. } \end{figure} Although the transverse motion of the even configuration is minimal, the instantaneous pitching, $\theta(t)-\left<\theta\right>$, which generally diminishes performance, can be significant. Multiple metrics have been used to characterize pitching of a swimmer \cite{koehler2012pitching,spagnolie2010optimal}, here we use the maximal amount of instantaneous pitching a swimmer can experience in one cycle of its motion $\theta_{\textrm{mp}} = \lvert\theta(t)-\left< \theta \right>\rvert_{\text{max}}$. Fig.~\ref{fig:tm} shows the maximal instantaneous pitching angle $\theta_{\textrm{mp}}$ for single wave sinusoidal swimmers in GM and Newtonian fluids. The maximal instantaneous pitching angle of a single wave sinusoid goes up to about $15^{\circ}$ in loosely packed GM while around $19^{\circ}$ in closely packed GM. The instantaneous pitching of the swimmer results in a tortuous motion with a net swimming speed smaller than that of an infinite sinusoid. For a fixed number of waves and wave amplitude, a phase shift only leads to a variation in the direction of swimming. 
In other words, the velocity magnitude $U$ is independent of $kX_{0}$ but the $x$ and $y$ components vary. From a control point of view, one can change the phase angle of an artificial sinusoidal swimmer to obtain the desired direction of swimming. \subsection{Swimming performance} The two typical metrics for swimming performance used in the literature are the dimensionless swimming speed $U/V$ and the swimming efficiency $\eta$, see Eq. (\ref{eq:effi}). For a sinusoidal swimmer, the performance depends on the dimensionless amplitude $\epsilon$ and the number of waves $N$. Note that the initial phase angle $kX_{0}$ does not affect the two performance metrics. The desired motion of a finite swimmer is its translation, therefore the optimization of a finite sinusoidal filament requires minimizing pitching. \begin{figure}[!tb] \centering \includegraphics[scale=1]{ungv.pdf} \caption{\label{fig:UNGV}Swimming speed $U/V$ as a function of the dimensionless amplitude $\epsilon$ for different number of waves $N$ in (a) loosely packed GM and (b) closely packed GM. The solid lines denote the swimming speed of an infinite sinusoid.} \end{figure} \begin{figure}[!tb] \centering \includegraphics[scale=1]{efi.pdf} \caption{\label{fig:efi}Swimming efficiency $\eta$ as a function of the dimensionless amplitude $\epsilon$ for different number of waves $N$ in (a) loosely packed GM and (b) closely packed GM. The shaded regions represent the observed values of $\epsilon$ for lizards reported in the literature \cite{maladen2009undulatory, ding2012mechanics}.} \end{figure} For an undulatory finite filament in viscous fluids, several studies have characterized the swimming performance and optimal strategies. Spagnolie and Lauga reported that the local maxima in swimming efficiency occur for around half-integer number of waves ($N \approx 3/2, 5/2, ...,$) when the bending cost is small \cite{spagnolie2010optimal}. Later studies by Koehler \textit{et al}. \cite{koehler2012pitching} and Berman \textit{et al}. \cite{berman2013undulatory} also showed that, for a sinusoidal swimmer, local maxima in performance are achieved for close to half-integer number of waves where pitching is small. We first verify that the swimming velocity (Fig.~\ref{fig:UNGV}) and efficiency (Fig.~\ref{fig:efi}) of a finite sinusoidal swimmer in GM both converge to that of an infinite sinusoidal swimmer as the number of waves $N$ increases. For a single wave sinusoid ($N=1$) in loosely packed GM, the optimal dimensionless amplitude that maximizes the efficiency is $\epsilon\approx 1.68$. As the number of waves increases, the optimal dimensionless amplitude approaches that of an infinite sinusoid ($\epsilon \approx 1.33$). Similar observations can be made for closely packed GM. We also observe that for a given dimensionless amplitude $\epsilon$, the difference in the swimming velocity (or efficiency) between a short swimmer ($N=1$) and an infinite swimmer can be associated with the pitching motion: the largest difference in swimming speed (or efficiency) between the $N=1$ and $N=\infty$ swimmers occurs in the region $\epsilon \approx 1$ in Figs.~\ref{fig:UNGV} and \ref{fig:efi}, which is also the region where pitching is the most significant (Fig.~\ref{fig:tm}). \begin{figure}[!tb] \centering \includegraphics[scale=1]{n.pdf} \caption{\label{fig:N} (a) Swimming speed as a function of the number of waves in GM. (b) Swimming efficiency as a function of the number of waves in GM. 
The dimensionless amplitude is fixed ($\epsilon=1$).} \end{figure} For a given waveform, the amount of pitching can be altered by changing the number of waves $N$. We investigate in Fig.~\ref{fig:N} the dependence of the performance metrics on the number of waves for a finite sinusoidal swimmer, keeping dimensionless amplitude fixed at $\epsilon=1$. Rather than approaching the swimming velocity (or efficiency) of the corresponding infinite sinusoid monotonically with increasing number of waves, the swimming speed and efficiency exhibit local maxima and minima. Similar to the Newtonian case, the local maxima in efficiency and swimming speed occur for the number of waves close to (but not equal) half-integers. The volume fraction of the GM has no significant influence on the number of waves where local maxima in swimming performance occur. As shown in Fig. \ref{fig:N}, the first local maximum in swimming performance for the number of waves greater than one occurs around $N\approx1.4$. The maxima in swimming performance are associated with minimal pitching as shown in Fig.~\ref{fig:tmp}. Finally we note that although both the first two local maxima have minimal pitching (Fig.~\ref{fig:tmp}), the swimmer with more number of waves ($N \approx 2.5$) still displays better swimming performance, which can be attributed to a smaller bobbing motion \cite{koehler2012pitching} (the relative motion of the center of mass of the swimmer to the net swimming direction) for the swimmer with more number of waves. \begin{figure}[!tb] \centering \includegraphics[scale=1]{tmp.pdf} \caption{\label{fig:tmp} Maximum instantaneous pitching angle as a function of the number of waves in GM. The dimensionless amplitude is fixed ($\epsilon=1$).} \end{figure} Finally, we relate our findings to biological observation; we show, in the shaded regions of Fig.~\ref{fig:efi}, the observed dimensionless amplitude (amplitude-to-wavelength ratio) for lizards reported in the literature ($\epsilon = 1.20-1.38$) \cite{maladen2009undulatory, ding2012mechanics}. We see in the case of both loosely-packed and closely-packed granular media, that the biologically observed range of wave amplitudes sample high efficiencies not far from optimal ($\epsilon \approx 1.69$ for LP and $\epsilon\approx 1.95$ for CP for $N=1$). Since the efficiency peak is broad, a swimmer may adopt a close-to-optimal shape at the expense of only a modest drop in swimming efficiency to address other constraints (such as bending costs or internal dissipation). \section{Conclusion} \label{sec:discussion} In this paper, we have investigated locomotion of slender filaments in granular media using a resistive force theory proposed by Maladen \textit{et al}. \cite{maladen2009undulatory}. While previous work focused on infinite swimmers (or 1-D swimming) in reality a swimmer has a finite size, which leads to more complex swimming motion. By taking into account full force and torque balances, a finite swimmer is no longer only confined to swim in a straight trajectory. The orientation of the swimmer can be controlled by adjusting the features of the waveform such as the amplitude, phase, and number of wavelengths, allowing a swimmer to move from an initial position to a final destination via a more complex, designated trajectory. These degrees of freedom enable the control of swimmers without the use of any external fields to actively steer the swimmer. 
Our studies characterize this complex swimming motion in granular media, which may be useful for the development of programmable and efficient autonomous locomotive systems in such environments, but also suggest that swimmers in nature are themselves closely tuned for optimality. We also find that undulatory locomotion of filaments in granular media is distinctly similar to that in viscous fluids. We compared a number of observations made for swimming in viscous fluids with RFT both for finite and infinite swimmers and found qualitatively similar behavior using granular resistive force theory despite the nonlinearity of the force law. The reason is largely down to two distinct similarities. The first, is that both laws are still local and thus ignore interactions of distinct parts of the body through the medium in which they swim. Ultimately this leads to finding that a sawtooth profile optimizes locomotion in both viscous fluids and granular media. The second, is that both force laws display the symmetry that $\mathbf{u}\to-\mathbf{u}$ results in $\mathbf{f}\to-\mathbf{f}$. This leads to a kinematic reversibility in both cases, where a reversal of the wave speed leads to an reversal of the translational and rotational motion of the swimmer, and hence a myriad of qualitatively similar behaviors that we have explored and quantified in the paper. \begin{acknowledgments} Funding (to GJE) from the Natural Science and Engineering Research Council of Canada (NSERC) is gratefully acknowledged. \end{acknowledgments} \newpage
1,108,101,563,776
arxiv
\section{Introduction} \textit{Graph} is a well studied and widely used notion in the realm of mathematics. \textit{Hypergraph}, a generalization of a graph, is also explored extensively. A hypergraph $G$ is an order pair of sets $(V,E)$, where $V (\neq\emptyset )$ is the set of vertices, and any element $e \in E$, called a hyperedge of $G$, is a nonempty subset of $V$. Thus a graph is a special case of hypergraph where $|e|=2$ for all $e\in E$. A singleton hyperedge is said to be a loop. In our work we consider hypergraphs without any loop, that is, $|e|\ge 2$, $\forall e \in E$. In this work, we are going to introduce some general notions of connectivity operators associated with hypergraphs. These notions are so exhaustive that can incorporate multiple conventional notions of connectivity matrices of hypergraphs. We provide constructions to determine eigenvectors and their eigenspaces of general connectivity operators associated with a class of hypergraphs. Since our approach pivot around some real-valued functions on the vertex set, using our methods, just looking at the structure of the hypergraphs one can determine the eigenvalues and their eigenspaces of the above-mentioned operators for some classes of hypergraphs. The last decade witnessed a revolution in hypergraph theory when different tensors or hypermatrices associated with hypergraphs are studied extensively in \cite{qi2017tensor, MR3598572,robeva2019duality,zhang2017some} and references therein. Despite promising progress, some aspects of spectral graph theory cannot be generalised to spectral hypergraph theory using tensors. The high computational complexity of the tensors associated with hypergraphs is another challenge in studying many spectral aspects of hypergraphs. Most tensor-related problems are NP-hard, as shown in \cite{MR3144915}. The alternative method for studying a hypergraph is to examine the underlying graph with appropriate weights. Different properties of a hypergraph are studied in terms of the spectra of different connectivity matrices associated with the underlying weighted graph of the hypergraph, see \cite{bretto2013hypergraph,rodriguez2003Laplacian,rodriguez2009Laplacian,MR4208993}. Since many significant properties (including the connectivity among the vertices) of a hypergraph are encrypted in the spectra of these matrices, they are generally referred to as the connectivity matrices of the hypergraph. In this article, we introduce some linear operators associated with a hypergraph which are generalization of some conventional notion of apparently different connectivity matrices associated with that hypergraph. In fact, here we attempt to unify some apparently different but similar concepts of connectivity matrices. Moreover, keeping in mind some applications of their special cases, we can predict some possible real-world applications of our introduced operators. Now we summarise the content of this article in brief. \Cref{basics} is devoted to introducing the general diffusion operator. In \Cref{prilims} we define some preliminary notions that we are going to use throughout the article. The general diffusion operator, one pivotal notion of this article, is introduced in \Cref{gen-diff-exm}. Some stimulating examples are included in this section. The spectra of the diffusion operator are studied in \Cref{eigdiffprop}. We provide eigenvalues of diffusion operator of hypergraph having some special property in \Cref{cute1}, \Cref{cute2}, \Cref{cute3}. 
We use one of the most natural approaches to analyze the eigenvalues of an operator. We exploit the eigenvectors of the operators in the above-mentioned theorems. We provide some results in \Cref{eigdiffprop} which facilitate us to find the eigenvalues and eigenvectors of some classes of diffusion operators of some types of hypergraphs simply from the structure of the hypergraphs. We calculate the eigenvalues and corresponding eigenspaces of diffusion operators of a class of hypergraphs in \Cref{spectra_ex}. We provide the complete spectra and corresponding eigenspaces of hyperflower hypergraph in \Cref{spectra_ex}.We show that the Laplacian operator is a constant multiple of the diffusion operator. Therefore, one can easily estimate the spectra of the Laplacian operator of hypergraphs from the same of the diffusion operators given in this section. In \Cref{bounds}, we investigate the spectral bounds of several hypergraph properties in terms of the spectra of the diffusion operator. We also derive spectral bounds for weak connectivity number, degree of vertices, maximum cut, bipartition width, isoperimetric constant of hypergraphs. In \Cref{adjacency}, we introduce the general adjacency operator. \Cref{nor-lap} is devoted to the normalized Laplacian. Some potential applications of our study are presented in \Cref{app}. \section{General diffusion operators of a hypergraph}\label{basics} Let $\mathbb{R}^{V}$ be the set of all real-valued functions on the vertex set $V$ and $\mathbb{R}^E$ denote the set of all real-valued functions on the set of all the hyperedges $E$. Suppose that $\mathbf{1}\in \mathbb{R}^V$ is defined by $\mathbf{1}(v)=1 $ for all $v\in V$. Let $\mathfrak{M}$ be a collection of linear operators on $\mathbb{R}^V$ such that for all $M\in \mathfrak{M}$, $\lim\limits_{t\to\infty}s(t)= c\mathbf{1}$ for some $c\in\mathbb{R}$, and where $s:\mathbb{R}\to \mathbb{R}^V$ is a solution of the differential equation $\dot{x}(t)=M(x(t))$. Diffusion processes end up with equality of concentration after the movement of substances from higher concentration to lower concentration. Therefore, $\lim\limits_{t\to\infty}s(t)= c\mathbf{1}$ can be interpreted as a diffusion under the action of the operator $ M$ and we refer any $M\in\mathfrak{M}$ as a diffusion operator. This section is devoted to finding a diffusion operator associated with a hypergraph. Now we recall some preliminaries related to hypergraphs. The corank, $cr(G)$ and rank, $rk(G)$ of a hypergraph, $G=(V,E)$, is defined by $cr(G)=\min\limits_{e\in E}|e|,$ and $ rk(G)=\max\limits_{e\in E}|e|.$ A hypergraph $G$ is called $m$-uniform hypergraph if $cr(G) = rk(G) = m$. Suppose that $v_0,v_l\in V$. A \textit{path $v_0-v_l$ of length $l$ connecting the vertices $v_0$ and $v_l$} in a hypergraph $G=(V,E)$ is an alternating sequence $v_0e_1v_1e_2v_2\ldots v_{l-1}e_lv_l$ of distinct vertices $v_0,v_1,\ldots,v_{l-1},v_l$ and distinct hyperedges $e_1,e_2,\ldots,e_l$, such that, $v_{i-1}, v_i \in e_i$ for all $i= 1, \dots, l$. The\textit{ distance, $d(u,v)$, between two vertices $u,v\in V$} is the minimum among the length of all paths connecting the vertices $u$ and $v$. The \textit{diameter}, $diam(G)$ of a hypergraph $G(V,E)$ is defined by $diam(G)=\max\limits_{u,v\in V}d(u,v)$. An \textit{weighted hypergraph} $G=(V,E,w)$ is a hypergraph with a function $w:E\to \mathbb{R}^{+}$, called the weight of the hyperedges. 
For an weighted hypergraph $G=(V,E,w)$, the \textit{degree of a vertex} $v\in V$ is defined by, $d(v)=\sum_{e\in E_v}w(e)$, where $E_v$ is the collection of all the hyperedges containing the vertex $v$. In \cite{bretto2013hypergraph}, $E_v$ is referred as the star centered in $v$. If the hypergraph is unweighted then $w(e)=1$ for all $e\in E$ and then $d(v)=|E_v|$. \subsection{The average operator and general signless Laplacian operator}\label{prilims} We consider $V$ and $E$ are two finite sets. Let $\delta_V:V\to\mathbb{R}^+$ and $\delta_E:E\to\mathbb{R}^+$ be two positive real-valued functions on the vertices and hyperedges, respectively. We define below inner products on $\mathbb{R}^{V}$ and $\mathbb{R}^E$. \begin{df} \begin{enumerate} \item ({Inner product on $\mathbb{R}^{V}$} ) Given $x,y\in$ $\mathbb{R}^{V}$, let $$(x,y)_V:=\sum\limits_{v\in V}\delta_V(v)x(v)y(v).$$ \item (Inner product on $\mathbb{R}^{E}$) Given $\beta,\gamma\in$ $\mathbb{R}^{E}$, let $$(\beta,\gamma)_E:=\sum\limits_{e\in E}\delta_E(e)\beta(e)\gamma(e).$$ \end{enumerate} \end{df} Now we define a function from $\mathbb{R}^{V}$ to $\mathbb{R}^{E}$, which will produce the average of any given real-valued function on $V$ on a given hyperedge $e$. \begin{df}[Average operator ] Given $x\in \mathbb{R}^{V}$, $e \in E$, the function $avg:\mathbb{R}^{V} \to \mathbb{R}^{E}$ is defined by $$(avg(x))(e):=\frac{\sum\limits_{v\in e}x(v)}{|e|}.$$ where $|e|$ is the cardinality of $e$. \end{df} Now we introduce the adjoint of $avg$. \begin{df}[Adjoint of the average operator] Given $\beta \in \mathbb{R}^{E} $, $ v\in V$ the function $avg^*:\mathbb{R}^{E}\to \mathbb{R}^{V}$ is defined by $$(avg^*(\beta))(v):=\sum\limits_{e\in E_v}\frac{\beta(e)}{|e|}\frac{\delta_E(e)}{\delta_V(v)}.$$ \end{df} Now we show that $avg^*:\mathbb{R}^{E}\to \mathbb{R}^{V}$ is the unique choice for being the adjoint of the average operator. \begin{prop} For any $x\in \mathbb{R}^V$ and any $\beta \in \mathbb{R}^E$, $(avg(x),\beta)_E=(x,avg^*(\beta))_V$. \end{prop} \begin{proof} \begin{align*} (avg(x),\beta)_E &=\sum\limits_{e\in E}\delta_E(e)(avg(x))(e)\beta(e)\\ &=\sum\limits_{e\in E}\delta_E(e)\beta(e)\frac{\sum\limits_{v\in e}x(v)}{|e|}\\ &=\sum\limits_{v\in V}\delta_V(v)x(v)\sum\limits_{e\in E_v}\frac{\beta(e)}{|e|}\frac{\delta_E(e)}{\delta_V(v)}\\ &=(x,avg^*(\beta))_V. \end{align*} \end{proof} Clearly, For all $x\in\mathbb{R}^{V}$ and $v\in V$ the expression of the function, $avg^*\circ avg:\mathbb{R}^{V}\to \mathbb{R}^{V}$ is $$ (avg^*\circ avg)(x)(v)=\sum\limits_{e\in E_v}\frac{(avg(x))(e)}{|e|}\frac{\delta_E(e)}{\delta_V(v)}.$$ From now onward, we denote the operator $avg^*\circ avg$ by $\mathcal Q$. Therefore, the operator $\mathcal Q:\mathbb{R}^{V}\to \mathbb{R}^{V}$ is defined by \begin{equation}\label{Q} \mathcal Q(x)(v)= (avg^*\circ avg)(x)(v)=\sum\limits_{e\in E_v}\frac{(avg(x))(e)}{|e|}\frac{\delta_E(e)}{\delta_V(v)}, \end{equation} for any $x\in\mathbb{R}^{V}$ and $v\in V$. \begin{rem}\label{remq} Now we have the following observations on $\mathcal Q$. \begin{itemize} \item [(1)] Evidently, $(x,\mathcal{Q}x)_V=(x,(avg^*\circ avg)x)_V=((avg(x),avg(x))_E\ge 0$. Therefore, $\mathcal{Q}$ is a positive semidefinite operator. Moreover, $\mathcal{Q} $ is self-adjoint since $(x,\mathcal{Q}y)_V=((avg(x),avg(y))_E=((avg(y),avg(x))_E=(y,\mathcal{Q}x)_V=(\mathcal{Q}x,y)_V $. 
\ \item [(2)] From \cref{Q} we have $ \mathcal{Q}(\mathbf{1})(v)=\sum\limits_{e\in E_v}\frac{(avg(\mathbf{1}))(e)}{|e|}\frac{\delta_E(e)}{\delta_V(v)}=\sum\limits_{e\in E_v}\frac{1}{|e|}\frac{\delta_E(e)}{\delta_V(v)}$. If $\sum\limits_{e\in E_v}\frac{1}{|e|}\frac{\delta_E(e)}{\delta_V(v)}=c$ for all $v\in V$ then $c$ is an eigenvalue with eigenvector $\mathbf{1}$. Moreover, if $\delta_E(e)=|e|$ and $\delta_V(v)=|E_v|$ for all $e\in E$ and $v\in V$, then $ \mathcal Q(\mathbf{1})(v)=\sum\limits_{e\in E_v}(avg(\mathbf{1}))(e)\frac{1}{|E_v|}=1$. Therefore, $1$ is an eigenvalue with eigenvector $\mathbf{1}$. \item [(3)] Consider $\delta_E(e)=|e|^2$ and $\delta_V(v)=1$. Then \begin{align*} \mathcal Q(x)(v)&=\sum\limits_{e\in E_v}\frac{(avg(x))(e)}{|e|}\frac{\delta_E(e)}{\delta_V(v)} \notag\\& =\sum\limits_{e\in E_v}\sum\limits_{u\in e}x(u)\\& =\sum\limits_{e\in E}B_{ve}\sum\limits_{u\in V}B_{ue}x(u)=((BB^T)x)(v), \end{align*} where $B=\left(B_{ue}\right)_{u\in V,e\in E}$ and $B_{ue}=1$ if $u\in e$ and otherwise $B_{ue}=0$. Therefore, $\mathcal{Q}$ becomes the operator associated with the signless Laplacian matrix $BB^T$, described in \cite[p. 1]{cardoso2019signless}. This motivates us to refer the operator $\mathcal{Q}$ as the general signless Laplacian of hypergraphs. \end{itemize} \end{rem} \subsection{The general diffusion operator} \label{gen-diff-exm} We define a function $n\in \mathbb{R}^{V}$ by $$n=\mathcal{Q}(\mathbf{1}).$$ Now we define the general diffusion function $L_G:\mathbb{R}^{V}\to\mathbb{R}^{V}$ by $$(L_G(x))(v)=\mathcal{Q}(x)(v)-n(v)x(v),$$ for all $x\in \mathbb{R}^{V}$, $v\in V$. Now onward, we denote $L_G$ by $L$, when there is no scope of confusion regarding the hypergraph $G$. \begin{nt}\label{gen} Different concepts of diffusion operators and Laplacian operators are available in the literature \cite{bretto2013hypergraph,rodriguez2003Laplacian,rodriguez2009Laplacian,MR4208993,banerjee2020synchronization}. In the following example, we show for the proper choices of $\delta_V$ and $\delta_E$, our notion of diffusion operator of hypergraph coincides with some existing notions of Laplacian operators and diffusion operators of hypergraphs. \end{nt} \begin{exm}\label{ex-diff-hy} \begin{enumerate}\label{ex-diff} \item \label{L1} If we take $\delta_V(v)=1$ and $\delta_E(e)=|e|^2$, then the operator $L$ becomes the negative of the Laplacian matrix, described in \cite{rodriguez2003Laplacian,rodriguez2009Laplacian}. \item \label{L2} If we choose $\delta_V(v)=1$ for all $v\in V$ and $\delta_E(e)=w(e)\frac{|e|^2}{|e|-1} $ then the operator $L$ becomes equal with the diffusion operator described in \cite{banerjee2020synchronization} for weighted hypergraphs. Moreover, for unweighted hypergraphs, i.e., when $w(e)=1$ for all $e\in E$, $L$ becomes the negative of the Laplacian operator defined in \cite{MR4208993}. \item \label{L3} When $\delta_V(v)=|E_v|$, and $\delta_E(e)=\frac{|e|^2}{|e|-1}$, $L$ becomes negative of the normalized Laplacian given in \cite{MR4208993}. \end{enumerate} \end{exm} The above examples motivate us to defined the general Laplacian operator $\mathfrak{L}$ associated with hypergraphs as, $\mathfrak{L}=-L$. Since, studying any one of the operators, $L$ and $\mathfrak{L}$, do the same for the other, from now, we focus on $L$. 
\begin{rem} Any result of this article involving any conditions on $\delta_E$ and $\delta_V$ on the diffusion operator, Laplacian operator, adjacency operator can be converted to a result on the operators given in \cite{MR4208993, banerjee2020synchronization,rodriguez2003Laplacian,rodriguez2009Laplacian} by choosing $\delta_E$, $\delta_V$ accordingly. Similarly, if one choose other $\delta_E$ and $\delta_V$ to incorporate different situations then all the results of this article can be converted to their framework by appropriately choosing $\delta_E$ and $\delta_V$. \end{rem} \section{Eigenvalues of the general diffusion operators of hypergraphs}\label{eigdiffprop} Since the map $f:\mathbb{R}^{V}\to \mathbb{R}^{V}$, defined by $(f(x))(v)=n(v)x(v)$, for all $x\in \mathbb{R}^{V},v\in \mathbb{R}^{V}$, is self-adjoint, and the operator, $\mathcal{Q}(x)$ is self-adjoint, thus $L$ is also self-adjoint. From the definition of $L$, it follows that \begin{align}\label{L} (Lx)(v)=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e}(x(u)-x(v)), \end{align} for all $x\in \mathbb{R}^V, v\in V$. For each hyperedge $e\in E$, suppose $Q_e$ is the incident matrix of a complete graph $K_{|e|}$ involving all the vertices in $e$ with a fixed orientation. So, $Q_e=\left\{{Q_e}_{rv}\right\}_{r\in \mathbb{N}_{\binom{|e|}{2}}, v\in V}$, where ${Q_e}_{rv}=-1$ if $v$ is the head of the $r$-th edge of the oriented $K_{|e|}$, ${Q_e}_{rv}=1$ if $v$ is the tail of the $r$-th edge of the oriented $K_{|e|}$, and ${Q_e}_{rv}=0$ otherwise. Here $\mathbb{N}_r$ is the collection of all the natural numbers $\le r$. It is easy to verify that for each $x\in \mathbb{R}^V$, $Lx=-(\sum\limits_{e\in E}\frac{\delta_E(e)}{|e|^2}\Delta_V^{-1}Q_e^tQ_e)x$, where $\Delta_V$ is a diagonal matrix of order $|V|$ such that $\Delta_V(v,v)=\delta_V(v)$ for all $v\in V$. \begin{prop}\label{nsdo} $L$ is negative semidefinite. \end{prop} \begin{proof} For any $x\in \mathbb{R}^{V}$, \begin{align} \label{nsdl} \notag (L(x),x)_V &=\sum\limits_{v\in V}\delta_V(v)[\mathcal{Q}(x)(v)-n(v)x(v))]x(v)\\\notag &=\sum\limits_{v\in V}\delta_V(v)[(avg^*\circ avg)(x)(v)-n(v)x(v))]x(v)\\ &=\sum\limits_{v\in V}\sum\limits_{e\in E_v}\frac{\delta_E(e)}{|e|}[(avg(x))(e)-x(v))]x(v). \end{align} The contribution of the hyperedge $e=\{v_1,v_2,\ldots,v_k\}$ in the sum of the \Cref{nsdl} \begin{align} \notag &=\sum\limits_{v\in e}\frac{\delta_E(e)}{|e|}[(avg(x))(e)-x(v))]x(v)\\ \notag &=\sum\limits_{i= 1}^k\frac{\delta_E(e)}{|e|^2}\left[\left\{\sum\limits_{j=1 }^k x(v_j)\right\}-|e|x(v_i))\right]x(v_i)\\ \notag &=-\frac{1}{2}\frac{\delta_E(e)}{|e|^2}\sum_{i,j=1}^k(x(v_i)-x(v_j))^2\le 0. \end{align} Thus the \Cref{nsdl} becomes, \begin{align}\label{nsdeold} (L(x),x)_V=-\sum_{e\in E} \frac{1}{2}\frac{\delta_E(e)}{|e|^2}\sum_{u,v\in e}(x(u)-x(v))^2\le 0. \end{align} Hence the proof follows. \end{proof} Note that in \Cref{nsdeold}, the term $(x(u)-x(v))^2$ appear twice in the sum $\sum\limits_{u,v\in e}(x(u)-x(v))^2$, first as $(x(u)-x(v))^2$ and then as $(x(v)-x(u))^2$. Therefore, the \Cref{nsdeold} can also be expressed as \begin{align}\label{nsde} (L(x),x)_V = -\sum_{e\in E} \frac{\delta_E(e)}{|e|^2}\sum_{\{u,v\}\subset e}(x(u)-x(v))^2. \end{align} \begin{prop}\label{evec1} $0$ is an eigenvalue of $L$ and if the hypergraph is connected then the eigenspace of $0$ is $\langle\mathbf{1}\rangle$, the vector space generated by $\mathbf{1}$. 
\end{prop} \begin{proof} Since, $L(\mathbf{1})(v)=0$ for all $v\in V$, $0$ is an eigenvalue of $L$ with an eigenvector $\mathbf{1}$. If $x$ belongs to the eigenspace of $L$ corresponding to the eigenvalue $0$ and the hypergraph is connected, then by \Cref{nsde} we have $x(u)=x(v)$ for all $u,v\in V$. Thus the proof follows. \end{proof} By \Cref{nsdo} and \Cref{evec1}, other than $0$ all the eigenvalues of $L$ are negative and for a connected hypergraph, the eigenspace of the eigenvalue $0$ is $\langle\mathbf{1}\rangle$. Hence, if $x(t)$ is a solution of the differential equation $$\dot{x}(t)=L(x(t))$$ then as $t\to \infty$, among all the components of decomposed vector $x(t)$ along the eigenvectors of $L$ only the component along $\mathbf{1}$ survives and rest of all tend to $0$. Thus, as $t\to \infty$, any solution of the given differential equation converge to the vector space $\langle\mathbf{1}\rangle$. Therefore, $L$ is a reasonable candidate for being the diffusion operator corresponding to a hypergraph on the space of all real-valued functions on the set of all the vertices, $V$. Suppose $|V|=N$. By \Cref{nsdo} and \Cref{evec1}, there exists a collections of non-negative reals $\{\lambda_i(G)\}_{i=1}^N$ (or simply $\{\lambda_i\}_{i=1}^N$ if there is no scope of confusion regarding the hypergraph) such that $-\lambda_i$ is an eigenvalue of $L$ for all $i\in \mathbb{N}_{N}$. Suppose the indices $i(\in \mathbb{N}_{N})$ are chosen in such a way that $\lambda_i\le\lambda_{i+1}$. By \Cref{evec1}, $\lambda_1=0$. The \textit{Rayleigh quotient} $R(-L,x)$ of $-L$ and nonzero $x(\in \mathbb{R}^V$) is $\frac{(-Lx,x)_V}{(x,x)_V}$. Since $L$ is self-adjoint, we can assume that $\{\mathbf{1},z_2,\ldots, z_N\}$ is the orthonormal basis of $\mathbb{R}^V$ consisting of the eigenfunctions of $L$ and $z_i(\in \mathbb{R}^V)$ is the eigenfunction of $L$ corresponding to the eigenvalue $\lambda_i$. The Rayleigh quotient reaches its minimum value $\lambda_1=0$ when $x=\mathbf{1}$, the eigenvector of $L$ corresponding to the eigenvalue $\lambda_1=0$. Moreover, $\lambda_2=\inf\limits_{\mathclap{x\in \langle\mathbf{1}\rangle^\perp-\{\mathbf{0}\}}}R(-L,x)$ and the Rayleigh quotient reaches the infimum value at $x=z_2$, the eigenvector of $L$, corresponding to the eigenvalue $\lambda_2$. The multiplicity of $0$ as an eigenvalue of the graph Laplacian is equal to the number of connected components of the graph. This is an well known result for the graphs. One can conclude the same for hypergraph Laplacian. The proof for hypergraph is almost same that works for graphs. \begin{prop} Multiplicity of the zero eigenvalue of the diffusion matrix $L$ of a hypergraph $G$ is equal to the number of connected components in $G$. \end{prop} \begin{proof} Suppose that the multiplicity of the zero eigenvalue is $k$. Let $S$ be the eigenspace of the eigenvalue $0$ of $L$. Let $(V_1,E_1)$ $,(V_2,E_2)$ $,\ldots, (V_k,E_k)$ be the $k$ components of the hypergraph $G$. Evidently, by \cref{nsde}, for all $z\in S$, \begin{equation}\label{equal} 0=(Lz,z)_V=-\sum_{e\in E} \frac{1}{2}\frac{\delta_E(e)}{|e|^2}\sum_{u,v\in e}(z(u)-z(v))^2. \end{equation} It is evident from \Cref{equal} that for all $z\in S$, $z$ is constant within each connected component of the hypergraph. Therefore, for all $z\in S$, there exists $z_1,z_2,\ldots,z_k\in \mathbb{R} $ such that $z(u)=z_i$ for all $u\in V_i$ where $i\in\{1,2,\ldots,k\}$. 
This association of each elements of $S$ to $k$ real numbers motivates us to define the linear map $\mathfrak{g}:S\to \mathbb{R}^k$ by $\mathfrak{g}(z):=(z_1,z_2,\ldots,z_k)$. Using \Cref{equal}, one can easily verify that $\mathfrak{g}$ is an isomorphism. therefore, the geometric multiplicity of $0$, which is the dimension of the eigenspace $S$ is exactly equal to $k$. Since $L$ is a self-adjoint operator, for any eigenvalue of $L$ the algebraic multiplicity is equal to the geometric multiplicity. Therefore, the number of components, $k$ is equal to the multiplicity of $0$ as an eigenvalue of $L$. \end{proof} Now we provide some results on the eigenvalues and eigenvectors of the diffusion operator of some classes of hypergraphs. The following results allow us to determine several eigenvalues of the same simply by looking at the hypergraphs. According to the definition of the diffusion operator, a hypergraph corresponds to a class of diffusion operator. More precisely, a hypergraph along with a particular choice of $(\delta_V,\delta_E)$ induces a diffusion operator.Therefore, naturally the eigenvectors and eigenvalues of the diffusion operator depends both on the structures of the hypergraphs and the choices of the inner products. In the following results, we determined the eigenvalues and the eigenvectors of the diffusion operator with two types of specifications- $i)$ the conditions on the structure of the hypergraphs specify the class of hypergraphs, and $ii)$ the conditions on $\delta_E,\delta_V$ specify the subclass of the diffusion operators. \begin{thm}\label{cute-new} If $G=(V,E)$ is a hypergraph such that \begin{itemize} \item [(i)] \label{cute-new-structure}$E_k=\{e_1,e_2,\ldots,e_k\}\subset E$ with $W=\bigcap\limits_{e\in E_k}e$ and $|W|\ge 2$, and $e\cap W=\emptyset$ for all $e\in E\setminus E_k$, \item[(ii)] \label{delta-cute-new}$\delta_V(v)=c$ for all $v\in W$, for some fixed $ \in\mathbb{R}$, \end{itemize} then $-\frac{1}{c}\sum\limits_{e\in E_k}\frac{\delta_E(e)}{|e|}$ is an eigenvalue of the diffusion operator $L$ and the dimension of the corresponding eigenspace is at least $|W|-1$. \end{thm} \begin{proof} Suppose that $W=\{v_0,v_1,\ldots,v_s\}$. Corresponding to each $v_i$, for all $i=1,2,\ldots,s$, we define $y_i\in \mathbb{R}^V$ as, $$ y_i(v)= \begin{cases} -1&\text{~if~} v=v_0\\ \phantom{-}1&\text{~if~} v=v_i\\ \phantom{-}0&\text{~otherwise.~} \end{cases} $$ We enlist below some crucial observations on $y_i$, for all $i=1,2,\ldots,s$. \begin{itemize} \item[(a)]If $v\in V\setminus W$ then $y_i(v)=0$. \item[(b)] If $e\notin E_k $ then $e\cap W=\emptyset$ and therefore, $y_i(v)=0$ for all $v\in e$. If $e\in E_k$ then from the definition of $y_i$ we have $\sum\limits_{v\in e}y_i(v)=0$. Therefore, $\sum\limits_{v\in e}y_i(v)=0$ for all $e\in E$. \item[(c)] Therefore, $ (Ly_i)(v)=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e}(y_i(u)-y_i(v))=-\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|}y_i(v)$. 
\item[(d)] Evidently, $E_v=E_k$ for all $ v\in W$, \end{itemize} Since $\delta_V(v)=c $ for all $v\in W$, then by the above observations we conclude that for all $i=1,2,\ldots,s$, \begin{align*} (Ly_i)(v) &= \begin{cases} -\sum\limits_{e\in E_k}\frac{\delta_E(e)}{c}\frac{1}{|e|}y_i(v) & \text{~if~} v\in W,\\ 0 & \text{~otherwise.~} \end{cases} \end{align*} Thus $(Ly_i)(v) = -\frac{1}{c}\sum\limits_{e\in E_k}\frac{\delta_E(e)}{|e|}y_i(v).$ Therefore, $-\frac{1}{c}\sum\limits_{e\in E_k}\frac{\delta_E(e)}{|e|} $ is an eigenvalue of $L$ with the eigenvectors $y_1,y_2,\ldots, y_s$, respectively. Since $$(\sum\limits_{i=1}^sc_iy_i)(v)= \begin{cases} -\sum\limits_{i-1}^s c_i&\text{~if~} v=v_0,\\ \phantom{-}c_i&\text{~if~} v=v_i,\\ \phantom{-}0&\text{~otherwise,~} \end{cases}$$ for $c_1,c_2,\ldots,c_s\in \mathbb{R}$, $ (\sum\limits_{i=1}^sc_iy_i)=0$ implies $c_i=0$ for all $i=1,2,\ldots, s$. Therefore, $\{y_1,y_2,\ldots, y_s\}$ is linearly independent and the dimension of the eigenspace of the above mentioned eigenvalue is at least $s$. \end{proof} We provide below some examples related to the above result. \begin{nt} \begin{enumerate} \item Let us recall \Cref{ex-diff}(\ref{L1}). If we put $\delta_V(v)=1$ and $\delta_E(e)=|e|^2$, then the diffusion operator $L$ becomes the negative of the Laplacian matrix, described in \cite{rodriguez2003Laplacian,rodriguez2009Laplacian}. In this case, $\delta_V$ is constant function and $\sum\limits_{e\in E_k}\frac{\delta_E(e)}{|e|}=\sum\limits_{e\in E_k}|e|$ is always a constant and thus, the condition (ii) of \Cref{cute-new} holds trivially and not required to be mentioned in this case. That is, in this case, $-\sum\limits_{e\in E_k}|e|$ is an eigenvalue of the diffusion operator and $\sum\limits_{e\in E_k}|e|$ is an eigenvalue of the Laplacian operator with the multiplicity $|W|-1$. \item In \Cref{ex-diff}(\ref{L2}) we have seen, for $\delta_V(v)=1$ and $\delta_E(e)=\frac{|e|^2}{|e|-1}$ the diffusion operator becomes the negative of the Laplacian operator mentioned in \cite{MR4208993}. In this case $\sum\limits_{e\in E_k}\frac{\delta_E(e)}{|e|}=\sum\limits_{e\in E_k}\frac{|e|}{|e|-1}$ is always a constant and therefore, also for this particular diffusion operator the condition $(ii)$ always holds. Evidently, here the Laplacian eigenvalue is $\sum\limits_{e\in E_k}\frac{|e|}{|e|-1}$ with the multiplicity $|W|-1$. \item Recall \Cref{ex-diff-hy}(\ref{L3}). If $\delta_V(v)=|E_v|$, and $\delta_E(e)=\frac{|e|^2}{|e|-1}$ then the diffusion operator becomes the negative of the normalized Laplacian described in \cite[Equatioin-14]{MR4208993}. Note that in \Cref{cute-new}, $E_v=E_k$ for all $v\in W$ and thus, $\delta_V(v)=|E_k|=k$ for all $v\in W$. Therefore, by \Cref{cute-new}, $\frac{1}{k}\sum\limits_{e\in E_k}\frac{|e|}{|e|-1}$ is an eigenvalue of the normalized Laplacian with the multiplicity $|W|-1$. \end{enumerate} \end{nt} \begin{exm} Consider the hypergraph $H(V,E)$ where $V=[20]=\{n\in \mathbb N:n\le 20\}$ and $E=\{e_1=\{1, 2,3,4\},e_2=\{1,2, 5,6,7\},e_3=\{1,2,8,9,10\},e_4=\{1,2,11,12,13,14\},e_5=\{1,2,15,16,17,18,19,20\}\}$. Since $|W|=\bigcap\limits_{i=1}^5e_i=\{1,2\}$, we have the followings. \begin{enumerate} \item In the framework of \cite{MR4208993}, one eigenvalue of the Laplacian of $H$ is $ \sum\limits_{i=1}^5\frac{|e_i|}{|e_i|-1}=\frac{1297}{210}$ and an eigenvalue of the normalized Laplacian matrix of $H$ is $\frac{1}{5}\sum\limits_{e\in E_k}\frac{|e|}{|e|-1}=\frac{1297}{1050}$. 
\item In the framework of \cite{rodriguez2003Laplacian,rodriguez2009Laplacian,bretto2013hypergraph}, one eigenvalue of the Laplacian of $H$ is $ \sum\limits_{i=1}^5|e_i|=28$. \end{enumerate} \end{exm} \begin{cor}\label{cute1} If $G=(V,E)$ is a hypergraph satisfying the following conditions \begin{itemize} \item [(i)] the intersection of all the hyperedges contains at least two vertices, that is, $|\bigcap\limits_{e\in E}e|\ge2$, \item [(ii)]the function $\delta_V$ is constant on $\bigcap\limits_{e\in E}e$, that is, there exists $c\in \mathbb{R}^+$ such that $\delta_V(v)=c$ for all $v\in \bigcap\limits_{e\in E}e$, \end{itemize} then $-\sum\limits_{e\in E}\frac{\delta_E(e)}{c}\frac{1}{|e|}$ is an eigenvalue of the diffusion operator $L$ and the dimension of the corresponding eigenspace is at least $|\bigcap\limits_{e\in E}e|-1$. \end{cor} \begin{proof}This result directly follows from the \Cref{cute-new} \end{proof} \begin{thm}\label{cute2} Let $G=(V,E)$ be a hypergraph. Suppose that there exists an hyperedge, $e_0\in E$, such that \begin{enumerate} \item[(i)] $e_0=e_u\cup e_v$ where $e_u\cap e_v=\emptyset$, \item[(ii)] $|e_u|\ge 2$, \item[(iii)]{\label{con-3}} $e\cap e_u=\emptyset$ for all $e(\neq e_0)\in E$. \end{enumerate} If $\delta_V(v)=c$ for all $v\in e_u$ then $-\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|}$ is an eigenvalue of the diffusion operator $L$ of the hypergraph $G$ with multiplicity at least $|e_u|-1$. \end{thm} \begin{proof} Suppose that $ e_u=\{u_0,u_1,\ldots, u_k\}$. For each $u_i $, $i=1,2,\ldots, k$, we define $y_i\in \mathbb{R}^V$ by $$ y_i(v)= \begin{cases} \phantom{-}1 \text{~if ~} v=u_i,\\ -1\text{~if~} v=u_0, \\ \phantom{-}0 \text{~otherwise.~} \end{cases} $$ Now we have the following observations on $y_i$ for $i=1,2,\ldots,k$. \begin{itemize} \item [(a) Evidently, $\sum\limits_{u\in e}y_i(u)=0$ for all $e\in E$ and $i=1,2, \ldots,k$ because if $e\ne e_0$ then $y_i(v)=0$ for all $v\in e$ and $\sum\limits_{u\in e}y_i(u)=1-1=0$. Thus, for all $v\in V$, one has by \Cref{L} \begin{align*} (Ly_i)(v &=-\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|}y_i(v). \end{align*} \item[(b)] For all $v\in e_u$, $E_v=\{e_0\}$. Thus, for all $v\in e_u$, we have \begin{align*} (Ly_i)(v) =-\frac{\delta_E(e_0)}{\delta_V(v)}\frac{1}{|e_0|}y_i(v). \end{align*} \end{itemize} Since $\delta_V(v)=c$ for all $v\in e_u$, by the above observations one has \begin{align} (Ly_i)(v &=\begin{cases} -\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|}y_i(v) & \text{~if~}v \in e_u,\\ 0&\text{~otherwise~}. \end{cases}\notag \end{align} Thus $(Ly_i)(v) = -\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|}y_i(v).$ Therefore, $-\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|}$ is an eigenvalue of $L$ with the eigenvectors $y_1,y_2,\ldots,y_k$, respectively. Note that, $$\left(\sum\limits_{i=1}^kc_iy_i\right)(v)= \begin{cases} -\sum\limits_{i=1}^kc_i & \text{~if~} v=u_0,\\ c_i & \text{~if~}v=v_i,\\ 0&\text{~otherwise~.} \end{cases}$$ Therefore, $\sum\limits_{i=1}^kc_iy_i=0$ leads to $c_i=0$ for all $i=1,2,\ldots,k$ and $\{y_1,y_2,\ldots, y_k\}$ is a linearly independent subset of the eigenspace of $-\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|} $. This proves that the multiplicity of the eigenvalue $-\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|}$ is at least $k$. \end{proof} \begin{nt}\label{petal_lap_nt} \begin{enumerate} \item\label{norm_lap_petal} Using conditions of \Cref{cute2}, one has $E_v=\{e_0\}$ for all $v\in e_u$. Recall \Cref{ex-diff-hy}(\ref{L3}). 
If $\delta_V(v)=|E_v|$ and $\delta_E(e)=\frac{|e|^2}{|e|-1}$, then the diffusion operator becomes the negative of the normalized Laplacian described in \cite[Equation-14]{MR4208993}. Since $\delta_V(v)=|E_v|=1$ for all $v\in e_u$, by \Cref{cute2}, $\frac{|e_0|}{|e_0|-1}$ is an eigenvalue of the normalized Laplacian with multiplicity at least $|e_u|-1$. Moreover, according to \cite{MR4208993}, the normalized Laplacian matrix described in \cite[Equation-16]{MR4208993} is similar to that of \cite[Equation-14]{MR4208993}. Therefore, both matrices have the eigenvalue $\frac{|e_0|}{|e_0|-1}$ with multiplicity at least $|e_u|-1$. \item In \Cref{ex-diff}(\ref{L1}), for $\delta_V(v)=1$ and $\delta_E(e)=|e|^2$, the diffusion operator $L$ becomes the negative of the Laplacian matrix described in \cite{rodriguez2003Laplacian,rodriguez2009Laplacian,bretto2013hypergraph}. In this case, $\delta_V$ is a constant function and $\frac{\delta_E(e_0)}{c|e_0|}=|e_0|$; thus the eigenvalue of the Laplacian matrix becomes $|e_0|$. \item In \Cref{ex-diff}(\ref{L2}), we have seen that, for $\delta_V(v)=1$ and $\delta_E(e)=\frac{|e|^2}{|e|-1}$, the diffusion operator becomes the negative of the Laplacian operator mentioned in \cite{MR4208993}. Since, here, $\frac{\delta_E(e_0)}{c|e_0|}=\frac{|e_0|}{|e_0|-1}$, the eigenvalue of the diffusion operator becomes $-\frac{|e_0|}{|e_0|-1}$ and the eigenvalue of the Laplacian matrix is $ \frac{|e_0|}{|e_0|-1}$. \end{enumerate} \end{nt} \begin{exm}\sloppy Consider a hypergraph $H(V,E)$ with $V=[11]=\{n\in \mathbb{N}:n\le11\}$ and $E=\{e_1=\{1,2,3,4,5\},e_2=\{4,5,6,7,10,11\},e_3=\{6,7,8,9\},e_4=\{8,9,10,11\}\}$. Since $W=\{1,2,3\}\subset e_1$ and $W\cap e=\emptyset$ for all $e(\neq e_1)\in E$, we have the following. \begin{enumerate} \item In the framework of \cite{MR4208993}, an eigenvalue of the Laplacian of $H$ is $ \frac{|e_1|}{|e_1|-1}=\frac{5}{4}$, with multiplicity at least $|W|-1=2$. Moreover, by \Cref{petal_lap_nt}(\ref{norm_lap_petal}), $ \frac{|e_1|}{|e_1|-1}=\frac{5}{4}$ is also an eigenvalue of the normalized Laplacians described in \cite{MR4208993}. \item In the framework of \cite{rodriguez2003Laplacian,rodriguez2009Laplacian,bretto2013hypergraph}, one eigenvalue of the Laplacian of $H$ is $ |e_1|=5$. \end{enumerate} \end{exm}
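As a quick numerical check of \Cref{cute2} on this example, the following informal Python/NumPy sketch (all names are ours) assembles the Laplacian of \cite{MR4208993} directly from its defining weights and inspects the spectrum:
\begin{verbatim}
import numpy as np

E = [{1, 2, 3, 4, 5}, {4, 5, 6, 7, 10, 11}, {6, 7, 8, 9}, {8, 9, 10, 11}]
V = sorted(set().union(*E))
idx = {v: i for i, v in enumerate(V)}

# Laplacian of [MR4208993]: delta_V = 1, delta_E(e) = |e|^2/(|e|-1),
# so each hyperedge e contributes with weight delta_E(e)/|e|^2 = 1/(|e|-1).
L = np.zeros((len(V), len(V)))
for e in E:
    k = len(e)
    w = 1.0 / (k - 1)
    for v in e:
        for u in e:
            L[idx[v], idx[u]] += w
        L[idx[v], idx[v]] -= w * k

print(np.round(np.sort(np.linalg.eigvalsh(-L)), 4))
# 1.25 (= 5/4) appears twice, matching multiplicity at least |W| - 1 = 2
\end{verbatim}
Replacing the weight $1/(|e|-1)$ by $1$ (that is, taking $\delta_E(e)=|e|^2$) reproduces the eigenvalue $|e_1|=5$ of the Laplacian of \cite{rodriguez2003Laplacian,rodriguez2009Laplacian}.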
\begin{thm}\label{cute3} Suppose that $G=(V,E)$ is a hypergraph and $E_0=\{e_0,e_1,\ldots,e_k\}\subset E$ is such that \begin{itemize} \item [(i)] $W=\bigcap\limits_{i=0}^ke_i\neq \emptyset$, and $W\cap e=\emptyset$ for all $e\in E\setminus E_0$, \item[(ii)] for all $i=0,1,\ldots,k$ there exists an $F_i\subset V$ such that $e_i=W\cup F_i$ with $|F_i|=t$ and $F_i\cap W=\emptyset$, \item[(iii)] $F_i\cap e=\emptyset$ for all $e(\ne e_i)\in E$, \item[(iv)] there exist $c,\omega\in \mathbb{R}$ such that $\delta_V(v)=c$ for all $v\in \bigcup\limits_{i=0}^kF_i$, and $\frac{\delta_E(e)}{|e|^2}=\omega$ for all $e\in E_0$. \end{itemize} Then $-\frac{\omega}{c}|W|$ is an eigenvalue of $L$ with multiplicity at least $|E_0|-1$. \end{thm} \begin{proof} We define $y_i\in \mathbb{R}^V$ for all $i=1,2,\ldots,k$, as $$ y_i(v)= \begin{cases} -1&\text{~if~} v\in F_0,\\ \phantom{-}1&\text{~if~}v\in F_i,\\ \phantom{-}0&\text{~otherwise.~} \end{cases} $$ Now, we consider the following cases to prove the result. \begin{itemize} \item [(a)] For $v\in F_j$, one has $E_v=\{e_j\}$ for any $j=0,1,\ldots,k$. Therefore, \Cref{L} becomes $(Ly_i)(v)=\frac{\delta_E(e_j)}{c|e_j|^2}\sum\limits_{u\in e_j}(y_i(u)-y_i(v))=\frac{\delta_E(e_j)}{c|e_j|^2}\sum\limits_{u\in W}(y_i(u)-y_i(v))=-\frac{\omega}{c}|W|y_i(v)$. \item[(b)] For $v\in W $, clearly $E_v=E_0$ and $y_i(v)=0$. Thus, $(Ly_i)(v)=\sum\limits_{e\in E_0}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e}(y_i(u)-y_i(v))=\sum\limits_{j=0}^k\frac{\delta_E(e_j)}{\delta_V(v)}\frac{1}{|e_j|^2}\sum\limits_{u\in e_j}y_i(u)=\frac{\delta_E(e_i)}{\delta_V(v)}\frac{1}{|e_i|^2}t-\frac{\delta_E(e_0)}{\delta_V(v)}\frac{1}{|e_0|^2}t=\frac{\omega}{\delta_V(v)}(t-t)=0$. \item[(c)] For $v\in V\setminus\left(W\cup\left(\bigcup\limits_{i=0}^kF_i\right)\right)$, one has $\sum\limits_{u\in e}(y_i(u)-y_i(v))=0$ for all $e\in E_v$. Therefore, $(Ly_i)(v)=0$. \end{itemize} Therefore, $-\frac{\omega}{c}|W|$ is an eigenvalue of $L$. Since $$(\sum\limits_{i=1}^kc_iy_i)(v)= \begin{cases} -\sum\limits_{i=1}^kc_i &\text{~if~} v\in F_0,\\ c_i &\text{~if~} v\in F_i,\\ 0 &\text{~otherwise,} \end{cases}$$ we have $\sum\limits_{i=1}^kc_iy_i=0$ if and only if $c_i=0$ for all $i=1,2,\ldots,k$. So, $y_1,y_2,\ldots,y_k$ are linearly independent and the dimension of the eigenspace of the eigenvalue $-\frac{\omega}{c}|W|$ of $L$ is at least $k$. Therefore, the multiplicity of the eigenvalue $-\frac{\omega}{c}|W|$ is at least $k=|E_0|-1$. \end{proof} \begin{nt} \begin{enumerate} \item In \Cref{ex-diff}(\ref{L1}), for $\delta_V(v)=1$ and $\delta_E(e)=|e|^2$, the diffusion operator $L$ becomes the negative of the Laplacian matrix described in \cite{rodriguez2003Laplacian,rodriguez2009Laplacian}. In this case, $\delta_V$ is a constant function and $\frac{\delta_E(e)}{|e|^2}=1$ for all $e\in E$. Thus the condition (iv) of \Cref{cute3} holds trivially. Therefore, $|W|$ becomes an eigenvalue of the Laplacian with multiplicity at least $|E_0|-1$. \item In \Cref{ex-diff}(\ref{L2}) we have seen that, for $\delta_V(v)=1$ and $\delta_E(e)=\frac{|e|^2}{|e|-1}$, the diffusion operator becomes the negative of the Laplacian operator mentioned in \cite{MR4208993}. Here one can also verify easily that the condition (iv) of \Cref{cute3} holds if all the hyperedges in $E_0$ are of the same cardinality. Therefore, in this case $\frac{1}{|e|-1}|W|$ is an eigenvalue with multiplicity at least $|E_0|-1$. \item Note that for all $i=0,1,\ldots,k$, if $v\in F_i$ then $E_v=\{e_i\}$. Therefore, $|E_v|=1$ for all $v\in \bigcup\limits_{i=0}^kF_i$. Let us recall \Cref{ex-diff-hy}(\ref{L3}). Now if $\delta_V(v)=|E_v|$ and $\delta_E(e)=\frac{|e|^2}{|e|-1}$, then the diffusion operator becomes the negative of the normalized Laplacian described in \cite[Equation-14]{MR4208993}. In this framework, $\delta_V(v)=|E_v|=1$ for all $v\in \bigcup\limits_{i=0}^kF_i$ and thus, by \Cref{cute3}, $\frac{1}{|e|-1}|W|$ is an eigenvalue of the normalized Laplacian matrix with multiplicity at least $|E_0|-1$. \end{enumerate} \end{nt} We provide an application of the above result in \Cref{spectra_ex}. \subsection{Spectra of the Diffusion Operator of Some Specific Hypergraphs}\label{spectra_ex} Now we recall some definitions of special types of hypergraphs from \cite{MR3116407,andreotti2020spectra} and derive the eigenvalues of their diffusion operators. \begin{df}[Cored vertex]\cite[Definition 2.3]{MR3116407} Suppose that $G(V,E)$ is a hypergraph. If, for every $e\in E$, there exists $v_e\in e$ such that $v_e\notin e_j$ for all $e_j(\neq e)\in E$, then $G$ is called a cored hypergraph.
A vertex with degree one is referred to as a cored vertex, and a vertex with degree greater than one is referred to as an \textit{intersectional} vertex. \end{df} According to \cite{andreotti2020spectra}, if a hyperedge has only one cored vertex then that cored vertex is called a \textit{pendant vertex}. Moreover, two vertices $u,v$ of a hypergraph are \textit{twins} if they belong to exactly the same hyperedge(s). Note that in \Cref{cute2}, all the elements of $e_u$ are cored vertices and any pair of vertices in $e_u$ are twins. Moreover, \Cref{cute2} can be applied whenever there exist at least two cored vertices which are twins. In \Cref{cute3} with $t=1$, each $F_i$ consists of a single cored vertex $u_i$; that is, each $u_i$ is a pendant vertex and, other than $u_i$, all the vertices in $e_i$ are intersectional. In \Cref{cute1}, the condition (i) can be restated as: there exists at least one pair of twin vertices belonging to all the hyperedges. Now we are going to apply our results to determine the eigenvalues of some classes of hypergraphs that have cored vertices, twin vertices, and intersectional vertices. \subsubsection{Complete Spectra of the Diffusion Operator of Hyperflowers} \begin{df}[Hyperflowers]\cite{andreotti2020spectra} \label{hyperflower} A $(l,r)$-hyperflower with $t$-twins is a hypergraph $G=(V,E)$ where $V$ can be expressed as the disjoint partition $V=U\cup W$ with the following properties. \begin{itemize} \item[(a)] The set $U$ can be partitioned into disjoint $t$-element sets as $U=\bigcup\limits_{i=1}^l U_i$. That is, $|U_i|=t$, $U_i=\{u_{is}\}_{s=1}^t$ for all $i=1,2,\ldots,l$, and $U_i\cap U_j=\emptyset$ for all $i\neq j$ and $i,j=1,\ldots,l$. \item[(b)] There exist $r$ disjoint sets of vertices $e_1,\ldots,e_r\in \mathcal{P}(W)$, the power set of $W$, such that $W=\bigcup\limits_{j=1}^re_j$ and $E=\{e_{ki}:e_{ki}=e_k\cup U_i,k=1,2,\ldots,r;i=1,2,\ldots,l\}$. \end{itemize} If $v\in U$, then $v$ is called a peripheral vertex. \end{df} Suppose that $E_k=\{e_{ki}\}_{i=1}^l$ for any $k=1,2,\ldots,r$. Evidently, $e_k=\bigcap\limits_{e\in E_k}e$ and $e\cap e_k=\emptyset$ for all $e\in E\setminus E_k$. Therefore, by \Cref{cute-new}, if $\delta_V(v)=c_k$ and $\sum\limits_{e\in E_v}\frac{\delta_E(e)}{|e|}=\mu_k$ for all $v\in e_k$, then $-\frac{\mu_k}{c_k}$ is an eigenvalue of the diffusion operator $L_G$ with eigenspace of dimension at least $|e_k|-1$, for each $k$.
\begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.25] \begin{pgfonlayer}{nodelayer} \node [style=blackfilled] (0) at (-8, 7.25) {}; \node [style=blackfilled] (1) at (-6.5, 8.5) {}; \node [style=blackfilled] (2) at (-9.5, 8.5) {}; \node [style=blackfilled] (3) at (-6.5, 5.75) {}; \node [style=blackfilled] (4) at (-9.25, 5.75) {}; \node [style=blackfilled] (5) at (-10, 7) {}; \node [style=blackfilled] (6) at (-6, 7) {}; \node [style=blackfilled] (7) at (-8, 9) {}; \node [style=blackfilled] (8) at (-8, 5.25) {}; \node [style=blackfilled] (9) at (7, 7) {}; \node [style=blackfilled] (10) at (8.5, 8.5) {}; \node [style=blackfilled] (11) at (5.5, 8.5) {}; \node [style=blackfilled] (12) at (8.5, 5.5) {}; \node [style=blackfilled] (13) at (5.5, 5.5) {}; \node [style=blackfilled] (14) at (5, 7) {}; \node [style=blackfilled] (15) at (8.75, 7) {}; \node [style=blackfilled] (16) at (7, 9) {}; \node [style=blackfilled] (17) at (7, 5) {}; \node [style=blackfilled] (18) at (7, -7) {}; \node [style=blackfilled] (19) at (8.5, -5.5) {}; \node [style=blackfilled] (20) at (5.5, -5.5) {}; \node [style=blackfilled] (21) at (8.5, -8.5) {}; \node [style=blackfilled] (22) at (5.5, -8.5) {}; \node [style=blackfilled] (23) at (5, -7) {}; \node [style=blackfilled] (24) at (9, -7) {}; \node [style=blackfilled] (25) at (7, -5) {}; \node [style=blackfilled] (26) at (7, -9) {}; \node [style=blackfilled] (27) at (-8, -7) {}; \node [style=blackfilled] (28) at (-6.5, -5.5) {}; \node [style=blackfilled] (29) at (-9.5, -5.5) {}; \node [style=blackfilled] (30) at (-6.5, -8.5) {}; \node [style=blackfilled] (31) at (-9.5, -8.5) {}; \node [style=blackfilled] (32) at (-10, -7) {}; \node [style=blackfilled] (33) at (-6, -7) {}; \node [style=blackfilled] (34) at (-8, -5) {}; \node [style=blackfilled] (35) at (-8, -9) {}; \node [style=black] (36) at (0, 0) {}; \node [style=black] (37) at (-1.5, 1.5) {}; \node [style=black] (38) at (1.5, -1.25) {}; \node [style=black] (39) at (1.5, 1.5) {}; \node [style=black] (40) at (-1.5, -1.25) {}; \node [style=none] (41) at (-10, 9) {}; \node [style=none] (42) at (-6, 9) {}; \node [style=none] (43) at (-10, 5.25) {}; \node [style=none] (44) at (-2, -2) {}; \node [style=none] (45) at (2, 2) {}; \node [style=none] (46) at (2, -1.75) {}; \node [style=none] (47) at (9, 9) {}; \node [style=none] (48) at (-2, 2) {}; \node [style=none] (49) at (4.75, 9) {}; \node [style=none] (50) at (9, 5) {}; \node [style=none] (51) at (-10, -5) {}; \node [style=none] (52) at (-10, -9) {}; \node [style=none] (53) at (-6, -9) {}; \node [style=none] (54) at (5, -9) {}; \node [style=none] (55) at (9, -9) {}; \node [style=none] (56) at (9, -5) {}; \node [style=none] (57) at (-0.25, 2.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=15] (48.center) to (51.center); \draw [bend left=15] (46.center) to (53.center); \draw [bend left=15] (48.center) to (45.center); \draw [bend left=15] (45.center) to (46.center); \draw [bend right] (51.center) to (52.center); \draw [bend right] (52.center) to (53.center); \draw [bend left] (48.center) to (45.center); \draw [bend left=15] (45.center) to (56.center); \draw [bend left] (56.center) to (55.center); \draw [bend left] (55.center) to (54.center); \draw [bend left=15] (54.center) to (44.center); \draw [bend left=15] (44.center) to (48.center); \draw [bend right] (41.center) to (43.center); \draw [bend right=15] (43.center) to (44.center); \draw [bend right=15, looseness=1.25] (44.center) to (46.center); \draw [bend right=45, looseness=0.75] (46.center) to 
(45.center); \draw [bend right=15] (45.center) to (42.center); \draw [bend right] (42.center) to (41.center); \draw [bend right] (47.center) to (49.center); \draw [bend right=15] (49.center) to (48.center); \draw [bend right] (48.center) to (44.center); \draw [bend right] (44.center) to (46.center); \draw [bend right=15] (46.center) to (50.center); \draw [bend right] (50.center) to (47.center); \end{pgfonlayer} \end{tikzpicture} \caption{$(4,1)$-hyperflower with $9$ twins.} \label{fig:hyperflower} \end{figure} In our study the hyperflowers with $r=1$ are interesting because, in this case, each peripheral vertex is a cored vertex of the hyperflower. We summarize below some crucial observations on the $(l,1)$-hyperflower with $t$ twins (see \Cref{fig:hyperflower}); a numerical illustration follows the list. Since $r=1$, one has $W=e_1$ and $\bigcap\limits_{e\in E} e=W$. \begin{itemize} \item[(1)] Note that $e_1=W= \bigcap\limits_{e\in E}e$. Therefore, if $|e_1|\ge 2$ and $\delta_V(v)=c_0$ for all $v\in e_1$, then by \Cref{cute1} one can conclude that $-\sum\limits_{e\in E}\frac{\delta_E(e)}{c_0}\frac{1}{|e|}$ is an eigenvalue of the diffusion operator $L$ associated with the $(l,1)$-hyperflower with $t$ twins, with multiplicity at least $|e_1|-1=|W|-1$. \item[(2)] Since, for any hyperedge, all the peripheral vertices belonging to the hyperedge are cored vertices and any two of them are twins, by \Cref{cute2}, if $t\ge 2$ and $\delta_V(v)=c_i$ for all $v\in U_i$, then each hyperedge $e_{1i}$ of the hyperflower corresponds to an eigenvalue $-\frac{\delta_E(e_{1i})}{c_i}\frac{1}{|e_{1i}|}$ of $L$ with multiplicity at least $t-1$. \item[(3)] Note that if $t\ge 2$ then the total number of vertices in the $(l,1)$-hyperflower with $t$ twins is $|V|=lt+|e_1|$. Evidently, there exist $lt+|e_1|$ eigenvectors of $L$, out of which $(|e_1|-1)+l(t-1)=(lt+|e_1|)-(l+1)$ can be calculated by using \Cref{cute1} and \Cref{cute2}. We know that, among the remaining $l+1$ eigenvectors, $\mathbf{1}$ is an eigenvector of $L$. \item[(4)] If $\delta_V(v)=c$ for all the peripheral vertices $v\in U$ and $\frac{\delta_E(e)}{|e|^2}=\omega$ for all the hyperedges $e\in E$, then \Cref{cute3} yields the eigenvalue $-\frac{\omega}{c}|e_1|=-\frac{\omega}{c}|W|$ with multiplicity at least $l-1$. \item[(5)] We can easily verify that the last remaining eigenvalue is $-\frac{\omega}{c}(lt+|W|)=-\frac{\omega}{c}|V|$ with an eigenvector $y\in \mathbb{R}^V$ defined by $$ y(v)= \begin{cases} \phantom{-}lt &\text{~if~} v\in W,\\ -|W| &\text{~if~} v\in U. \end{cases} $$ \item[(6)] Note that all the eigenvalues of the $(l,1)$-hyperflower with $t$-twins are multiples of $\frac{\omega}{c}$. Therefore, if $\frac{\omega}{c}$ is an integer then all the eigenvalues of $L$ are integers. \end{itemize}
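The following informal Python/NumPy sketch (our own construction, not part of the formal development) assembles the diffusion operator of the $(3,1)$-hyperflower with $2$ twins and $|W|=2$ in the framework $\delta_V(v)=1$, $\delta_E(e)=|e|^2$ (so that $\omega=c=1$), and confirms the spectrum predicted by (1)--(5): the eigenvalues of $-L$ are $0$, $|W|=2$ with multiplicity $l-1=2$, $|W|+t=4$ with multiplicity $l(t-1)=3$, $|V|=8$, and $l(|W|+t)=12$.
\begin{verbatim}
import numpy as np

l, t = 3, 2
W = [0, 1]                                                 # central core e_1
U = [[2 + t * i + s for s in range(t)] for i in range(l)]  # petals U_i
E = [set(W) | set(Ui) for Ui in U]                         # e_{1i} = W u U_i
V = sorted({v for e in E for v in e})
idx = {v: i for i, v in enumerate(V)}

# delta_V = 1 and delta_E(e) = |e|^2, hence delta_E(e)/|e|^2 = 1 for every e.
L = np.zeros((len(V), len(V)))
for e in E:
    for v in e:
        for u in e:
            L[idx[v], idx[u]] += 1.0
        L[idx[v], idx[v]] -= float(len(e))

print(np.round(np.sort(np.linalg.eigvalsh(-L)), 4))
# -> [ 0.  2.  2.  4.  4.  4.  8. 12.]
\end{verbatim}
As a consistency check, the sum of these eigenvalues equals the trace of $-L$, which is $\sum_{v\in V}\sum_{e\in E_v}(|e|-1)=36$ here.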
In an $(l,1)$-hyperflower, when the central set $W$ is a singleton $\{v_0\}$, the hyperflower becomes a sunflower. \begin{df}[Sunflower]\cite[Definition 2.4]{MR3116407} Let $G=(V,E)$ be a $k$-uniform hypergraph. If there exists a disjoint partition of the vertex set $V$ as $V=V_0\cup V_1\cup\ldots \cup V_s$ such that \begin{enumerate} \item[(a)] $V_0=\{v_0\}$ and $|V_i|=k-1$ for all $i=1,2,\ldots, s$, \item[(b)] $E=\{e_i=V_0\cup V_i:i=1,2,\ldots, s\}$, \end{enumerate} then $G$ is called a $k$-uniform sunflower. Each hyperedge of a sunflower is called a leaf. The vertex $v_0$ is referred to as the heart of the sunflower. Note that the degree of the heart is the cardinality of the set $E$, which is also called the size of the sunflower. \end{df} Since the sunflower is a special case of the $(l,1)$-hyperflower, from the list of the eigenvalues of the $(l,1)$-hyperflower one can determine the eigenvalues of the diffusion operator associated with a sunflower. \subsubsection{Spectra of Diffusion Operators of Some More Hypergraphs} \begin{df}[Loose Path]\label{path} A $k$-uniform hypergraph $G=(V,E)$ is said to be a $k$-uniform loose path of size $d$ if all the $d$ hyperedges form a sequence $\{e_i \}^d_{i=1}$ such that $e_i\cap e_j=\emptyset$ if $|i-j|>1$ and $|e_i\cap e_j|=1$ if $|i-j|=1$. \end{df} Note that if $i\neq 1,d$ then each $e_i$ contains $k-2$ cored vertices. Therefore, if $\delta_V(v)=c_i$ for all cored vertices $v\in e_i$, then by \Cref{cute2}, $-\frac{\delta_E(e_i)}{c_i}\frac{1}{k}$ is an eigenvalue of $L$ with multiplicity at least $k-3$. If $i=1$ or $i=d$, then $e_i$ contains $k-1$ cored vertices and the corresponding multiplicity is at least $k-2$. If, for $k\ge 3$, a $k$-uniform hypergraph $G=(V,E)$ satisfies the conditions stated in \Cref{path} with just one exception, namely $|e_1\cap e_d|=1$, then $G$ is called a $k$-uniform \textit{loose cycle} of size $d$. Using \Cref{cute2} we can find the eigenvalues of $L$ of a loose cycle as we have done for a loose path. \section{Spectral bounds for some hypergraph properties}\label{bounds} \begin{df}\label{gendegree} Let $r\in \mathbb{R}^V$ be defined by $r(v):=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{|e|-1}{|e|^2}$ and $r_0:=\max\limits_{v\in V}r(v)$. \end{df} Thus if $\delta_E(e)=w(e)\frac{|e|^2}{|e|-1}$ and $\delta_V(v)=1$, then $r(v)=d(v)$ and $r_0=d_{\max}$, where $d_{\max}:=\max\limits_{v\in V}d(v)$. For an unweighted hypergraph, $w(e)=1$ for all $e\in E$ and hence the degree $d(v)$ of a vertex $v$ is the number of hyperedges containing the vertex $v$. Let $E_v := \{e\in E: v\in e \}$. Thus $d(v)=|E_v|$. Clearly, for an $m$-uniform hypergraph, $r(v)=\frac{m-1}{m^2}\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}$. Now we compute a bound for $r(v)$ in terms of the spectra of the diffusion operator. Since for particular choices of inner products $r(v)$ becomes the degree of $v$, this bound gives a spectral bound for the vertex degree. We use the techniques described in \cite[3.5., p. 300]{MR318007} to prove the next result. \begin{thm} Suppose that $\lambda_2$ and $\lambda_N$ are the second least and the largest eigenvalue, respectively, of $-L$. Then $$\delta_V(\min)\lambda_2\le \frac{|V|}{|V|-1}\min\limits_{v\in V}r(v)\delta_V(v)\le \frac{|V|}{|V|-1}\max\limits_{v\in V}r(v)\delta_V(v)\le \lambda_N\delta_V(\max).$$ \end{thm} \begin{proof} It is easy to show that $\Tilde{L}=-L-\lambda_2(I_{|V|}-\frac{1}{|V|}J)$ is positive semidefinite, where $J$ is the square matrix of order $|V|$ with all its entries equal to $1$. Let, for all $v\in V$, $\chi_v\in\mathbb{R}^V$ be defined by $\chi_v(u)=1$ if $u=v$ and $\chi_v(u)=0$ otherwise. Hence $(\Tilde{L}\chi_v,\chi_v)_V\ge0$ for all $v\in V$, and so $\lambda_2\le \frac{|V|}{|V|-1}\frac{1}{\delta_V(\min)}\min\limits_{v\in V}\sum\limits_{e\in E_v}\frac{\delta_E(e)}{|e|^2}(|e|-1)$. Similarly, for the positive semidefinite matrix $\Tilde{M}=\lambda_N(I_{|V|}-\frac{1}{|V|}J)-(-L)$, we have $(\Tilde{M}\chi_v,\chi_v)_V\ge0$, which implies $\frac{|V|}{|V|-1}\max\limits_{v\in V}\sum\limits_{e\in E_v}\frac{\delta_E(e)}{|e|^2}(|e|-1)\le \lambda_N\delta_V(\max)$. This completes the proof.
\end{proof} The maximum and minimum cardinalities of hyperedges in a hypergraph are called the \textit{rank} $(rk(G))$ and the \textit{corank} $(cr(G))$, respectively, of the hypergraph $G$. Removal of a vertex $v\in V$ from each hyperedge containing it is called the \textit{weak deletion of $v$}. If the weak deletion of a set of vertices increases the number of connected components of the hypergraph $G$, then the set is called a \textit{weak vertex cut} of the hypergraph $G$. The \textit{weak connectivity number} $\kappa_w(G)$ (or simply $\kappa_w$) is the minimum size of a weak vertex cut in the hypergraph $G$. \begin{thm}\label{th-upperbound} Let $G=(V,E)$ be a hypergraph with $|V|\ge3$ such that $G$ contains at least one pair of non-adjacent vertices. Then there exists a constant $\bar{k}$ such that $\lambda_2\le\bar{k} d_{\max}\kappa_w(G)$. \end{thm} \begin{proof} Let $W$ be a weak vertex cut with $|W|=\kappa_w$. Clearly there exists a partition $V=V_1\cup W\cup V_2$ of $V$ such that no vertex of $V_1$ is adjacent to any vertex in $V_2$. Let us consider $y\in \mathbb{R}^V$ defined by $y(v)=|V_2|$ if $v\in V_1$, $y(v)=-|V_1|$ if $v\in V_2$, and $y(v)=0$ if $v\in W$. We define a function $k:V\to \mathbb{R}$ by $$k(v)=\sum_{e\in E_v}\sum_{u\in e\cap W}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}.$$ Suppose $\Bar{k}=\sup\limits_{e\in E,v\in V}\left\{\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\right\}$. It is easy to verify that, for all $v\in V_1\cup V_2$, $k(v)\le d_{\max}|W|\Bar{k}$. Clearly, for any $v\in V_1$, there exists no $e\in E_v$ such that $e\cap V_2\neq\emptyset$. Hence for any $v\in V_1$, one has \begin{align*} (Ly)(v)&=-\sum_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum_{u\in e\cap W}|V_2|\\ &=-k(v)|V_2|. \end{align*} Similarly, for any $v\in V_2$, $(Ly)(v)=k(v)|V_1|$. Hence, \sloppy $(-Ly,y)_V\le\sum_{v\in V}\delta_V(v)k(v)(y(v))^2\le \Bar{k}d_{\max}|W|(y,y)_V$. Thus $\lambda_2\le \Bar{k}d_{\max}|W|=\Bar{k}d_{\max}\kappa_w(G)$. \end{proof} \begin{rem} In the above result, if we put $\delta_E(e)=\frac{|e|^2}{|e|-1}$ for all $e\in E$ and $\delta_V(v)=1$ for all $v\in V$, the diffusion operator $L$ becomes the diffusion operator described in \cite{banerjee2020synchronization}. This operator is also the negative of the Laplacian matrix for hypergraphs described in \cite{MR4208993}. With the above choice of $\delta_E$ and $\delta_V$ we have $\Bar{k}=\frac{1}{cr(G)-1}$. In the above theorem, instead of the supremum $\bar{k}$, any upper bound of the set $\left\{\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\right\}_{e\in E,v\in V}$ yields an upper bound of $\lambda_2$ involving the weak connectivity number; we chose the supremum to make the upper bound as sharp as possible. \end{rem} \begin{cor}\label{cor-upper} Let $G=(V,E)$ be a hypergraph with $|V|\ge3$ such that $G$ contains at least one pair of non-adjacent vertices, $d_{\max}< cr(G)$, $\delta_V(v)=1$ for all $v\in V$, and $\delta_E(e)=w(e)\frac{|e|^2}{|e|-1}$ with $w(e)\le 1$ for all $e\in E$. Then $\lambda_2\le \kappa_w(G)$. \end{cor} \begin{proof} It is easy to verify that $\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}=\frac{w(e)}{|e|-1}$, and hence, as $w(e)\le 1$, $\Bar{k}\le\frac{1}{cr(G)-1}$. Thus, the condition $d_{\max}< cr(G)$ leads us to $\Bar{k}d_{\max}\le 1$, and the result follows from \Cref{th-upperbound}. \end{proof} \Cref{cor-upper} is stated and proved in \cite[Theorem 3.1]{MR4208993} independently.
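The bound of \Cref{cor-upper} is easy to probe numerically. The sketch below is an informal Python/NumPy aid; the brute-force \texttt{kappa\_w} routine and all names are ours. It uses a small unweighted hypergraph with $d_{\max}=2<cr(G)=4$ and a pair of non-adjacent vertices.
\begin{verbatim}
import numpy as np
from itertools import combinations

E = [frozenset({0, 1, 2, 3}), frozenset({3, 4, 5, 6}), frozenset({5, 6, 7, 0})]
V = list(range(8))

def n_components(vertices, edges):
    # Components of the hypergraph: u ~ v iff some hyperedge contains both.
    remaining, count = set(vertices), 0
    while remaining:
        stack, seen = [next(iter(remaining))], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            for e in edges:
                if v in e:
                    stack.extend(e - seen)
        remaining -= seen
        count += 1
    return count

def kappa_w(V, E):
    # Smallest set whose weak deletion increases the number of components.
    base = n_components(V, E)
    for size in range(1, len(V)):
        for S in map(set, combinations(V, size)):
            if n_components(set(V) - S, [e - S for e in E]) > base:
                return size
    return None

# Laplacian of [MR4208993]: delta_V = 1, delta_E(e) = |e|^2/(|e|-1).
idx = {v: i for i, v in enumerate(V)}
L = np.zeros((len(V), len(V)))
for e in E:
    w = 1.0 / (len(e) - 1)
    for v in e:
        for u in e:
            L[idx[v], idx[u]] += w
        L[idx[v], idx[v]] -= w * len(e)

lam2 = np.sort(np.linalg.eigvalsh(-L))[1]
print(round(lam2, 4), kappa_w(V, E))   # lambda_2 <= kappa_w, as predicted
\end{verbatim}
Here weakly deleting $\{0,3\}$ disconnects $\{1,2\}$ from the remaining vertices, so $\kappa_w(G)=2$, comfortably above $\lambda_2$.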
For any $S\subset V$, the collection of all the hyperedges containing vertices from both the sets $S$ and $V\setminus S$ is called the \textit{edge boundary of the set} $S$. The edge boundary of $S$ is denoted by $\partial S$. \begin{thm}\label{2n} Let $G$ be a hypergraph. For any nonempty $S\subset V$, we have $$4\lambda_2\frac{\delta_V(\min)}{\delta_E(\max)}\le \frac{|\partial S| |V|}{|S|(|V|-|S|)}\le \lambda_N\delta_V(\max)\max_{e\in E}\left\{\frac{|e|^2}{(|e|-1)\delta_E(e)}\right\}. $$ \end{thm} \begin{proof} We consider a function $z_s\in \mathbb{R}^V$ defined by $z_s(v):=\begin{cases} |V|-|S| & \text{ if } v\in S, \\ -|S| &\text{ otherwise, } \end{cases}$ corresponding to the set $S\ne\emptyset$. Thus, by using the AM--GM inequality, we have \begin{align*} (-Lz_s,z_s)_V&=\frac{1}{2}\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum_{u,v\in e}(z_s(u)-z_s(v))^2\\ &=\frac{1}{2}\sum_{e\in \partial S}\frac{\delta_E(e)}{|e|^2}2|e\setminus S||S\cap e||V|^2\\ &\le\frac{1}{4}\delta_E(\max)|\partial S||V|^2. \end{align*} \sloppy Now, as $(z_s,z_s)_V\ge \delta_V(\min)|S|(|V|-|S|)|V|$, we have $\lambda_2\le\frac{(-Lz_s,z_s)_V}{(z_s,z_s)_V}\le\frac{1}{4}\frac{\delta_E(\max)}{\delta_V(\min)}\frac{|\partial S| |V|}{|S|(|V|-|S|)}$. Since $|e\setminus S||S\cap e|\ge |e|-1$ for all $e\in \partial S$, we have $(-Lz_s,z_s)_V =\frac{1}{2}\sum_{e\in \partial S}\frac{\delta_E(e)}{|e|^2}2|e\setminus S||S\cap e||V|^2\ge |\partial S|\min\limits_{e\in E}\left\{\frac{\delta_E(e)}{|e|^2}(|e|-1)\right\}|V|^2$. Since $(z_s,z_s)_V\le \delta_V(\max)|S|(|V|-|S|)|V|$ and $\lambda_N\ge\frac{(-Lz_s,z_s)_V}{(z_s,z_s)_V}$, it follows that $\frac{|\partial S| |V|}{|S|(|V|-|S|)}\le \lambda_N\delta_V(\max)\max\limits_{e\in E}\left\{\frac{|e|^2}{(|e|-1)\delta_E(e)}\right\}$. This completes the proof. \end{proof} \begin{rem} If $\delta_V(v)=1$ for all $v\in V$ and $\delta_E(e)=\frac{|e|^2}{|e|-1}$, then \Cref{2n} leads us to $4\lambda_2\frac{cr(G)-1}{rk(G)^2}\le \frac{|\partial S||V|}{|S|(|V|-|S|)}\le \lambda_N$, and this implies the result given in \cite[Theorem 3.2, p-12]{MR4208993}. \end{rem} \begin{cor} Let $G$ be a hypergraph. For any nonempty $S\subset V$, we have $$4\lambda_2\frac{\delta_V(\min)}{\delta_E(\max)}\le \frac{|\partial S| |V|}{|S|(|V|-|S|)}\le \lambda_N\frac{\delta_V(\max)}{\delta_E(\min)}\frac{(rk(G))^2}{cr(G)-1}. $$ \end{cor} Let $mc(G):=\max\{|\partial S|:\emptyset\neq S\subset V\}$ and $bw(G):=\min\left\{|\partial S|:S\subset V,|S|=\left\lfloor\frac{|V|}{2}\right\rfloor\right\}$ be the \textit{maximum cut} and the \textit{bipartition width}, respectively, of a hypergraph $G(V,E)$. Now we have the following corollaries. \begin{cor} For any hypergraph $G(V,E)$, $$mc(G)\le\frac{|V|}{4}\lambda_N\delta_V(\max)\max\limits_{e\in E}\left\{\frac{|e|^2}{(|e|-1)\delta_E(e)}\right\}.$$ \end{cor} \begin{cor} For any hypergraph $G(V,E)$, if $\alpha(|V|)=\frac{4}{|V|}$ when $|V|$ is even, and $\alpha(|V|)=\frac{4|V|}{|V|^2-1}$ when $|V|$ is odd, then $$4\lambda_2\frac{\delta_V(\min)}{\delta_E(\max)}\le \alpha(|V|)bw(G)\le \lambda_N\delta_V(\max)\max_{e\in E}\left\{\frac{|e|^2}{(|e|-1)\delta_E(e)}\right\}. $$ \end{cor} Now we recall the \textit{Cheeger constant} $$h(G):=\min_{S(\ne\emptyset)\subset V}\left\{\frac{|\partial S|}{\min(|S|,|V\setminus S|)}\right\}$$ of a hypergraph $G$ \cite{MR4208993}. \begin{cor} If $G$ is a connected hypergraph then $\lambda_2\le \frac{1}{2}\frac{\delta_E(\max)}{\delta_V(\min)}h(G)$. \end{cor} \begin{proof} This result follows immediately from \Cref{2n}. Clearly, there exists $S\subset V$ such that $h(G)=\frac{|\partial S|}{|S|}$ and $|S|\le \frac{1}{2}|V|$. This leads us to $\frac{|V|}{|V|-|S|}\le 2$.
Hence, by \Cref{2n}, $\lambda_2\le \frac{1}{2}\frac{\delta_E(\max)}{\delta_V(\min)}h(G)$. \end{proof} We recall that in an $m$-uniform hypergraph $G(V,E)$, $|e|=m$ for all $e\in E$. For any $m$-uniform hypergraph, \Cref{2n} can be refined as follows. \begin{prop}\label{rod-gen} Let $G$ be an $m$-uniform hypergraph. Suppose that $\gamma:\mathbb{N}\to\mathbb{R}$ is defined by $\gamma(m)=1$ if $m$ is even and $\gamma(m)=\frac{m^2}{m^2-1}$ if $m$ is odd. Then for any nonempty $S\subset V$, we have $$\lambda_N\delta_V(\max)\max_{e\in E}\left\{\frac{m^2}{(m-1)\delta_E(e)}\right\}\ge \frac{|\partial S| |V|}{|S|(|V|-|S|)}\ge4\lambda_2\frac{\delta_V(\min)}{\delta_E(\max)}\gamma(m) . $$ \end{prop} \begin{proof} The proof is similar to that of \Cref{2n}, together with the fact that for any $m$-element hyperedge $e$ with $|e\cap S|=s$, we have $(m-s)s\le \begin{cases} \frac{m^2}{4}& \text{~if~} m \text{~is even,~} \\ \frac{m^2-1}{4} & \text{~if~} m \text{~is odd.~} \end{cases}$ \end{proof} \begin{rem} If $\delta_V(v)=1$ for all $v\in V$ and $\delta_E(e)=|e|$ then \Cref{rod-gen} provides the same result stated in \cite[Lemma-1, sec.2, p.917]{rodriguez2009Laplacian}. \end{rem} Our next result is a generalization of \cite[Theorem-4.2]{MR4208993}, and its proof is similar to the proof of \cite[Theorem-4.2]{MR4208993}. \begin{thm} If $G$ is a hypergraph with $\lambda_2\le r(v)$ for all $v\in V$ then $$\sqrt{\lambda_2(2r_0-\lambda_2)}\ge \frac{\delta_E(\min)}{\delta_V(\max)(rk(G))^2}h(G).$$ \end{thm} \begin{proof} Let $z_2$ be the eigenfunction of $-L$ corresponding to the eigenvalue $\lambda_2$. Suppose that $V_1=\{v\in V:z_2(v)\ge0\}$ and $V_2=\{v\in V:z_2(v)<0\}$. Let $y\in\mathbb{R}^V$ be defined by $y(v)=z_2(v)$ if $v\in V_1$, and $y(v)=0$ otherwise. Since $$\lambda_2(y,y)_V\ge \left(\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}(y(u)-y(v))^2\right)-\sum\limits_{e\in \partial V_1}\frac{\delta_E(e)}{|e|^2}\sum\limits_{u\in e\cap V_2;v\in e\cap V_1}z_2(u)z_2(v)$$ and $$(2r_0-\lambda_2)(y,y)_V\ge \left(\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}(y(u)+y(v))^2\right)+\sum\limits_{e\in \partial V_1}\frac{\delta_E(e)}{|e|^2}\sum\limits_{u\in e\cap V_2;v\in e\cap V_1}z_2(u)z_2(v), $$ we conclude that \begin{align}\label{l1} \lambda_2(2r_0-\lambda_2)(y,y)_V^2&\ge \left(\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}(y(u)-y(v))^2\right)\left(\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}(y(u)+y(v))^2\right)\notag \\&-\alpha\left(4\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}y(u)y(v)+\alpha\right), \end{align} where $\alpha=\sum\limits_{e\in \partial V_1}\frac{\delta_E(e)}{|e|^2}\sum\limits_{u\in e\cap V_2;v\in e\cap V_1}z_2(u)z_2(v)$. Clearly $\alpha\le 0$. Since $\lambda_2\le r(v)$, we have $$2\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}y(u)y(v)+\alpha=\sum\limits_{v\in V_1}(r(v)-\lambda_2)\delta_V(v)z_2(v)^2\ge0.$$ Therefore, since $y(u)y(v)\ge 0$ for all $u,v\in V_1$, \begin{align}\label{l2} 4\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}y(u)y(v)+\alpha &=2\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}y(u)y(v)\notag\\&\quad+\left(2\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e\cap V_1}y(u)y(v)+\alpha\right)\ge 0.
\end{align} Since $r(v)\ge \lambda_2$, \Cref{l1} and \Cref{l2} imply that \sloppy \begin{align}\label{c1} &\lambda_2(2r_0-\lambda_2)(y,y)_V^2\notag\\&\ge\left(\frac{1}{2}\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{u,v\in e}(y(u)-y(v))^2\right)\left(\frac{1}{2}\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{u,v\in e}(y(u)+y(v))^2\right). \end{align} Now, by the Cauchy--Schwarz inequality, we have \begin{align}\label{c2} &\left(\frac{1}{2}\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{u,v\in e}(y(u)-y(v))^2\right)\left(\frac{1}{2}\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{u,v\in e}(y(u)+y(v))^2\right)\notag\\&\ge \left(\frac{1}{2}\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum_{u,v\in e}(y^2(u)-y^2(v))\right)^2. \end{align} Suppose $t_0<t_1<\ldots<t_k$ are all possible distinct values of $y$ and $V_i=\left\{v\in V:y(v)\ge t_i\right\}$. Clearly $V=V_0\supseteq V_1\supseteq \ldots\supseteq V_k$. It can be easily verified that \sloppy$\left(\frac{1}{2}\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum_{u,v\in e}(y^2(u)-y^2(v))\right)\ge \frac{\delta_E(\min)}{(rk(G))^2}\sum\limits_{i\in \mathbb{N}_k} |\partial V_i|(t_i^2-t_{i-1}^2)\ge \frac{\delta_E(\min)}{(rk(G))^2}h(G)\sum\limits_{i\in \mathbb{N}_k} | V_i|(t_i^2-t_{i-1}^2)\ge\frac{\delta_E(\min)}{(rk(G))^2}h(G)\sum\limits_{i\in\mathbb{N}_{k}}( | V_i|-|V_{i+1}|)t_i^2 \ge\frac{\delta_E(\min)}{\delta_V(\max)(rk(G))^2}h(G)(y,y)_V$. Hence \Cref{c1} and \Cref{c2} lead us to $\sqrt{\lambda_2(2r_0-\lambda_2)}\ge \frac{\delta_E(\min)}{\delta_V(\max)(rk(G))^2}h(G)$. \end{proof} \begin{thm} For any hypergraph $G(V,E)$, $4\lambda_N\frac{\delta_V(\max)}{\delta_E(\min)}\ge h(G)$. \end{thm} \begin{proof} Suppose $S\subset V$ is such that $h(G)=\frac{|\partial S|}{|S|}$ and $z_N\in\mathbb{R}^V$ is the eigenvector of $L$ corresponding to the largest eigenvalue $\lambda_N$. If $\chi_S\in \mathbb{R}^V$ is the characteristic function of the set $S$, then $\lambda_N=\frac{(-Lz_N,z_N)_V}{(z_N,z_N)_V}\ge \frac{(-L\chi_S,\chi_S)_V}{(\chi_S,\chi_S)_V}\ge \frac{1}{4}\frac{\delta_E(\min)}{\delta_V(\max)}h(G)$. \end{proof} \section{General adjacency operator of a hypergraph}\label{adjacency} For graphs, the adjacency matrix can be expressed as the difference of the degree matrix and the Laplacian matrix associated with the graph. Here we define the general adjacency operator $A_G:\mathbb{R}^V\to \mathbb{R}^V$ for a hypergraph $G(V,E)$ as follows: \begin{equation}\label{genadjacency} ((A_G)x)(v):=(L_G(x))(v)+r(v)x(v), \end{equation} for all $x\in\mathbb{R}^V$, where $L_G$ is the diffusion operator of $G$ and $r(\in \mathbb{R}^V)$ is defined in \Cref{gendegree}. By \Cref{L} and \Cref{gendegree}, for all $x\in\mathbb{R}^V$, \begin{align*} (L_G(x))(v) &=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e}(x(u)-x(v))\\ &=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\left(\sum\limits_{u\in e;u\neq v}x(u)-\sum\limits_{u\in e;u\neq v}x(v)\right)\\ &=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v}x(u)-\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{|e|-1}{|e|^2}x(v)\\ &=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v}x(u)-r(v)x(v). \end{align*} Therefore, by \Cref{genadjacency} we have \begin{equation}\label{A} (A_Gx)(v)=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v}x(u).
\end{equation} Henceforth we simply use $A$ to denote the general adjacency operator of a hypergraph instead of $A_G$ (if there is no confusion about the hypergraph $G$). Now we compute some eigenvalues and their eigenspaces of the general adjacency operators associated with some classes of hypergraphs. \begin{rem} For some specific values of $\delta_E$ and $\delta_V$, the diffusion operator coincides with some conventional operators. Similarly, if we choose $\delta_V(v)=1$ for all $v\in V$ and $\delta_E(e)=\frac{|e|^2}{|e|-1}$ for all $e\in E$, our adjacency operator becomes the adjacency operator described in \cite{MR4208993}. So all the theorems on the adjacency operator stated in this section are also valid for the adjacency operator in \cite{MR4208993}. \end{rem} \begin{thm}\label{adj_cute_1} Suppose that $G=(V,E)$ is a hypergraph. If $e_0\in E$ is such that \begin{itemize} \item[(i)] $e_0=e_u\cup e_v$, with $e_u\cap e_v=\emptyset$, and $|e_u|\ge 2$, \item[(ii)]\label{einte0} $e\cap e_u=\emptyset$ for all $e(\neq e_0)\in E$, \item[(iii)] $\delta_V(v)=c$ for all $v\in e_u$, \end{itemize} then $-\frac{\delta_E(e_0)}{c|e_0|^2}$ is an eigenvalue of $A$ with multiplicity at least $|e_u|-1$. \end{thm} \begin{proof} Suppose that $e_u=\{u_0,u_1,\ldots,u_k\}$. Corresponding to each $u_i$, for $i=1,\ldots,k$, we define $y_i\in \mathbb{R}^V$ as \[ y_i(v)= \begin{cases} \phantom{-}1&\text{~if~} v=u_0,\\ -1&\text{~if~}v=u_i,\\ \phantom{-}0 &\text{~otherwise.~} \end{cases} \] So $(Ay_i)(v)=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v}y_i(u)$. Now we have the following observations. \begin{itemize} \item[(a)] For $v=u_0$, one has $E_v=\{e_0\}$. Therefore, $(Ay_i)(u_0)=\frac{\delta_E(e_0)}{\delta_V(u_0)}\frac{1}{|e_0|^2}\sum\limits_{u\in e_0;u\neq u_0}y_i(u)$. Since $y_i(v)=0$ for all $v\in V$ with $v\neq u_0$ and $v\neq u_i$, evidently, $(Ay_i)(u_0)=\frac{\delta_E(e_0)}{\delta_V(u_0)}\frac{1}{|e_0|^2}y_i(u_i)=-\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|^2}y_i(u_0)$. \item[(b)] Similarly, for any $i=1,2,\ldots,k$, we have $(Ay_i)(u_i)=\frac{\delta_E(e_0)}{\delta_V(u_i)}\frac{1}{|e_0|^2}y_i(u_0)=-\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|^2}y_i(u_i)$. \item[(c)] For all $j\neq i$ and $j\neq0$, evidently, $\sum\limits_{u\in e;u\neq u_j}y_i(u)=0$ for all $e\in E_{u_j}$. Therefore, $(Ay_i)(u_j)=0$ for $j\neq 0$ and $j\neq i$. \item[(d)] For all $v\notin e_u$, $\sum\limits_{u\in e;u\neq v}y_i(u)=0$ for all $e\in E_v$ and therefore, $(Ay_i)(v)=0$. \end{itemize} Therefore, $Ay_i=-\frac{\delta_E(e_0)}{c}\frac{1}{|e_0|^2}y_i$ for all $i=1,2,\ldots ,k$. Evidently, $\{y_i\}_{i=1}^k$ is a linearly independent set. The rest of the proof is similar to the proof of \Cref{cute2}. \end{proof} \begin{thm}\label{adj_cute_2} Suppose that $G=(V,E)$ is a hypergraph such that \begin{itemize} \item[(i)]\label{con1} there exists $E_0=\{e_0,e_1,\ldots,e_s\}\subset E$ such that $W=\bigcap\limits_{i=0}^se_i$ and $e\cap W=\emptyset$ for all $e\in E\setminus E_0$, \item[(ii)] $|W|\ge 2$ and $W=\{v_0,v_1,\ldots ,v_k\}$, \item[(iii)] $\delta_V(v)=c$ for all $v\in W$, and $\sum\limits_{e\in E_0}\frac{\delta_E(e)}{c}\frac{1}{|e|^2}=\nu$. \end{itemize} Then $-\nu$ is an eigenvalue of $A$ with multiplicity at least $|W|-1$.
\end{thm} \begin{proof} For each $i=1,2,\ldots,k$ we define a function $y_i\in \mathbb{R}^V$ as \[ y_i(v)= \begin{cases} \phantom{-}1 &\text{~if~} v=v_0,\\ -1&\text{~if~} v=v_i,\\ \phantom{-}0 &\text{~otherwise.~} \end{cases} \] By \Cref{A} we have $(Ay_i)(v)=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v}y_i(u)$. Considering different cases, we have the following facts. \begin{enumerate} \item[(a)] From condition (i) of the theorem we have $E_v=E_0$ for all $v\in W$. Therefore, $(Ay_i)(v_i)=\sum\limits_{e\in E_0}\frac{\delta_E(e)}{c}\frac{1}{|e|^2}y_i(v_0)=-\sum\limits_{e\in E_0}\frac{\delta_E(e)}{c}\frac{1}{|e|^2}y_i(v_i)=-\nu y_i(v_i)$ and $(Ay_i)(v_0)=\sum\limits_{e\in E_0}\frac{\delta_E(e)}{c}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v_0}y_i(u)=\sum\limits_{e\in E_0}\frac{\delta_E(e)}{c}\frac{1}{|e|^2}y_i(v_i)=-\nu y_i(v_0)$. \item[(b)] For $j\neq i$ and $j\ne 0$, we have $\sum\limits_{u\in e;u\ne v_j}y_i(u)=0$ for all $e\in E$ and thus $(Ay_i)(v_j)=0$. \item[(c)] Note that $y_i(v)=0$ for all $v\in V\setminus W$ and, for any $e\in E$, either both $v_0,v_i$ belong to $e$ or none of them belongs to $e$. Therefore, $\sum\limits_{u\in e;u\ne v}y_i(u)=0$ for all $v\in V\setminus W$ and $e\in E_v$, and this leads us to $(Ay_i)(v)=0$ for all $v\notin W$. \end{enumerate} It is clear that $-\nu$ is an eigenvalue of $A$. Since $\{y_i\}_{i=1}^k$ is a linearly independent set, the rest of the proof is similar to the proof of \Cref{cute2}. \end{proof} \begin{thm}\label{adj_cute3} Suppose that $G=(V,E)$ is a hypergraph with $E_0=\{e_0,e_1,\ldots,e_k\}\subset E$ such that \begin{itemize} \item [(1)] $W=\bigcap\limits_{i=0}^ke_i\neq \emptyset$, and $W\cap e=\emptyset$ for all $e\in E\setminus E_0$, \item[(2)] for all $i=0,1,\ldots,k$ there exists an $F_i\subset V$ such that $e_i=W\cup F_i$ with $|F_i|=t$ and $F_i\cap W=\emptyset$, \item[(3)] $F_i\cap e=\emptyset$ for all $e(\ne e_i)\in E$, \item[(4)] there exist $c,\omega\in \mathbb{R}$ such that $\delta_V(v)=c$ for all $v\in \bigcup\limits_{i=0}^kF_i$ and $\frac{\delta_E(e)}{|e|^2}=\omega$ for all $e\in E_0$. \end{itemize} Then $\frac{\omega}{c}(t-1)$ is an eigenvalue of $A$ with multiplicity at least $|E_0|-1$. \end{thm} \begin{proof} We define $y_i\in \mathbb{R}^V$ for all $i=1,2,\ldots,k$, as $$ y_i(v)= \begin{cases} -1&\text{~if~} v\in F_0,\\ \phantom{-}1&\text{~if~}v\in F_i,\\ \phantom{-}0&\text{~otherwise.~} \end{cases} $$ By \Cref{A} we have $(Ay_i)(v)=\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v}y_i(u)$. Now we consider the following cases to prove the result. \begin{itemize} \item[(a)] Since $E_v=\{e_j\}$ for $v\in F_j$, we have $(Ay_i)(v)=\frac{\delta_E(e_j)}{\delta_V(v)}\frac{1}{|e_j|^2}\sum\limits_{u\in e_j;u\neq v}y_i(u)=\frac{\delta_E(e_j)}{\delta_V(v)}\frac{1}{|e_j|^2}(|F_j|-1)y_i(v)=\frac{\omega}{c}(t-1)y_i(v)$. \item[(b)] Since $E_v=E_0$ for all $v\in W$, $(Ay_i)(v)=\sum\limits_{e\in E_0}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v}y_i(u)=\frac{\delta_E(e_0)}{\delta_V(v)}\frac{1}{|e_0|^2}(-|F_0|)+\frac{\delta_E(e_i)}{\delta_V(v)}\frac{1}{|e_i|^2}|F_i|=\frac{\omega}{\delta_V(v)}(t-t)=0$. \item[(c)] For any $v\in V\setminus (W\cup(\bigcup\limits_{i=0}^kF_i))$, we have $\sum\limits_{u\in e;u\neq v}y_i(u)=0$ for all $e\in E_v$. Therefore, $(Ay_i)(v)=0.$ \end{itemize} Thus $\frac{\omega}{c}(t-1)$ is an eigenvalue of $A$.
Since $\{y_i\}_{i=1}^k$ are linearly independent, the multiplicity of the eigenvalue is at least $k=|E_0|-1$. \end{proof} \subsection{Complete Adjacency Spectra of Hyperflowers} Here we compute the complete list of eigenvalues of the adjacency operator associated with the $(l,1)$-hyperflower $G=(V,E)$ with $t$-twins. Suppose that, for some $\gamma\in\mathbb{R}$, the function $y_{\gamma}\in\mathbb{R}^V$ is defined by $$ y_{\gamma}(v)= \begin{cases} \gamma &\text{~if~} v\in W,\\ 1 &\text{~if~} v\in U, \end{cases} $$ where $V=U\cup W$ is the partition of the set of vertices described in \Cref{hyperflower}. If $\frac{\delta_E(e)}{\delta_V(v)|e|^2}=\alpha$ for all $v\in V$ and for all $e\in E$, then $$ (A y_{\gamma})(v)= \begin{cases} l\alpha(\gamma(|W|-1)+t) &\text{~if~} v\in W,\\ \alpha (|W|\gamma+(t-1) ) &\text{~if~} v\in U. \end{cases} $$ Therefore, if $\gamma$ is a root of \begin{equation}\label{hyperflower_last} |W|x^2+(t+l-l|W|-1)x-lt=0 \end{equation} then $y_\gamma$ is an eigenvector of $A$ with eigenvalue $\alpha (|W|\gamma+(t-1))$. The two roots of \Cref{hyperflower_last} give us two eigenvalues of $A$. Now by \Cref{adj_cute_1}, if $\frac{\delta_E(e)}{\delta_V(v)|e|^2}=\alpha$ for all $v\in V$ and for all $e\in E$, then, corresponding to the $t$ twins of each hyperedge $e\in E$, $-\alpha$ becomes an eigenvalue of $A$ with multiplicity at least $t-1$. Evidently, for the $l$ hyperedges, this yields in total at least $l(t-1)$ such eigenvalues (counted with multiplicity). If $\delta_V(v)=c$ for all $v\in U$ and $\frac{\delta_E(e)}{|e|^2}=\mu$ for all $e\in E$, then \Cref{adj_cute3} implies that $\frac{\mu}{c}(t-1)$ becomes an eigenvalue of $A$ with multiplicity at least $l-1$. Similarly, if $\delta_V(v)=c$ for all $v\in V$ and $\sum\limits_{e\in E}\frac{\delta_E(e)}{c|e|^2}=\nu$, then \Cref{adj_cute_2} shows that $-\nu$ is an eigenvalue of $A$ with multiplicity at least $|W|-1$. Since $2+l(t-1)+(l-1)+(|W|-1)=lt+|W|=|V|$, we have the complete list of eigenvalues of $A$; a numerical illustration is given below. Evidently, if $\frac{\delta_E(e)}{\delta_V(v)|e|^2}=\alpha$ for all $v\in V$ and all $e\in E$, then the determinant of $A_G$ is \begin{align*} \det(A_G)&=(-1)^{|V|-l-1}\left[(t-1)^2-|W|lt-(t-1)(t-1-|W|l+l)\right]\alpha^{|V|}(t-1)^{(l-1)}|E|^{|W|-1}\\ &=(-1)^{|V|-l-1}l\left[1-(t+|W|)\right]\alpha^{|V|}(t-1)^{(l-1)}|E|^{|W|-1}. \end{align*} Note that if $\alpha$ is an integer then $\det(A)$ is always an integer. For example, if we consider $\delta_V(v)=1$ and $\delta_E(e)=|e|^2$, then $\alpha=1$, which implies that the determinant of the adjacency matrix considered in \cite{rodriguez2003Laplacian, rodriguez2009Laplacian} is an integer.
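The following informal Python/NumPy sketch (our own variable names; it assumes the framework $\delta_V(v)=1$, $\delta_E(e)=|e|^2$, so that $\alpha=\mu=1$ and $\nu=l$) checks this list for the $(3,1)$-hyperflower with $2$ twins and $|W|=2$. The predicted eigenvalues of $A$ are $-1$ (multiplicity $l(t-1)=3$), $t-1=1$ (multiplicity $l-1=2$), $-\nu=-3$ (multiplicity $|W|-1=1$), and the two values $|W|\gamma+(t-1)=2\pm\sqrt{13}$ coming from \Cref{hyperflower_last}, with $\det(A)=-27$ by the formula above.
\begin{verbatim}
import numpy as np

l, t = 3, 2
W = [0, 1]
U = [[2 + t * i + s for s in range(t)] for i in range(l)]
E = [set(W) | set(Ui) for Ui in U]
V = sorted({v for e in E for v in e})
idx = {v: i for i, v in enumerate(V)}

# Induced adjacency matrix B: B_uv = sum over shared edges of delta_E(e)/|e|^2.
# With delta_V = 1 and delta_E(e) = |e|^2, each shared edge contributes 1.
B = np.zeros((len(V), len(V)))
for e in E:
    for v in e:
        for u in e:
            if u != v:
                B[idx[v], idx[u]] += 1.0

print(np.round(np.sort(np.linalg.eigvalsh(B)), 4))
# -> [-3. -1.6056 -1. -1. -1.  1.  1.  5.6056], i.e. 2 -/+ sqrt(13) etc.
print(round(np.linalg.det(B)))   # -> -27
\end{verbatim}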
Now we discuss some results involving the adjacency operator $A$. \begin{rem} \label{adjresult} \begin{itemize} \item[(1)] Clearly, if $r(v)=c$ (a constant) for all $v\in V$, then $\mathbf{1}$ is an eigenvector of $A$ with eigenvalue $c$. \item[(2)] For any $x,y\in\mathbb{R}^V$, \begin{equation}\label{Axy} (Ax,y)_V= \sum\limits_{\begin{subarray}{c} u,v\in V;\\ u\neq v \end{subarray}}x(u)y(v)\sum\limits_{e\in E_u\cap E_v}\frac{\delta_E(e)}{|e|^2}. \end{equation} Thus from \Cref{Axy} we have $(Ax,y)_V=(Ay,x)_V=(x,Ay)_V$. So $A$ is a self-adjoint operator. \item[(3)] Corresponding to each $v\in V$, we define $\chi_v\in\mathbb{R}^V$ as $$\chi_v(u)= \begin{cases} 1&\text{~if~} u=v,\\ 0&\text{~otherwise.~} \end{cases} $$ Thus, $(A\chi_u,\chi_v)_V=\sum\limits_{e \in E_u\cap E_v}\frac{\delta_E(e)}{|e|^2}$. Therefore, the operator $A$ induces a matrix $B=\left(B_{uv}\right)_{u,v\in V}$ of order $|V|$ defined by $$B_{uv}:= \begin{cases} (A\chi_u,\chi_v)_V=\sum\limits_{e \in E_u\cap E_v}\frac{\delta_E(e)}{|e|^2} &\text{~if~} u\neq v,\\ 0& \text{~otherwise.} \end{cases}$$ \item[(4)] Since $A$ is self-adjoint, $B$ is a symmetric matrix. Now from \Cref{A} we have $ (A_Gx)(v) =\sum\limits_{u\in V}\frac{1}{\delta_V(v)}\sum\limits_{e \in E_u\cap E_v}\frac{\delta_E(e)}{|e|^2}x(u)=\sum\limits_{u\in V}\frac{1}{\delta_V(v)}B_{vu}x(u)=\frac{1}{\delta_V(v)}(Bx)(v)$. Thus, for the pre-assigned inner product $(\cdot,\cdot)_V$ on $\mathbb{R}^V$, the matrix $B$ can be directly deduced from the general adjacency operator $A$. From now onward we refer to $B_G$ (or simply $B$) as the induced adjacency matrix associated with the hypergraph $G$. \item[(5)] \begin{itemize} \item[(a)] If $\delta_E(e)=|e|^2$, then the matrix $B$ becomes the adjacency matrix of a hypergraph given in \cite{rodriguez2003Laplacian,rodriguez2009Laplacian}. \item [(b)] If $\delta_E(e)=\frac{|e|^2}{|e|-1}$, then the matrix $B$ coincides with the adjacency matrix of a hypergraph introduced in \cite{MR4208993}. \end{itemize} This fact motivates us to incorporate the techniques used in \cite{MR4208993} on the matrix $B$. \item[(6)] Suppose that $\mathfrak{P}^n_{uv}$ is the set of all paths of length $n$ connecting $u,v\in V$. For all \sloppy $p=ue_{1}v_{1}e_{2}\ldots e_{n}v\in \mathfrak{P}^n_{uv}$, we define $\mathfrak{E}(p):=\prod\limits_{i=1}^n\frac{\delta_E(e_i)}{|e_i|^2}$. Thus the $uv$-th entry of the matrix $B^n$ is $ B^n_{uv}=\sum\limits_{p\in \mathfrak{P}^n_{uv}}\mathfrak{E}(p)$, which is positive if and only if there exists a path $p\in \mathfrak{P}^n_{uv}$. \item[(7)] The matrix $B$ induces a $0,1$-matrix $B_0$ defined by ${B_0}_{uv}=0$ if $B_{uv}=0$ and ${B_0}_{uv}=1$ otherwise. So $B_0$ is the adjacency matrix of an unweighted graph $G_0=(V,E_0)$ defined by: for $u,v\in V$ with $u\neq v$, there exists an edge $\{u,v\}\in E_0$ if and only if there exists at least one hyperedge $e\in E$ such that $u,v\in e$. The hypergraph $G$ and the graph $G_0$ have similar properties, like connectivity, graph colouring, etc. Moreover, if we impose a weight $w_0:E_0\to\mathbb{R}$ on $G_0$, where $w_0$ is defined by $w_0(\{u,v\})=B_{uv}$, then the adjacency matrix $B$ of the hypergraph $G$ is also the adjacency matrix of the weighted graph $G_w=(V,E_0,w_0)$. \item [(8)] If there exist $u,v\in V$ such that the distance $d(u,v)=l$, then the $(u,v)$-th entry of the matrix $B^l$ is non-zero, whereas the same entries of $I,B,B^2,\ldots,B^{l-1}$ are zero. Thus $I,B,B^2,\ldots,B^l$ are linearly independent. Similarly, if $diam(G)=k$, then $I,B,B^2,\ldots,B^k$ are linearly independent. If there exist $r$ distinct eigenvalues of $B$, then the degree of the minimal polynomial of $B$ is $r$. Thus there exist $c_0,c_1,\ldots, c_r\in \mathbb{R}$, not all zero, such that $c_0I+c_1B+c_2B^2+\ldots+c_rB^r=0$. Thus $I,B,B^2,\ldots,B^r$ are linearly dependent, which implies $k<r$, i.e., the diameter of the hypergraph $G$ is less than the number of distinct eigenvalues of $B$. \item[(9)] Since $B$ is a symmetric matrix, the result in \cite[Theorem 2.2]{MR4208993} can be restated as follows.
For a connected hypergraph $G(V,E)$ with $n$ vertices and minimum edge cardinality $3$, the diameter of $G$ satisfies $ diam(G) \le \bigg\lfloor 1 + \frac{\log((1-\alpha^2)/\alpha^2)}{\log(\lambda_{max}/\omega)} \bigg\rfloor,$ where $\omega$ is the second largest eigenvalue (in absolute value) of $B$, $\lambda_{max}$ is the largest eigenvalue of $B$ with the unit eigenvector $X_1=((X_1)_1, (X_1)_2,\dots, (X_1)_n)^t$, and $\alpha= \min_i \{(X_1)_i\}$. \end{itemize} \end{rem} \section{Normalized Laplacian operator}\label{nor-lap} In \Cref{gen}, we have mentioned that many conventional concepts of graph and hypergraph Laplacians are actually special cases of the generalized Laplacian operator $\mathfrak{L}$. However, this generalized Laplacian fails to represent some symmetrically normalized Laplacians, for example, the normalized Laplacian of hypergraphs in \cite[section-4, Equation-16]{MR4208993} and the Laplacian given in \cite[section-1.2]{chung1997spectral}. In this section, we introduce and study a general normalized Laplacian $\Tilde{\mathfrak{L}}$ for hypergraphs. Suppose that $\gamma:\mathbb{R}^V\to\mathbb{R}^V$ is the operator defined by $(\gamma(x))(v)=(r(v))^{-\frac{1}{2}}x(v)$. We define the general normalized Laplacian operator $\Tilde{\mathfrak{L}}:\mathbb{R}^V\to\mathbb{R}^V$ as $$\Tilde{\mathfrak{L}}=\gamma\circ\mathfrak{L}\circ \gamma. $$ Now we have some observations on $\Tilde{\mathfrak{L}}$; a numerical illustration follows the list. \begin{enumerate} \item For all $x\in \mathbb{R}^V$ and $v\in V$ we have $(\Tilde{\mathfrak{L}}(x))(v)=x(v)-\sum\limits_{e\in E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2}\sum\limits_{u\in e;u\neq v}(r(u)r(v))^{-\frac{1}{2}}x(u)$. \item Evidently, $0$ is an eigenvalue of $\Tilde{\mathfrak{L}}$ and the dimension of the eigenspace of $0$ is the number of connected components in the hypergraph. The function $\gamma^{-1}(\mathbf{1})\in \mathbb{R}^V$, defined by $v\mapsto (r(v))^{\frac{1}{2}}$, is an eigenvector belonging to the eigenspace of $0$. \item Since $(\Tilde{\mathfrak{L}}x,x)_V=\sum_{e\in E}\frac{\delta_E(e)}{|e|^2}\sum\limits_{\{u,v\}\subset e}(\gamma(x)(u)-\gamma(x)(v))^2$, the operator $\Tilde{\mathfrak{L}}$ is positive semidefinite. Moreover, we have \begin{equation}\label{normalquard} (\Tilde{\mathfrak{L}}x,x)_V\le 2(x,x)_V. \end{equation} \item For $\delta_E(e)=\frac{|e|^2}{|e|-1}$ and $\delta_V(v)=1$, the operator $ \Tilde{\mathfrak{L}}$ becomes the normalized Laplacian operator described in \cite[Section-4, Equation-15]{MR4208993}. If the hypergraph is a graph, $\Tilde{\mathfrak{L}} $ becomes the Laplacian given in \cite[Section-1.2]{chung1997spectral}. \item Consider the matrix $M=(M_{uv})_{u,v\in V}$ defined by $$M_{uv}= \begin{cases}\sum\limits_{e\in E_u\cap E_v}\frac{\delta_E(e)}{\delta_V(v)}\frac{1}{|e|^2} (r(u) r(v))^{-\frac{1}{2}}&\text{~if~}u\neq v,\\ 0&\text{~otherwise.~}\end{cases}$$ Evidently, $\Tilde{\mathfrak{L}}(x) =(I_{|V|}-M)(x)$. Therefore, if $\mu_1\le\mu_2\le\ldots\le\mu_{|V|}$ are the eigenvalues of $\Tilde{\mathfrak{L}}$ then the following hold. \begin{enumerate} \item If the hypergraph $G$ has no isolated vertex, then $\sum\limits_{i=1}^{|V|}\mu_i=|V|$, \item Since $\mu_1=0$, we have $\mu_2\le\frac{|V|}{|V|-1}\le \mu_{|V|}$, \item \Cref{normalquard} leads us to $\mu_i\le 2$ for all $i=1,2,\ldots, |V|$. \end{enumerate} \end{enumerate}
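The following informal Python/NumPy sketch (all names are ours) checks the observations above for the $11$-vertex hypergraph of the earlier example, in the framework $\delta_V(v)=1$, $\delta_E(e)=\frac{|e|^2}{|e|-1}$, in which $r(v)=d(v)$, the vertex degree: the smallest eigenvalue is $0$ with eigenvector $r^{1/2}$, the eigenvalues sum to $|V|$, and every eigenvalue is at most $2$.
\begin{verbatim}
import numpy as np

E = [{1, 2, 3, 4, 5}, {4, 5, 6, 7, 10, 11}, {6, 7, 8, 9}, {8, 9, 10, 11}]
V = sorted(set().union(*E))
idx = {v: i for i, v in enumerate(V)}
n = len(V)

d = np.zeros(n)                      # r(v) = degree d(v) in this framework
for e in E:
    for v in e:
        d[idx[v]] += 1.0

M = np.zeros((n, n))                 # M_uv = sum_e (1/(|e|-1)) / sqrt(d(u)d(v))
for e in E:
    w = 1.0 / (len(e) - 1)           # delta_E(e) / |e|^2
    for v in e:
        for u in e:
            if u != v:
                M[idx[v], idx[u]] += w / np.sqrt(d[idx[u]] * d[idx[v]])

Lt = np.eye(n) - M                   # the general normalized Laplacian
mu = np.sort(np.linalg.eigvalsh(Lt))
print(np.round(mu, 4))               # mu_1 = 0 and all mu_i <= 2
print(round(mu.sum(), 10))           # equals |V| = 11
print(np.allclose(Lt @ np.sqrt(d), 0))  # r^{1/2} lies in the kernel -> True
\end{verbatim}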
\section{Applications} \label{app} Now we focus on the applications of the connectivity operators introduced in this work. In this section we study some applications of our work to some conventional abstract classes of hypergraphs and to some real-world situations. The use of the different Laplacian matrices associated with graphs in discrete dynamical networks, diffusion, synchronization, random walks, and image processing is common in the literature; see \cite{MR4079051,MR3730470,banerjee2020synchronization,MR1076116,MR1877614,wobrock2019image} and references therein. However, the use of a hypergraph in place of the underlying graph may sometimes lead to better results. Instead of the conventional graph topology, some real-world networks need a multi-body framework for better explanation. Indeed, incorporating hypergraphs in a proper way can meet the need for a multi-body framework in many real-world phenomena. \subsection{Spectra of the Power of a Graph} Suppose that $G(V,E)$ is a graph, i.e., a $2$-uniform hypergraph. For any $k(\ge 3)\in \mathbb{N}$, the $k$-th power of $G$, denoted by $G^k=(U, F)$, is a $k$-uniform hypergraph defined by $$U=V\cup \bigcup_{e\in E}W^k_e \text{~where~}W^k_e=\{v_{ei}:i\in\mathbb{N},i\le k-2\}, \text{~and~} F=\{e^{(k)}=e\cup W^k_e:e\in E\}$$ (see \cite[Definition 2.4]{MR3116407} for more details about the power of a graph). In a graph, a vertex $v$ is said to be a pendant vertex if $|E_v|=1$. Suppose that $e$ is an edge of the graph $G$. Since $f\cap W^k_e=\emptyset$ for all $f(\ne e^{(k)})\in F$, one can use \Cref{cute2} and \Cref{adj_cute_1} to determine eigenvalues of the Laplacian operator and the adjacency operator of $G^k$. Thus, we have the following result. \begin{prop} Suppose that $G(V,E)$ is a graph ($2$-uniform hypergraph). For all $k\ge 4$ and $e\in E$, if $\delta_V(v)=c_e$ for all $v\in W_e^k$, then the eigenvalues of the Laplacian and adjacency matrices of $G^k$ are given below. \begin{enumerate} \item $\frac{\delta_E(e^{(k)})}{c_e}\frac{1}{k}$ is an eigenvalue of the Laplacian operator associated with $G^k$ with multiplicity at least $k-3$. \item $-\frac{\delta_E(e^{(k)})}{c_ek^2}$ is an eigenvalue of the general adjacency matrix with multiplicity at least $k-3$. \end{enumerate} If $e(\in E)$ contains a pendant vertex then, instead of $k-3$, in the above two cases the multiplicity becomes at least $k-2$. \end{prop} As we have done before, here we can also compute the eigenvalues in a particular framework by choosing $\delta_E,\delta_V$ appropriately. \subsection{Spectra of Squids} A squid is a $k$-uniform hypergraph $G(V,E)$ such that $$V:=\{v_0\}\cup(\bigcup\limits_{i=1}^{k-1}U_i)\text{~where~} U_i=\{u_{ij}:j\in \mathbb{N}, 1\le j\le k\},$$ $$\text{~and~}E:=\{U_i\}_{i=1}^{k-1}\cup \{\{v_0\}\cup e_0\} \text{~where~} e_0=\{u_{i1}:1\le i\le k-1\}.$$ We consider $\{v_0\}\cup e_0$ as the central hyperedge and all other hyperedges of the squid as peripheral hyperedges (see \cite{MR3116407} for more details about squids). Since $e\cap (U_{i}\setminus\{u_{i1}\})=\emptyset$ for all $e(\ne U_i)\in E$, using \Cref{cute2} and \Cref{adj_cute_1} we have the following result. \begin{prop} Suppose that $G(V,E)$ is a $k$-uniform squid. For any peripheral hyperedge $U_i$, if $\delta_V(v)=c_i$ for all $v\in U_i$, then the eigenvalues of the general adjacency and Laplacian matrices of the squid are given below. \begin{enumerate} \item $\frac{\delta_E(U_i)}{c_i}\frac{1}{k}$ is an eigenvalue of the Laplacian operator associated with $G$ with multiplicity at least $k-2$. \item $-\frac{\delta_E(U_i)}{c_ik^2}$ is an eigenvalue of the general adjacency matrix with multiplicity at least $k-2$. \end{enumerate} \end{prop} \subsection{The network of disease propagation} Multi-body interactions are crucial in the study of disease propagation.
In the past few years, the use of hypergraphs has made disease-propagation models more realistic; see \cite{higham2021epidemics}. Here the vertices of the hypergraph $G(V,E)$ represent the individuals, and the hyperedges are the collections of individuals who are known to interact as a group. We summarize below the applicability of our work in this context. \begin{enumerate} \item If we set $\delta_E(e)=\beta |e|^2$ and $\delta_V(v)=1$ then, according to the general infection model provided in \cite[p.6, Section-3.2.]{higham2021epidemics}, a susceptible node $v$ becomes infectious with the rate $(A_G(\bar{f}(x_t)))(v)$. Here, $x:V\times T \to \mathbb{R}^+$ is a function, where $T$ is the domain of time, and for any $(v,t)\in V\times T$ the value $x(v,t)$ is denoted by $x_t(v)$. That is, $x_t\in{\mathbb{R}^+}^V$ is defined as $x_t(v):=x(v,t)$. In addition, the function $f:\mathbb{R}^+\to \mathbb{R}^+$ regulates the overall infectiousness of the disease and $\bar{f}:{\mathbb{R}^+}^V\to {\mathbb{R}^+}^V$ is defined as $\bar{f}(x)=\{f(x(v))\}_{v\in V}$. A similar infection rate is also reported in \cite{MR3494570}. Later, in the partitioned hypergraph model \cite[p.6, Section-3.3.]{higham2021epidemics}, the hypergraph $G(V,E)$ is partitioned into $K$ disjoint hypergraphs $\{G_i(V_i,E_i)\}_{i=1}^K$. According to this model, the infection rate of the node $v$ at time $t$ is $\sum\limits_{i=1}^K(A_{G_i}(\bar{f_i}(x_t)))(v)$, where the function $f_i:\mathbb{R}^+\to \mathbb{R}^+$ regulates the overall infectiousness of the disease in the $i$-th partition. \item To study random infection rates, the mean field approximation is considered in \cite[p.6, Section-4.]{higham2021epidemics}. According to that approach, the infection rate of a node $v$ at time $t$ is $(A_G(\bar{f}(P_t)))(v)$, where $p_t(v)$ is the probability that the node $v$ is infected at time $t$ and $P_t:=\{p_t(v)\}_{v\in V}$. \end{enumerate} \subsection{Dynamical network} A \textit{dynamical network} is a network of evolving \textit{dynamical systems}. More precisely, a dynamical system is a system in which a function describes the evolution of a point in a geometric space with the flow of time. In a dynamical network, several dynamical systems are coupled through an underlying network in such a way that two neighbouring dynamical systems influence the dynamics of each other. The underlying network may be a graph \cite{MR3730470} or a hypergraph \cite{banerjee2020synchronization,MR4121260,carletti2020dynamical}. To discuss coupled dynamics on hypergraphs, the adjacency operator $A_G$ is used in \cite[equation-(24),(27)]{MR4121260} with $\delta_E(e)=|e|^2$ and $\delta_V(v)=1$. In \cite{banerjee2020synchronization}, the diffusion operator $L_G$ is used with $\delta_E(e)=w(e)\frac{|e|^2}{|e|-1}$ in order to discuss synchronization in dynamical networks on hypergraphs. In \cite[Equation-3]{carletti2020dynamical}, a variant of the general Laplacian operator of a hypergraph, $\mathfrak{L}$, is used in the model of dynamical systems on hypergraphs with $\delta_E(e)=(|e|-1)|e|^2$ and $\delta_V(v)=1$. Considering the use of different variants of the diffusion operator $L_G$ in distinct dynamical networks with hypergraph topology, we define a general discrete dynamical network model as \begin{equation} x_{n+1}=f(x_{n})+\epsilon (L_G(g(x_n))), \end{equation} where, for any discrete time $n\in \mathbb{N}$, $x_n\in \mathbb{R}^V$ is a function such that $x_n(v)$ is the state of the node $v$ at time $n$.
\subsection{Dynamical network} A \textit{dynamical network} is a network of evolving \textit{dynamical systems}. More precisely, a dynamical system is a system in which a function describes the evolution of a point in a geometric space with the flow of time. In a dynamical network, several dynamical systems are coupled through an underlying network in such a way that two neighbouring dynamical systems influence the dynamics of each other. The underlying network may be a graph \cite{MR3730470} or a hypergraph \cite{banerjee2020synchronization,MR4121260,carletti2020dynamical}. To discuss coupled dynamics on hypergraphs, the adjacency operator $A_G$ is used in \cite[equations~(24),(27)]{MR4121260} with $\delta_E(e)=|e|^2 $ and $\delta_V(v)=1$. In \cite{banerjee2020synchronization}, the diffusion operator $L_G$ is used with $\delta_E(e)=w(e)\frac{|e|^2}{|e|-1} $ in order to discuss synchronization in dynamical networks on hypergraphs. In \cite[Equation~3]{carletti2020dynamical}, one variant of the general Laplacian operator of a hypergraph, $\mathfrak{L}$, is used in the model of dynamical systems on hypergraphs with $\delta_E(e)=(|e|-1)|e|^2$ and $\delta_V(v)=1$. Considering the use of different variants of the diffusion operator $L_G$ in distinct dynamical networks with hypergraph topology, we define a general discrete dynamical network model as \begin{equation} x_{n+1}=f(x_{n})+\epsilon (L_G(g(x_n))), \end{equation} where for any discrete time $n\in \mathbb{N}$, $x_n\in \mathbb{R}^V$ is a function such that $x_n(v)$ is the state of the node $v$ at time $n$. Both $f:\mathbb{R}^V\to \mathbb{R}^V$ and $g:\mathbb{R}^V\to \mathbb{R}^V$ are differentiable functions regulating the dynamics of all the nodes. The positive real $\epsilon$ is the coupling strength. Similarly, the continuous model can be defined as \begin{equation} \dot{x}_t=f(x_t)+\epsilon (L_G(g(x_t))), \end{equation} where $x_t\in \mathbb{R}^V$ is such that $x_t(v)$ is the state of the node $v$ at time $t$ and $\dot{x}_t\in \mathbb{R}^V$ is defined by $\dot{x}_t(v)=\frac{dx_t(v)}{dt} $.

\subsection{Random walk on hypergraphs} A random walk is a sequence of randomly taken successive steps by a walker in a mathematical space. If the mathematical space is the set of all the vertices $V$ of a hypergraph $G(V,E)$, then the random walk is referred to as a random walk on the hypergraph. Thus, a random walk on a hypergraph $G(V,E)$ is a sequence of vertices $v_1,v_2,\ldots, v_k$ such that $v_i $ is the $i $-th step of the random walk. The whole theory pivots around the \textit{transition probability}, $(P_G)_{uv}=\mathrm{prob}(v_{i+1}=v|v_i=u)$, which is independent of $i$ and depends on the underlying hypergraph. Since $\bigcup\limits_{v\in V}\{(v_{i+1}=v|v_i=u)\} $ is a certain event, $\sum\limits_{v\in V}(P_G)_{uv}=1$ for all $u\in V$. We define $(P_G)_{uv}$ as $$(P_G)_{uv}= \begin{cases} \frac{1}{r(u)}\sum\limits_{e\in E_u\cap E_v}\frac{\delta_E(e)}{\delta_V(u)}\frac{1}{|e|^2} & \text{~if~} u\neq v,\\ 0& \text{~otherwise.~} \end{cases} $$ We summarise below some crucial observations. \begin{itemize} \item[(1)] Since $ \sum\limits_{v\in V}\sum\limits_{e\in E_u\cap E_v}\frac{\delta_E(e)}{\delta_V(u)}\frac{1}{|e|^2}=\sum\limits_{e\in E_u}\frac{\delta_E(e)}{\delta_V(u)}\frac{|e|-1}{|e|^2}=r(u)$, we have $\sum\limits_{v\in V}(P_G)_{uv}=1$. \item[(2)] Suppose that there exists no isolated vertex in $G$, i.e., $E_v\neq \emptyset$ for all $v\in V$. So, $r(v)\neq 0$ for all $v\in V$, and this allows us to define the inner product $(\cdot ,\cdot)_R$ on $\mathbb{R}^V$ as $(x,y)_R:=\sum\limits_{u\in V}r(u)\delta_V(u)x(u)y(u)$. If $\Delta=\mathcal{I}-P_G$, where $\mathcal{I}:\mathbb{R}^V\to \mathbb{R}^V$ is the identity operator on $\mathbb{R}^V$, then $0$ is an eigenvalue of $\Delta$ with eigenvector $\mathbf{1}$. Moreover, $(\Delta x, x)_R=\sum\limits_{\{u,v\}\subset V}\sum\limits_{e\in E_u\cap E_v}\frac{\delta_E(e)}{|e|^2}(x(u)-x(v))^2\le 2(x,x)_R$. Therefore, $\Delta$ is a positive-semidefinite operator and all the eigenvalues of $\Delta$ lie in $[0,2) $. Thus, the absolute values of all the eigenvalues of $P_G$ lie in $[0,1]$. Moreover, if the hypergraph $G$ is connected, then except for the eigenvalue $1$ corresponding to the eigenvector $\mathbf{1}$, the absolute values of all other eigenvalues of $P_G$ lie in $(0,1)$. \item[(3)] Note that $ (\Delta x, y)_R=\sum\limits_{\{u,v\}\subset V}\sum\limits_{e\in E_u\cap E_v}\frac{\delta_E(e)}{|e|^2}(x(u)-x(v))(y(u)-y(v))=(\Delta y,x)_R=(x,\Delta y)_R$. Thus, $\Delta $ is self-adjoint, and therefore $P_G$ is also self-adjoint. \item[(4)] Suppose that $\{x_n\}_{n\in\mathbb{N}}$ is a sequence in $\mathbb{R}^V$ such that $x_{n+1}=P_G(x_n)$ and the underlying hypergraph is connected. Evidently, $x_{n+1}=P_G^n(x_1)$. Since, except for the eigenvalue $1$ corresponding to the eigenvector $\mathbf{1}$, the absolute values of all the eigenvalues of $P_G$ lie in $(0,1)$, by spectral decomposition $\lim\limits_{n\to\infty}x_n$ is the projection of the initial state $x_1$ along the vector $\mathbf{1}$. Therefore, $\lim\limits_{n\to\infty}x_n=\frac{(x_1,\mathbf{1})_R}{(\mathbf{1},\mathbf{1})_R}\mathbf{1}$. Note that the properties of the general normalized Laplacian operator $\Tilde{\mathfrak{L}}$ suggest that we can replace $\Delta$ by $\Tilde{\mathfrak{L}}$. \end{itemize}
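The following sketch assembles $P_G$ directly from the definition above for arbitrary positive weights and verifies observation~(1), as well as the spectral bounds, numerically; the hypergraph and the constant weights are illustrative.

\begin{verbatim}
import numpy as np

def transition_matrix(vertices, hyperedges, delta_E, delta_V):
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    r = np.zeros(n)  # r(u) = sum_e delta_E(e)(|e|-1) / (delta_V(u)|e|^2)
    P = np.zeros((n, n))
    for e in hyperedges:
        w = delta_E(e) / len(e) ** 2
        for u in e:
            r[idx[u]] += w * (len(e) - 1) / delta_V(u)
    for e in hyperedges:
        w = delta_E(e) / len(e) ** 2
        for u in e:
            for v in e:
                if u != v:
                    P[idx[u], idx[v]] += w / (delta_V(u) * r[idx[u]])
    return P

verts = ["a", "b", "c", "d"]
edges = [{"a", "b", "c"}, {"b", "c", "d"}]
P = transition_matrix(verts, edges, lambda e: 1.0, lambda v: 1.0)
print(P.sum(axis=1))                          # every row sums to 1
print(np.sort(np.abs(np.linalg.eigvals(P))))  # absolute values lie in [0, 1]
\end{verbatim}

Iterating $x\mapsto P_G x$ on this connected example converges to the projection of the initial state along $\mathbf{1}$ described in observation~(4).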
We end this article with the following remark. \begin{rem} Since $\delta_V \in {\mathbb R^+}^V$ and $\delta_E \in {\mathbb R^+}^E$, there exist uncountably many choices for $\delta_E$ and $\delta_V$. Each choice gives us a framework for the operators associated with a hypergraph. Although some results (see \Cref{cute-new}, \Cref{cute1}, \Cref{adj_cute_2}, \Cref{cute3}) impose conditions on $\delta_V$ that leave very few choices for it, very few conditions are imposed on $\delta_E$, so one still has uncountably many choices for $\delta_E$. Therefore, our results are valid for an uncountable number of frameworks of operators. Two of these frameworks are common in the literature and are considered in \cite{MR4208993} and \cite{rodriguez2003Laplacian,rodriguez2009Laplacian,bretto2013hypergraph}. \end{rem} \section*{Acknowledgement} The work of the author PARUI is supported by the University Grants Commission, India (Beneficiary Code/Flag: BININ00965055 A). PARUI is sincerely thankful to Rajiv Mishra and Gargi Ghosh for fruitful discussions. \bibliographystyle{siam}
\section{Introduction} More than a decade ago the observations of Type Ia Supernovae (SNe~Ia) led to the discovery of the accelerating expansion of the Universe and the need for an unknown repulsive force to drive it \citep{1998AJ....116.1009R,p99}. Understanding the nature of this force -- now dubbed ``dark energy'' -- is an outstanding goal of astrophysics and cosmology. It is now well-understood that no single observational technique will be able to achieve this alone \citep[e.g.][]{detf}. The combined constraints of many techniques will be needed to measure the equation-of-state parameter of the dark energy (the ratio between the pressure and the density) and eventually its evolution over cosmic time. At present, SNe~Ia are the most mature and well-understood technique to accurately trace the cosmic expansion history and will continue to play an essential role in future cosmological experiments \citep{detf}. However, the SNe~Ia technique is affected by several systematic uncertainties, which need to be reduced to a level below $\sim2$\% to differentiate between different dark energy models. The use of SNe~Ia in cosmology relies on the \emph{empirically} established tight relation between their light curve width and peak luminosity, which allows one to measure the luminosity distance with an accuracy of $\sim7$\% \citep{phil99}, and on the \emph{assumption} that the (standardized) peak luminosity of SNe Ia does not change over cosmic time. There is now observational evidence that the slope of the ``light curve shape - peak luminosity'' relation does not depend on the redshift or the host galaxy mass \citep{2011ApJS..192....1C,2011ApJ...737..102S}. However, \citet{2010MNRAS.406..782S} and \citet{2010ApJ...715..743K} have found that the offsets of the SN~Ia peak magnitudes from the best-fitting Hubble line (from now on Hubble residuals or HR) correlate with the host stellar mass. Together with the mass-metallicity relation for galaxies \citep[e.g.,][]{2004ApJ...613..898T} and the overall increase of the metal content of the universe with cosmic time, this may be an indication of possible luminosity evolution of SNe Ia at a level of 0.05-0.10 mag. It is now generally agreed that SNe~Ia are the result of thermonuclear disruption of carbon/oxygen (C/O) white dwarfs (WD), which ignite explosively when they approach the Chandrasekhar limit $M_\mathrm{Ch}\sim1.38M_{\sun}$ \citep{1960ApJ...132..565H}. However, there has been little observational evidence of the exact evolutionary scenario that leads to the explosion. C/O WDs are the end product of the evolution of stars with masses $\sim1.5-7M_{\sun}$ \citep[e.g., see][]{1980ApJ...237..111B,1999ApJ...524..226D}. The upper mass limit for C/O WDs is $\sim1.1M_{\sun}$ \citep[e.g., see][]{1987A&A...188...74W,1999ApJ...524..226D,2009ApJ...692.1013S} and therefore a mechanism that allows the WD to gain additional mass of at least $\sim0.3-0.4M_{\sun}$ is needed. In the single degenerate (SD) scenario the WD accretes mass from a non-degenerate companion star in a binary system \citep{1973ApJ...186.1007W}. However, the exact physical mechanism of the WD mass growth has not yet been identified. In the double-degenerate (DD) scenario two C/O WDs in a binary merge after losing orbital angular momentum by gravitational wave radiation \citep{1984ApJ...284..719I,1984ApJ...277..355W,1985ASSL..113....1P}. While considered the most viable, both scenarios have considerable uncertainties \citep[see, e.g.][]{hille00,2011arXiv1111.4492M}.
The numerical simulations of thermonuclear SN~Ia explosions suggest that the properties of the exploding WD may significantly influence the peak luminosity, the ``light curve width - luminosity'' relation and colors of the resulting supernovae \citep[e.g.,][]{hof98,1999ApJ...522L..43U,2001ApJ...557..279D,2006A&A...453..203R,2009Natur.460..869K,2010ApJ...711L..66B}. On the other hand, the properties of the WD just before the ignition (the central density, metallicity and C/O ratio) are sensitive to the properties and the evolution of its progenitor binary star, and to the subsequent WD mass growth mechanism. For example, the SD channel can produce $M_\mathrm{Ch}$ WDs with slightly different structure and chemical composition depending on the mass of the WD at the moment when the accretion started \citep[e.g.,][]{2001ApJ...557..279D}. In the DD scenario, the outcome of the merger may depend on the mass ratio of the two WDs. In addition, one may expect that some properties of SN~Ia progenitor stars will evolve with cosmic time, e.g. metallicity. Therefore, possible evolution of the properties of SN~Ia progenitors or, if more than one evolutionary channel exists, evolution of their relative contribution to the SNe~Ia population, may introduce systematic uncertainties in SN~Ia cosmology and potentially bias the cosmological results from the future large SN surveys \citep[e.g., see][]{2008ApJ...684L..13S,2008JCAP...02..008N}. To date no progenitor of a SN~Ia has been unambiguously identified and/or observed, and information about the SNe~Ia progenitors has been inferred indirectly. \cite{2005A&A...433..807M} and \cite{2005ApJ...629L..85S} studied the SN~Ia rate as a function of redshift and host galaxy properties. Both studies found that the SN~Ia rate depends on both the on-going star formation rate (SFR) and the total galaxy stellar mass. This result was also confirmed by others \citep{2006ApJ...648..868S,2006AJ....132.1126N,2006MNRAS.370..773M,pritchet08s,dahlen08,2011MNRAS.412.1508M} and appears to suggest that at least a fraction of SNe~Ia is associated with the young stellar population, capable of producing SNe~Ia with short delay times of $\leq400$ Myr. \cite{2008PASJ...60.1327T} and \cite{2010ApJ...722.1879M} have shown that the delay times from star formation to SN~Ia explosions between the shortest time probed ($<400$ Myr) and 10 Gyr are distributed as a power law with slope $\sim-1$. This delay time distribution (DTD) strongly favors the DD scenario. The SD scenario may also explain this DTD \citep{2008ApJ...683L.127H} but the efficiency of the symbiotic channel (WD+red giant) needs to be significantly increased \citep[see, e.g.,][]{2011arXiv1111.4492M}. The early discovery of \object{SN~2011fe} in \object{M101}, the nearest SN Ia in 25 years, provided the first real possibility to constrain the properties of a progenitor star of an SN Ia \citep{2012ApJ...744L..17B,2012ApJ...750..164C,2011Natur.480..344N}. The results reinforce the conclusion that the exploding star is a C/O WD and seem to rule out all but a degenerate star as its companion, thus favoring the DD scenario. On the other hand, based on high-resolution spectroscopy of a sample of nearby SNe Ia, \cite{2011Sci...333..856S} favor the SD scenario.
Many studies have shown that the intrinsically luminous SNe tend to occur in star-forming hosts, while the faint SNe prefer passive ones \citep[e.g.,][]{1996AJ....112.2391H,2000AJ....120.1479H, 2005ApJ...634..210G,2008ApJ...685..752G,2009ApJ...691..661H, 2009ApJ...707.1449N,2006ApJ...648..868S,2010MNRAS.406..782S,2010ApJ...715..743K,2010AJ....140..804B,2009ApJ...707...74R}. \citet{2008ApJ...685..752G} found that the Hubble residuals correlate with the global host metallicity. \citet{2010MNRAS.406..782S} and \citet{2010ApJ...715..743K} found such a correlation with the host stellar mass. However, \citet{2009ApJ...691..661H}, who used the galaxy stellar mass as a proxy for the metallicity, found no such correlation and suggested that instead the progenitor age may be a more important parameter. All studies of SN Ia host galaxies conducted so far, except that by \citet{2009ApJ...707...74R}, were based on an analysis of the global photometric or spectroscopic properties of the host galaxies. In this paper we take a different approach. We use for the first time integral field unit (IFU) spectroscopy at intermediate spectral resolution to study a sample of host galaxies of local SNe~Ia ($z\sim0.02$). This approach has an advantage over the previous studies because it allows us to derive spatially-resolved two-dimensional (2D) maps of host galaxy properties, e.g. the heavy element abundance in the interstellar medium (ISM). The intermediate spectral resolution also makes it possible to use full-spectrum fitting techniques to derive 2D maps of the properties of the stellar populations. The main objective of this pilot work is to test the methodology to correlate the properties of the SNe with the properties of the gas and the stellar populations \emph{at the location of the SN explosion}, in addition to the global host properties. By analyzing the properties of the stellar populations we also aim to constrain the nature of the SNe~Ia progenitors. Throughout the paper we assume the concordance cosmological model with $\Omega_M=0.27$, $\Omega_\Lambda=0.73$, $w=-1$ and $h=0.71$. \begin{table*}[!t] \caption{Supernovae and details of their host galaxies: morphological type, Milky Way dust reddening, offsets from the host nucleus, de-projected galactocentric distance, inclination, and position angle.} \label{t:hosts} \begin{tabular}{@{}lllccccccc@{}} \hline \hline\noalign{\smallskip} SN & Host & Type\tablefootmark{a} & z\tablefootmark{b} & $E(B-V)_{MW}$\tablefootmark{c} & RA offset & DEC offset & DGD\tablefootmark{b} & $i$\tablefootmark{b} & PA\tablefootmark{b} \\ & & & & & [arcsec] & [arcsec]& [kpc]& [deg] & [deg] \\ \hline\noalign{\smallskip} 1999dq & NGC 976 & SAb & 0.0144 & 0.110 & $-$4.0 & $-$6.0 & 2.4 & 36.5 & 77.6 \\ 1999ej & NGC 495 & SB(s)0/a & 0.0137 & 0.072 & +18.0 & $-$20.0 & 7.7 & 44.0 & 47.6 \\ 2001fe & UGC 5129 & SBbc & 0.0134 & 0.022 & $-$13.5 & $-$0.1 & 3.9 & 46.0 & 14.1 \\ 2006te & CGCG 207-042 & SB(r)bc & 0.0315 & 0.046 & $-$5.5 & $-$1.7 & 4.9 & 39.4 & 69.9 \\ 2007A & NGC 105 NED02& SBc & 0.0173 & 0.073 & $-$1.2 & +10.1 & 3.6 & 36.4 & 77.3 \\ 1997cw & NGC 105 NED02& SBc & 0.0173 & 0.073 & +8.0 & +4.0 & 4.0 & 36.4 & 77.3 \\ 2007R & UGC 4008 NED01 & SAa & 0.0308 & 0.047 & $-$1.9 & $-$3.9 & 3.4 & 48.1 & 76.4 \\ \hline \end{tabular}\\ \tablefoottext{a}{based on SDSS pseudo-color images.} \tablefoottext{b}{this work.
Derived from analysis of the H$\alpha$ velocity field, except for the host of \object{SN 1999ej}, for which the stellar velocity field was used.} \tablefoottext{c}{from \cite{ebv}.} \end{table*} \section{Observations and data reduction} \subsection{Target selection} The list of targets for this program was selected from a sample of spiral galaxies that hosted SNe~Ia for which the important parameters such as luminosity, extinction, intrinsic color indices, luminosity decline rate, and deviation from the Hubble diagram have been accurately measured. The galaxies were carefully examined and selected to fulfill four additional criteria: \begin{enumerate} \item to have angular size $\simeq$40-60 arcsec; \item to be nearly face-on; \item the SN lies on a high surface brightness location in the galaxy; \item be observable at airmass less than 1.3 to minimize the effect of the differential atmospheric refraction. \end{enumerate} The first two requirements maximize the use of the large field-of-view (FOV) of the IFU instrument and minimize the projection effects when correlating the SN and its local host galaxy properties. The third requirement ensures that we will obtain a good signal-to-noise ratio (S/N) of the spectra at the location of the SN, and in particular will allow us to access the absorption and emission line spectra. From the large sample of galaxies six objects were observed. The galaxy details and the SNe offsets from the nucleus are given in Table\,\ref{t:hosts}. All SNe are normal SNe~Ia, except for \object{SN~1999dq}, which was classified as a peculiar 1991T-like event \citep{1999IAUC.7250....1J}. Table\,\ref{t:snprop} gives the SALT2 $x_1$ and $C$ parameters of the supernova light curves \citep{2007A&A...466...11G} taken from \cite{2010ApJ...716..712A}, the offset from the best-fit Hubble line $\Delta\mu$, and the $\Delta M_{15}$ parameter, which shows how much the SN $B$-band magnitude has declined during the first 15 days after the time of the $B$-band maximum. $x_1$ and $C$ are parameters related to the SN light curve shape and $B-V$ color index at maximum, respectively. They are used to standardize the observed $B$-band peak magnitude $B_{\rm obs}$ via the relation $B_{\rm std}=B_{\rm obs}+\alpha x_1 - \beta C$, with $\alpha=0.121$ and $\beta=2.51$ as per \cite{2010ApJ...716..712A}. $\Delta\mu$ was computed after first correcting the redshifts of the SNe for large-scale coherent galaxy motions in the local universe based on the models of \cite{2004MNRAS.352...61H}. The accuracy of this correction is estimated to be $\sim150$\,km\,s$^{-1}$ and a random peculiar velocity of 150\,km\,s$^{-1}$ is added to the uncertainty of $\Delta\mu$.
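As a concrete illustration of this standardization (the $x_1$ and $C$ values are from Table~\ref{t:snprop}; the observed peak magnitude below is an assumed, illustrative number, since $B_{\rm obs}$ is not listed in this paper):

\begin{verbatim}
# B_std = B_obs + alpha*x1 - beta*C, with alpha = 0.121, beta = 2.51
alpha, beta = 0.121, 2.51

def standardize(B_obs, x1, C):
    return B_obs + alpha * x1 - beta * C

# SN 1999dq: x1 = 0.89, C = 0.13; assumed B_obs = 15.0 mag
print(standardize(15.0, 0.89, 0.13))  # 15.0 + 0.108 - 0.326 = 14.78
\end{verbatim}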
It should be noted that for this pilot project the selection criteria are solely optimized to maximize the quality of the observations and facilitate the analysis. We focus on late-type galaxies because one of our goals is to correlate the SN properties with the properties of the ISM determined from the ionized gas. This leads to strong biases, however, e.g. the galaxies in our sample are bright, massive, and likely metal-rich. \subsection{Observations} The six galaxies were observed on November 14 and 15, 2009 at the 3.5m telescope of the Calar Alto observatory using the Potsdam Multi-Aperture Spectrograph \citep[PMAS,][]{2005PASP..117..620R} in the PPAK mode \citep{2004AN....325..151V,2006PASP..118..129K}. The atmospheric conditions were variable with occasional thin clouds interrupting the observations. The seeing varied between 1.5\arcsec and 2.2\arcsec. The PMAS instrument is equipped with a 4k$\times$4k E2V\#231 CCD. We used a set-up with the 600 lines\,mm$^{-1}$ grating V600 and 2$\times$2 binned CCD, which provided a wavelength range of $\sim$3700-7000\AA\ with a spectral resolution of $\sim5.5$\AA. The PPAK fiber bundle of PMAS consists of 382 fibers with 2.7\arcsec diameter each, 331 of which (science fibers) are ordered in a single hexagonal bundle that covers a FOV of 72\arcsec$\times$64\arcsec. Thirty-six additional fibers form six mini-bundles (sky-bundles), which are evenly distributed along a circle of $\sim90$\arcsec radius and face the edges of the central hexagon \citep[see Fig.5 in][]{2006PASP..118..129K}. The remaining 15 fibers are used for calibration and can only be illuminated with the PMAS internal calibration unit. For a detailed description of the PPAK fiber bundle we refer the reader to \cite{2006PASP..118..129K}. Some details that are relevant for the data reduction are also given in Appendix~\ref{ap:instr}. For each object three 1800-sec long exposures were obtained. Because the filling factor of a single PPAK exposure is $\sim$65\%, we adopted a dithering pattern with the second and the third exposures offset by $\Delta$(R.A.,Decl.)=(1.56, 0.78) and (1.56, $-$0.78) arcsec with respect to the first exposure to ensure that every point within the FOV was spectroscopically sampled. Before and after the science exposures, spectra of a HgNe lamp and a continuum halogen lamp were obtained to wavelength-calibrate and trace the spectra. The spectrophotometric standard stars \object{Feige\,34} and \object{BD+25\,3941} were observed to measure the sensitivity function of the instrument. In addition, a series of exposures of blank sky regions was obtained during twilight and used to equalize the fiber-to-fiber throughput variations. \subsection{Data reduction} The pre-reduction of the CCD images was performed with IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} and the rest of the reduction with our own programs written in IDL. Each individual science pointing was reduced independently. After the standard CCD reduction steps of bias subtraction, flat-field correction and removal of cosmic ray hits, the spectra were traced, extracted, wavelength- and flux-calibrated, and finally sky-subtracted. At the final step the three pointings were combined into a final 3D data-cube, taking into account the differential atmospheric refraction. The full details of the data reduction are given in Appendix~\ref{ap:reduction}. \begin{table}[!t] \caption{SALT2 \citep{2007A&A...466...11G} $x_1$ and $C$ parameters of the SNe from \cite{2010ApJ...716..712A}, the offset from the best-fit Hubble line $\Delta\mu$, and the $\Delta M_{15}$ parameter.
The uncertainties of the parameters are given in the parentheses.} \label{t:snprop} \begin{tabular}{@{}lcccc@{}} \hline \hline\noalign{\smallskip} SN & $x_1$ & C & $\Delta\mu$ & $\Delta M_{15}$\tablefootmark{a} \\ \hline\noalign{\smallskip} 1999dq & 0.89 (0.12) & 0.13 (0.01) & $-$0.33 (0.09) & 0.96 \\ 1999ej & $-$2.08 (0.43) & 0.07 (0.05) & 0.46 (0.20) & 1.49 \\ 2001fe & 0.41 (0.18) & 0.03 (0.02) & $-$0.06 (0.09) & 1.03 \\ 2006te & $-$0.36 (0.18) & $-$0.04 (0.02) & 0.16 (0.08) & 1.15 \\ 1997cw\tablefootmark{b} & 0.79 (0.25) & 0.40 (0.03) & $-$0.08 (0.13) & 0.97 \\ 2007A & $-$0.04 (0.14) & 0.18 (0.01) & 0.15 (0.07) & 1.10 \\ 2007R & $-$1.76 (0.16) & $-$0.07 (0.02) & 0.23 (0.07) & 1.42 \\ \hline \end{tabular}\\ \tablefoottext{a}{calculated from $x_1$ with the relation given in \cite{2007A&A...466...11G};} \tablefoottext{b}{the first photometric observation was taken $\sim$15 days past maximum and the photometric parameters are rather uncertain. This SN was included in the analysis because it is in the same host as \object{SN~2007A}.} \end{table} Three of the galaxies in our sample also have SDSS spectra. This allowed us to check the \emph{relative} flux calibration of our spectroscopy. Spectra within an aperture of 3\arcsec\ diameter centered on the galaxy nucleus were extracted from the data-cubes to emulate the SDSS spectra. The comparison, after our spectra were scaled to match the flux level of the SDSS spectra, is shown in Fig.\,\ref{f:sdss}. It demonstrates that the \emph{relative} flux calibration of our spectra is excellent and matches SDSS to within a few percent. The absolute flux scale of the data-cubes was set using the SDSS imaging. SDSS $g$ and $r$ magnitudes of the galaxies were computed within an aperture of 20\arcsec\ diameter. Spectra within the same aperture size were extracted from the data-cubes and synthetic $g$ and $r$ magnitudes were computed. The $g$ and $r$ scale factors that provided the match of the synthetic magnitudes to the observed ones were computed and the average of the two was applied to the data-cubes. We note that the $g$ and $r$ scale factors coincided to within 3\%, which additionally supports our conclusion that the \emph{relative} flux calibration is accurate. \section{Data analysis} \label{sec:analysis} The individual spectra in the data-cubes were analyzed to derive 2D maps of the properties of the galaxies. This included the properties of the ionized gas and the stellar populations. The properties of the galaxies at the SN position and the galaxy center were derived by interpolating the 2D maps. The galaxy centers were computed from the data-cubes and the SN positions were computed with respect to them, using the offsets quoted in the discovery IAU circulars and \cite{jha44}. As previously mentioned, one of the main goals of this study is to test the feasibility of using IFU spectroscopy to compare the properties of the host as derived from integrated spectroscopy to those derived from spatially resolved spectroscopy. For this purpose, we also analyzed for each galaxy the total spectrum formed by simply summing all spaxel spectra in the data-cube. This simulates an observation of the same galaxy with long-slit spectroscopy as is performed for high-redshift galaxies. For each galaxy the analysis was also performed on azimuthally averaged spectra at several de-projected galactocentric radii. This was performed as an alternative way to derive the radial dependence of the galaxy properties such as the metallicity.
To compute the azimuthally averaged spectra we first computed the de-projected galactocentric distance of each spaxel with the position angle and inclination (Table~\ref{t:hosts}) computed from the analysis of the H$\alpha$ velocity maps\footnote{For \object{NGC~495} the stellar velocity map was used because this galaxy shows no emission lines.} (see Sec.~\ref{sec:gasvel}). The spectra were corrected to rest-frame wavelength with the stellar velocities estimated from the fits to the absorption line spectrum (Sec.~\ref{sec:starfits}). Finally, for each galaxy the spectra within several (4 to 6) radial bins were averaged. Because the stellar velocities were used to correct the spectra to rest frame, we used only the spectra that had a sufficiently high S/N to allow fitting with {\tt STARLIGHT}.
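A minimal sketch of this de-projection is given below (not code from the paper; the position-angle convention, measured from north through east, and the physical scale per arcsecond are assumptions of the example):

\begin{verbatim}
import numpy as np

def deprojected_radius(dx, dy, pa_deg, incl_deg, kpc_per_arcsec):
    """dx, dy: offsets from the galaxy center in arcsec (east, north)."""
    pa, incl = np.radians(pa_deg), np.radians(incl_deg)
    x_maj = dx * np.sin(pa) + dy * np.cos(pa)    # along the major axis
    y_min = -dx * np.cos(pa) + dy * np.sin(pa)   # along the minor axis
    return kpc_per_arcsec * np.hypot(x_maj, y_min / np.cos(incl))

# illustrative spaxel offset for a disk with PA = 77.6 deg, i = 36.5 deg
print(deprojected_radius(10.0, 5.0, 77.6, 36.5, kpc_per_arcsec=0.29))
\end{verbatim}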
All quoted uncertainties of the derived quantities are statistical and do not include systematic and intrinsic uncertainties of the methods, which will be additionally discussed when appropriate. In the next sections we present the main steps in the data analysis. \begin{figure}[!t] \includegraphics [width=8.8cm]{compare_sdss} \caption{Comparison between SDSS spectra (red) and spectra extracted from our data-cubes within an aperture of 3\arcsec diameter centered on the galaxy nuclei (black).} \label{f:sdss} \end{figure} \subsection{Properties of the ionized gas} The presence of nebular emission lines in the galaxy spectra allows us to study the properties of the ionized gas such as its oxygen abundance and ionization state, and to derive other important properties such as the star formation rate, dust extinction, etc. Some of the methods used to derive these quantities, for example the strong line methods to estimate the gas metallicity, can only be applied if the ionization source is exclusively arising from the stellar radiation. For this reason and to search for possible AGN contamination, we used the diagnostic diagram [\ion{O}{iii}]\,$\lambda$5007/H$\beta$ \emph{vs.} [\ion{N}{ii}]\,$\lambda$6584/H$\alpha$ \citep[BPT diagram;][]{1981PASP...93....5B} (Fig.\,\ref{f:bpt}). The central spaxels that fall in the AGN area of the diagnostic diagram according to the \cite{2003MNRAS.346.1055K} criterion were excluded from the relevant parts of the analysis. \subsubsection{Emission line fluxes} Five of the six galaxies in our sample show strong nebular emission lines. The fluxes of the prominent emission lines [\ion{O}{ii}]\,$\lambda$3727, H$\beta$, [\ion{O}{iii}]\,$\lambda\lambda$4959/5007, H$\alpha$, and [\ion{N}{ii}]\,$\lambda\lambda$6549/6584 were used in the analysis. Whenever possible, the [\ion{S}{ii}]\,$\lambda\lambda$6716/6731 lines were also measured. In the spectra of galaxies the emission lines are superimposed on the underlying stellar absorption spectrum. The stellar absorption lines can bias the measurement of the emission line fluxes, an effect that is especially prominent in the H$\beta$ line (Fig.~\ref{f:contsub}). Therefore, to measure the emission line fluxes accurately, the stellar absorption spectrum needs to be subtracted first. For this we used the STARLIGHT software \citep{2005MNRAS.358..363C}. All spectra that had S/N greater than 5 at $\sim4600$\AA\ were fitted with STARLIGHT and the emission line fluxes were measured on the continuum-subtracted spectrum. For the remaining spectra the measurements were made without continuum subtraction. Each emission line was fitted with a single Gaussian plus a linear term, and the area under the Gaussian was taken as the flux estimate. Details of the adopted procedure and the Monte Carlo simulations that were performed to estimate the uncertainties of the line fluxes are given in Appendix~\ref{ap:lines}. \subsubsection{H$\alpha$ velocity field} \label{sec:gasvel} The fitted positions of the strongest of all emission lines, H$\alpha$, provide the best estimate of the gas velocity field. These fields, shown in Figs.~\ref{f:g:1999dq}-\ref{f:g:07A-07R}, were analyzed with the methods and IDL programs developed by \cite{2006MNRAS.366..787K}. The program analyzes the velocity field at several radii and for each of them returns the inclination and position angle and quantifies the degree of deviation from pure disk rotation. From this analysis we also derived the redshift, the average position angle (PA) and inclination $i$ for each galaxy, which are listed in Table~\ref{t:hosts}. \subsubsection{Extinction, H$\alpha$ flux, and star formation rate maps} For the purpose of the following analysis the measured line fluxes were corrected for dust extinction using the observed Balmer decrement $I(\mathrm{H}\alpha)/I(\mathrm{H}\beta)$ and assuming a foreground dust screen. For the intrinsic Balmer decrement $I(\mathrm{H}\alpha)/I(\mathrm{H}\beta)_{\rm intr}$ a value of 2.86 was assumed, which is appropriate for case-B recombination with electron temperature $T_{\rm e}=10000$\,K and electron density 10$^2$\,cm$^{-3}$ \citep[e.g.,][]{2006agna.book.....O}. The dust is described by the \cite{fitzpatrick99} law with $R_V$=3.1. The extinction-corrected H$\alpha$ flux was converted into instantaneous SFR using the \cite{1998ARA&A..36..189K} relation: \begin{equation} \mathrm{SFR}\,[M_{\sun}\,\mathrm{yr}^{-1}]=7.9\times 10^{-42}\,L(\mathrm{H}\alpha), \end{equation} where $L$(H$\alpha$) is the H$\alpha$ luminosity in units of erg\,s$^{-1}$.
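The two steps above can be condensed into a short, hedged sketch; the \cite{fitzpatrick99} curve values $k(\mathrm{H}\alpha)\approx2.53$ and $k(\mathrm{H}\beta)\approx3.61$ for $R_V=3.1$ are approximate literature numbers, and the fluxes and distance below are illustrative.

\begin{verbatim}
import numpy as np

K_HA, K_HB, R_V = 2.53, 3.61, 3.1   # approximate Fitzpatrick (1999) values

def balmer_ebv(f_ha, f_hb, intrinsic=2.86):
    """E(B-V) from the observed Balmer decrement, foreground screen."""
    return 2.5 / (K_HB - K_HA) * np.log10((f_ha / f_hb) / intrinsic)

def sfr_from_halpha(f_ha, f_hb, dist_cm):
    ebv = max(balmer_ebv(f_ha, f_hb), 0.0)
    f_corr = f_ha * 10 ** (0.4 * K_HA * ebv)    # de-reddened H-alpha flux
    L_ha = 4.0 * np.pi * dist_cm ** 2 * f_corr  # erg/s
    return 7.9e-42 * L_ha, R_V * ebv            # SFR [Msun/yr], A_V [mag]

# illustrative fluxes in erg/s/cm^2 at a distance of ~60 Mpc
sfr, av = sfr_from_halpha(3e-13, 0.8e-13, 60.0 * 3.086e24)
print(f"SFR = {sfr:.2f} Msun/yr, A_V = {av:.2f} mag")
\end{verbatim}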
\subsubsection{ISM oxygen abundance} The most accurate method to measure the ISM abundances -- the so-called \emph{direct} method -- involves determining the ionized gas electron temperature, $T_\mathrm{e}$, which is usually estimated from the flux ratios of auroral to nebular emission lines, e.g. [\ion{O}{iii}]\,$\lambda\lambda$4959/5007/[\ion{O}{iii}]\,$\lambda$4363 \citep[e.g.][]{2006A&A...454L.127S,2006A&A...448..955I}. However, the temperature-sensitive lines such as [\ion{O}{iii}]\,$\lambda$4363 are very weak and difficult to measure, especially in metal-rich environments. A careful examination of our data-cubes revealed that the [\ion{O}{iii}]\,$\lambda$4363 line was not present. For this reason we used other strong emission line methods to determine the gas oxygen abundance. Many such methods have been developed throughout the years, the most commonly used being the R$_{23}=$([\ion{O}{ii}]\,$\lambda$3727+[\ion{O}{iii}]\, $\lambda\lambda$4959/5007)/H$\beta$ ratio-based methods \citep{1979MNRAS.189...95P,1991ApJ...380..140M,1994ApJ...420...87Z, 2004ApJ...613..898T,2002ApJS..142...35K, 2004ApJ...617..240K,2001A&A...374..412P,2005ApJ...631..231P}, N2=log[[\ion{N}{ii}]\,$\lambda$6584/H$\alpha$] \citep{1994ApJ...429..572S,2004MNRAS.348L..59P} and O3N2=log[([\ion{O}{iii}]\,$\lambda$5007/H$\beta$)/([\ion{N}{ii}]\,$\lambda$6584/H$\alpha$)] \citep{1979A&A....78..200A,2004MNRAS.348L..59P}. More recently, \cite{2006ApJ...652..257L,2007A&A...473..411L} and \cite{2007A&A...462..535Y} have verified and re-calibrated these and other strong-line methods using Sloan Digital Sky Survey (SDSS) spectroscopy. Unfortunately, there are large systematic differences between the methods, which translate into a considerable uncertainty in the absolute metallicity scale \cite[for a recent review see, e.g.,][]{2008ApJ...681.1183K}. In particular, there is a $\sim0.4$ dex difference between the so-called \emph{empirical} and \emph{theoretical} strong-line methods. The \emph{empirical} methods are calibrated against \ion{H}{II} regions and galaxies whose metallicities have been previously determined by the \emph{direct} method, e.g. O3N2 and N2 \citep{2004MNRAS.348L..59P}, R$_{23}-P$ \citep{2005ApJ...631..231P}. The \emph{theoretical} methods, on the other hand, are calibrated by matching the observed line fluxes with those predicted by theoretical photoionization models \cite[most of the R$_{23}$-based methods, e.g.,][]{1991ApJ...380..140M,2004ApJ...617..240K,2004ApJ...613..898T,2002ApJS..142...35K}. The cause of these discrepancies is still not well-understood. Recently \cite{2010ApJS..190..233M} discussed this problem and concluded that the \emph{empirical} methods may underestimate the metallicity by a few tenths of dex \citep[see also][]{2007RMxAC..29...72P}, while the \emph{theoretical} methods overestimate it. In this situation, we followed the recommendation of \cite{2008ApJ...681.1183K} to use one method to compute the metallicities in all galaxies and discuss the results in a \emph{relative} sense, and use another method to confirm the observed trends. As our primary method we used the \emph{empirical} O3N2 method of \cite{2004MNRAS.348L..59P} (PP04 from now on) and checked the results with the \emph{theoretical} R$_{23}$ method of \cite{2004ApJ...617..240K} (KK04 from now on). Both methods have advantages and disadvantages, which have been discussed in several papers \citep[e.g.,][]{2008ApJ...681.1183K,2007A&A...462..535Y}.
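For reference, a sketch of our primary estimator, the O3N2 calibration of PP04, $12+\log(\mathrm{O/H})=8.73-0.32\times\mathrm{O3N2}$ (a literature relation, valid roughly for $-1<\mathrm{O3N2}<1.9$); the line fluxes below are illustrative:

\begin{verbatim}
import numpy as np

def oh_pp04_o3n2(f_oiii5007, f_hbeta, f_nii6584, f_halpha):
    o3n2 = np.log10((f_oiii5007 / f_hbeta) / (f_nii6584 / f_halpha))
    return 8.73 - 0.32 * o3n2

# line ratios of the order found at the SN positions in this sample
print(oh_pp04_o3n2(f_oiii5007=0.3, f_hbeta=1.0,
                   f_nii6584=0.5, f_halpha=1.0))   # ~8.8
\end{verbatim}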
\subsection{Stellar populations} The star formation history and chemical evolution of a galaxy are imprinted in the properties of its present-day stellar populations. Determining the properties of the stellar populations in galaxies has been a major research topic in astrophysics and through the years many different methods have been used, ranging from analysis of the color-magnitude diagrams \citep[CMD,][]{1972A&A....20..361F} to equivalent widths of absorption lines \citep[e.g., the Lick indices,][]{1994ApJS...94..687W}. However, in most galaxies several stellar populations are simultaneously present. Disentangling their contribution to the galaxy spectrum is a very difficult task because of various astrophysical and numerical degeneracies. \subsubsection{Full-spectrum fitting technique} Recently, the so-called \emph{evolutionary population synthesis methods} \citep{1968ApJ...151..547T,1996ApJS..106..307V,2003MNRAS.344.1000B} coupled with full-spectrum fitting techniques \citep[e.g,][]{1999ApJ...525..144V,2001MNRAS.327..849R,2005MNRAS.358..363C,2008MNRAS.385.1998K,2009MNRAS.395...28M} have emerged as powerful means to analyze galaxy spectra. The evolutionary population synthesis methods produce synthetic galaxy spectra using as input theoretical evolutionary tracks, libraries of stellar spectra, initial mass function (IMF), and prescriptions for star formation and chemical evolution. The models are then compared to the observed spectra to infer the properties of the stellar populations that contribute to the formation of the observed spectrum. One possible approach is to fit the observed spectrum with a linear combination of model spectra of single stellar populations (SSP) of different ages and metallicities \citep[e.g.,][]{2005MNRAS.358..363C,2008MNRAS.385.1998K,2009MNRAS.395...28M}. The fitting returns the contribution of the different SSPs (called the population vector) that best describes the observed spectrum, which then can be used to study the stellar populations of the galaxy. However, because of astrophysical and numerical degeneracies, and the presence of noise in the observed spectra, it is well-known that the solution may not be unique and the results should be interpreted with caution \citep[e.g, see the discussion in][]{2005MNRAS.358..363C}. The best known is the age-metallicity degeneracy\footnote{Dust reddening also adds to this problem, partly because the dust extinction law may be different in different galaxies; galaxies in the Local Group are a good example for this.} where young metal-rich stellar populations are confused with older metal-poor ones \citep[see for example Fig. 10 in][]{2003MNRAS.344.1000B}. As noted by \cite{2003MNRAS.344.1000B}, while the shape of the stellar continuum is roughly the same, the strength of the metal lines increases. Therefore, analyzing well-calibrated spectra with high S/N and spectral resolution to resolve the absorption lines has the potential to break the age-metallicity degeneracy. In addition, uncertainties in the input ingredients needed for computing the SSPs, such as non-uniform coverage of the age/metallicity parameter space of the stellar libraries, the IMF and the difficulties in describing some phases of the stellar evolution (e.g., the thermal-pulsating asymptotic giant branch phases), add even more uncertainties when interpreting the results \citep[see, e.g.,][]{2009ApJ...699..486C}. \subsubsection{Choice of the base} In this study we used the {\tt STARLIGHT} code described in \cite{2005MNRAS.358..363C} and \cite{2007MNRAS.381..263A} coupled with a version of the Bruzual \& Charlot\footnote{circa 2007; unpublished} SSP models based on the new MILES spectral library \citep{2006MNRAS.371..703S}. The selection of the SSP basis is important for any full-spectrum fitting algorithm and the interpretation of the results. To minimize the computing time one should select a few SSPs that are maximally independent and at the same time are capable of reproducing the variability of the full SSP set for a given metallicity. If a large basis is selected, many of its components will be close neighbors. This will lead to increased non-uniqueness of the solution and increase the time for the fitting algorithm to converge. On the other hand, if too small a basis is selected, it will not be able to capture the full variability of the SSP models, the fits may be poor, and the results will be unreliable. In our work we used the following approach to select the basis. For a given metallicity all SSPs were normalized to the flux in the 4600-4800\,\AA\ interval. Then the evolution of the flux in seven spectral windows in the range 3700-7000\,\AA\ was tracked as a function of the SSP age. The goal was to identify age intervals where the flux in \emph{all} seven spectral windows evolves linearly (or nearly so) with time. If such intervals exist, then the SSPs within them are not independent; all SSPs in a given interval can be closely reproduced as a linear combination of the two SSPs at the extremes.
By selecting the basis at the ages connecting the linear intervals we form a small independent set of basis vectors, which at the same time can reproduce the SSPs at all other ages. Following this approach we were able to select $N_{\ast}=16$ or 17 SSPs per metallicity that formed our fitting basis of 66 SSP models with ages between 1 Myr and 18 Gyr, and four metallicities $Z$=0.004, 0.008, 0.02 (the solar metallicity) and 0.05. \subsubsection{Voronoi binning} To increase the S/N in the outer parts of the galaxies the data cubes were spatially binned using adaptive Voronoi tessellations \citep{2003MNRAS.342..345C,2006MNRAS.368..497D}. The binning of the spaxels was determined from the S/N measured in the interval 4580-4640\,\AA, after discarding the spectra with S/N$<$1. The targeted S/N of the binned spaxels was S/N$\sim$20, with the exception of the host of \object{SN~2006te}, for which a lower S/N of 15 was used. To keep the spatial resolution reasonably small, an upper limit of the size of the bins was also imposed: 5 for \object{NGC976}, 17 for \object{NGC495}, and 12 for the remaining four. \begin{table*}[!th] \caption{Total galaxy SFR, SFR surface density $\Sigma_{\rm SFR}$ and gas extinction $A_{\rm V}$ at the SN location derived from our observations. } \label{t:sfr} \begin{tabular}{@{}llcccccc@{}} \hline \hline\noalign{\smallskip} SN & Host galaxy & \multicolumn{4}{c}{total SFR\,[M$_{\sun}$\,yr$^{-1}$] } & $\Sigma_{\rm SFR}$ at SN position & $A_{\rm V}$ at SN position \\ \cline{3-6}\noalign{\smallskip} & & H$\alpha$\tablefootmark{a} & \multicolumn{2}{c}{{\tt STARLIGHT}\tablefootmark{a}} & Neill et al.\tablefootmark{b} & [M$_{\sun}$\,yr$^{-1}$\,kpc$^{-2}$] & [mag] \\ \cline{4-5}\noalign{\smallskip} & & & $<$50 Myr & $<$0.5 Gyr & & & \\ \hline\noalign{\smallskip} 1999dq & NGC 0976 & 5.30 (0.06) & 6.8 &16.0 & 10.4 (2.5,36.6) & 8.9(1.2)$\times$10$^{-2}$ & 0.97 (0.18) \\ 2001fe & UGC 5129 & 0.76 (0.02) & 1.0 & 3.6 & 3.2 (0.6,10.4) & 1.5(0.4)$\times$10$^{-2}$ & 0.79 (0.35) \\ 2006te & CGCG 207-042 & 2.03 (0.05) & 1.6 & 5.2 & 2.9 (1.3,6.9) & 8.3(2.2)$\times$10$^{-3}$ & 0.74 (0.33) \\ 2007A & NGC 105 NED02 & 3.42 (0.04) & 5.3 & 9.0 & 11.6 (2.4,27.5) & 2.6(0.4)$\times$10$^{-2}$ & 0.52 (0.22) \\ 1997cw & NGC 105 NED02 & 3.42 (0.04) & 5.3 & 9.0 & 11.6 (2.4,27.5) & 1.6(0.3)$\times$10$^{-2}$ & 0.06 (0.22) \\ 2007R & UGC 4008 NED01& 5.33 (0.12) & 5.8 &24.1 & 3.6 (2.1,205.6)& 3.2(0.8)$\times$10$^{-2}$ & 1.17 (0.32) \\ \hline \end{tabular}\\ \tablefoottext{a}{this work;} \tablefoottext{b}{from \cite{2009ApJ...707.1449N}. The errors are asymmetric and the values in the parentheses are the $\mp1\sigma$ uncertainties.} \end{table*} \subsubsection{{\tt STARLIGHT} fits} \label{sec:starfits} The Voronoi-binned spectra along with the un-binned ones were fitted with the {\tt STARLIGHT} code allowing for \emph{all} SSPs to be reddened by the same amount of dust described by the \cite{ext_law} law. In our analysis, the spectra and the basis were normalized to the mean flux in the region 4580-4620\AA. Thus the population vector is the fractional contribution $x_j$ of the different SSP models at $\sim$4600\AA. In addition to the population vector the code also returns the fractional contributions $\mu_j$ of each SSP to the total stellar mass of the galaxy, which is the more relevant physical quantity. The code also returns the velocity shift and the Gaussian broadening that need to be applied to the model in order to fit the observed spectrum. 
The shifts provide the velocity maps of the stars, and the broadening is related to their velocity dispersion. From the population vectors we can compute the mass- and light-weighted mean age and metallicity following \cite{2005MNRAS.358..363C}: \begin{equation} \langle \log t_\ast\rangle_{\rm L/M}=\sum\limits_{j=1}^{N_{\ast}} w_j\,\log t_j \end{equation} \begin{equation} \langle Z_\ast\rangle_{\rm L/M}=\sum\limits_{j=1}^{N_{\ast}} w_j\,Z_j, \end{equation} where $t_j$ and $Z_j$ are the age and the metallicity of the $j$-th SSP model, and $w_j=x_j$ or $w_j=\mu_j$ for light- and mass-weighted quantities, respectively.
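These weighted means are a one-line computation once the population vector is known; in the sketch below the ages, metallicities, and population vectors are illustrative numbers.

\begin{verbatim}
import numpy as np

t  = np.array([1e8, 1e9, 1e10])      # SSP ages [yr]
Z  = np.array([0.008, 0.02, 0.05])   # SSP metallicities
x  = np.array([0.5, 0.3, 0.2])       # light fractions x_j (sum to 1)
mu = np.array([0.05, 0.25, 0.70])    # mass fractions mu_j (sum to 1)

def mean_log_age(w, t):              # <log t*>_{L/M} = sum_j w_j log t_j
    return np.sum(w * np.log10(t))

def mean_Z(w, Z):                    # <Z*>_{L/M} = sum_j w_j Z_j
    return np.sum(w * Z)

print(mean_log_age(x, t), mean_log_age(mu, t))  # light- vs mass-weighted
print(mean_Z(x, Z), mean_Z(mu, Z))
\end{verbatim}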
\subsubsection{Compressed population vectors} The simulations performed by \cite{2005MNRAS.358..363C} demonstrated that the individual components of the population vectors computed by {\tt STARLIGHT} are very uncertain. Instead of analyzing the individual components, \cite{2005MNRAS.358..363C} showed that a coarsely binned version of the population vectors provides a more robust description of the current stellar content of the galaxies. Thus, following \cite{2004MNRAS.355..273C} and \cite{2005MNRAS.358..363C}, we compressed the population vectors into three age bins: young (age $<$ 300 Myr), intermediate (300 Myr $<$ age $<$ 2.4 Gyr), and old (age $>$ 2.4 Gyr) stellar populations. \section{Results} \subsection{Ionized gas} Figures~\ref{f:bpt}-\ref{f:rad_met} show the main results obtained from the analysis of the emission line fluxes. The 2D maps of the galaxy properties that are discussed in this section are shown by galaxy in Figs~\ref{f:g:1999dq}-\ref{f:g:07A-07R}. \subsubsection{BPT diagnostic diagram} \cite{1981PASP...93....5B} introduced several diagnostic diagrams to segregate spectra of emission-line galaxies and AGNs according to their main excitation mechanism. These diagrams are based on easily measured optical emission line flux ratios. Figure\,\ref{f:bpt} shows the positions of the galaxies in our sample on the log([\ion{N}{ii}]\,$\lambda$6584/H$\alpha$) -- log([\ion{O}{iii}]\, $\lambda$5007/H$\beta$) diagnostic diagram. The filled circles, filled triangles, and crosses show the measurements at the position of the SN, the galaxy nucleus, and the total galaxy spectrum, respectively. The dotted and dashed lines show two widely used criteria to separate emission-line galaxies and AGNs introduced by \cite{2001ApJ...556..121K} and \cite{2003MNRAS.346.1055K}, respectively. From Fig.\,\ref{f:bpt} it is evident that the hosts of SNe 2001fe and 2007A/1997cw harbor AGNs and the host of \object{SN~1999dq} is on the border between composite galaxies and AGNs. However, the line ratios measured in the total spectra of these three galaxies still fall into the star-forming region of the BPT diagram, which suggests that the AGNs are not strong enough to significantly affect the total galaxy spectra. At high redshift good S/N, spatially resolved spectroscopy is difficult to obtain and weak AGNs may remain unrecognized in slit spectroscopy because typically the whole galaxy falls into the slit. The presence of AGNs, even though weak, may still bias the metallicity estimation from integrated galaxy spectra. The locations of the SNe fall into the (small) region of the BPT diagram with the highest density of SDSS galaxies. This is the region where the metal-rich galaxies are typically found \citep[see, e.g.,][]{2007MNRAS.375L..16C}. High metallicity in this region of the BPT diagram is also expected from the O3N2 method for metallicity estimation \citep{1979A&A....78..200A,2004MNRAS.348L..59P}. These are indications that for the emission line galaxies in our sample, the SNe likely exploded in metal-rich environments. \subsubsection{H$\alpha$ velocity field} \begin{figure}[!h] \includegraphics [width=8.8cm]{bpt2} \caption{BPT diagram \citep{1981PASP...93....5B}. The contours show the density of SDSS emission line galaxies. The dotted and the dashed lines of \cite{2001ApJ...556..121K} and \cite{2003MNRAS.346.1055K}, respectively, separate star-forming galaxies, AGNs, and composite galaxies. The filled circles, filled triangles, and crosses show the measurements at the position of the SN, the galaxy nucleus, and the total galaxy spectrum, respectively. } \label{f:bpt} \end{figure} \begin{figure*}[!t] \centering \includegraphics [width=18cm]{f02a} \caption{{\bf Upper row:} From left to right, the color SDSS image of \object{NGC~976}, the observed H$\alpha$ flux and velocity maps. {\bf Lower row:} The ionization parameter $\log(U)$, the visual extinction $A_{\rm V}$ estimated from the Balmer decrement and the metallicity map derived with the \cite{2004MNRAS.348L..59P} O3N2 method. In all maps presented in this paper, $\times$ marks the galaxy center and $+$ the SN position. The four contour levels overplotted on the extinction and metallicity maps are derived from the H$\alpha$ map. The four levels are 0.8, 0.6, 0.4 and 0.2 of the maximum H$\alpha$ flux. The $x,y$ coordinates are in arcsec with respect to the map centers. The orientation of the images is north -- up, east -- left.} \label{f:g:1999dq} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics [width=18cm]{f03a}\vspace{1cm} \\ \includegraphics [width=18cm]{f04a} \caption{Same as Fig.~\ref{f:g:1999dq} but for \object{UGC~5129} and \object{CGCG~207-042}.} \label{f:g:01fe-06te} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics [width=18cm]{f05a}\vspace{1cm} \\ \includegraphics [width=18cm]{f06a} \caption{Same as Fig.~\ref{f:g:1999dq} but for \object{NGC~105 NED02} and \object{UGC~4008 NED01}.} \label{f:g:07A-07R} \end{figure*} The gas velocity maps derived from the H$\alpha$ emission line show smooth gradients and no apparent irregularities. The analysis with the method of \cite{2006MNRAS.366..787K} shows that the velocity fields of all five galaxies are consistent with pure disk rotation without signs of significant disturbances. The $\sigma$-maps (not shown here) derived from the width of the H$\alpha$ emission line also show a simple structure with a single peak at the center. These results suggest that the galaxies in our sample are likely relaxed systems. \subsubsection{H$\alpha$ flux, extinction, and star formation rate} \label{sec:ha} On small scales the H$\alpha$ flux distribution follows the spiral arms visible in the broad-band images. Overall, the H$\alpha$ flux increases toward the centers of the galaxies. It can be seen that all six SNe in these galaxies are projected onto regions with strong H$\alpha$ emission with fluxes above the galaxy average. In the late-type spirals \object{NGC~976}, \object{UGC~5129}, and \object{NGC~105 NED02} there is a clear H$\alpha$ flux deficit in the bulge. Interestingly, these are also the galaxies that have AGNs. However, these two phenomena are probably unrelated.
\cite{2009A&A...501..207J} studied the radial distribution of H$\alpha$ emission in a large sample of spiral galaxies and found that the late-type spirals (Sc+) show a H$\alpha$ flux deficit in the bulge regardless of the presence of bars. This effect is much less pronounced in the Sa-type spirals or even absent in their barred counterparts, which tend to have a high concentration of the H$\alpha$ emission toward the center. This is clearly the case for \object{UGC~4008 NED01}, which is the only Sa emission line galaxy in our sample. According to \cite{2009A&A...501..207J}, there is a significant difference in the H$\alpha$ radial profile of barred and unbarred Sb galaxies. The unbarred Sb galaxies show a smooth profile similar to Sa galaxies. The barred counterparts have a strong peak of H$\alpha$ emission at the centers, followed by a decrease of the H$\alpha$ flux, before it increases again because of the H$\alpha$ emission ring at the outer radius of the bar. The barred Sb galaxy \object{CGCG~207-042} shows exactly the same characteristics with a clear H$\alpha$ emission ring at the outer radius of the bar. Thus, the radial H$\alpha$ emission profiles in our galaxies are consistent with the findings of \cite{2009A&A...501..207J}. The gas extinction maps presented in Figs~\ref{f:g:1999dq}-\ref{f:g:07A-07R} also show an increase of the extinction toward the galaxy center. Although the extinction maps do not show small-scale structures as clearly as the H$\alpha$ flux maps, there is a general trend that the extinction increases with the H$\alpha$ flux. This is expected because an increased amount of dust is typically observed in the regions of active star formation. The total extinction along the lines of sight to the SN positions is low, except for SN 1997cw (marked with the leftmost of the three signs). The extinction along the SN lines-of-sight is given in Table\,\ref{t:sfr}. Table\,\ref{t:sfr} lists the total on-going SFR and the SFR surface density, $\Sigma_{\rm SFR}$, at the SN positions derived from the extinction-corrected H$\alpha$ flux map. The $\Sigma_{\rm SFR}$ values at the SN position are consistent with the disk-averaged values for normal spiral galaxies \cite[see Fig.~5 in][]{1998ARA&A..36..189K}. Our values fall in the upper half of the \cite{1998ARA&A..36..189K} distribution, which can be attributed to the SNe being projected on regions with higher-than-average H$\alpha$ flux. For comparison the total SFR and its confidence intervals derived by \cite{2009ApJ...707.1449N} are also given in Table\,\ref{t:sfr}. In general, our values are consistent with \cite{2009ApJ...707.1449N}, although in all cases but one we derive lower values. However, it should be noted that the values of \cite{2009ApJ...707.1449N} were derived with a completely different technique -- fitting model galaxy SEDs to broad-band photometry -- and represent the average SFR during the last 0.5 Gyr, while our estimates from the H$\alpha$ flux represent the very recent, $<20$ Myr, SFR. \subsubsection{Ionization parameter and electron density} The ionization parameter $\log(U)$ -- the ratio of the ionizing photon density to the gas density -- is a measure of the degree of ionization of the nebula and can be determined from the ratio of two lines of the same element corresponding to two different ionization states. The ionization maps of the galaxies were computed from the ratio of the [\ion{O}{ii}]\,$\lambda$3727 and [\ion{O}{iii}]\,$\lambda$5007 lines using the relation of \cite{2000MNRAS.318..462D}.
The three AGN galaxies clearly show an increased degree of ionization toward the center, while the remaining two galaxies do not. In three of the galaxies, \object{NGC~976}, \object{CGCG 207-042}, and \object{NGC~105 NED02}, there is also a hint of an increase of the ionization parameter toward the outer spiral arms. The mean ionization parameters for all five galaxies fall into a rather narrow interval of $\log(U)= -3.6$ to $-3.4$. The [\ion{O}{ii}]\,$\lambda$3727/ [\ion{O}{iii}]\,$\lambda$5007 line ratio is known to provide lower values for the ionization parameter compared to other available methods \cite[e.g.,][]{2000MNRAS.318..462D}. In comparison with the ionization parameter maps that were computed as part of the \cite{2004ApJ...617..240K} method for oxygen abundance determinations, the [\ion{O}{ii}]\,$\lambda$3727/ [\ion{O}{iii}]\,$\lambda$5007-based maps show very similar features, but are shifted toward lower values by $\sim0.3-0.4$~dex. Even taking this offset into account, the average values for our galaxies fall into the lower end of the distribution of the \ion{H}{ii} galaxies studied by \cite{1998Ap&SS.263..143D}. This part of the distribution is mostly populated with \ion{H}{ii} galaxies without a measurable [\ion{O}{iii}]\,$\lambda$4363 line, which tend to be metal-rich. Given the spectral resolution and wavelength range of our spectroscopy, the electron density, $n_\mathrm{e}$, can be estimated only from the flux ratio of the [\ion{S}{ii}]\,$\lambda\lambda$6716/6731 lines \citep{2006agna.book.....O}. The ratio of these two lines is sensitive to $n_\mathrm{e}$ in the range $\sim10^2-10^4$~cm$^{-3}$. Unfortunately, for the two highest redshift galaxies in our sample, \object{CGCG 207-042} and \object{UGC 4008 NED01}, the [\ion{S}{ii}] lines are outside the covered wavelength range. For the remaining three galaxies the ratio is constant, $\sim$1.4, across the galaxy and no apparent structure is visible. Ratios of $\sim$1.4 indicate low electron density $n_\mathrm{e}\leq10^2$~cm$^{-3}$ and are similar to the measurements of $n_\mathrm{e}$ in other galaxies, e.g. the sample of SDSS galaxies studied by \cite{2004ApJS..153..429K}. The low electron density suggests that there are no shocks in these galaxies. \subsubsection{ISM oxygen abundance} Figures~\ref{f:g:1999dq}-\ref{f:g:07A-07R} also show the distributions of the metallicity (indicated as 12+log(O/H)$_{\rm PP04}$) estimated with the O3N2 method of \cite{2004MNRAS.348L..59P}. For the three galaxies that likely harbor AGNs, namely, \object{NGC~976}, \object{UGC~5129}, and \object{NGC~105~NED02}, the central regions affected by the AGN are masked. The decrease of the S/N in the outer parts of the galaxies affects the measurements of the line ratios and some spaxels are also marked as affected by AGN according to the \cite{2001ApJ...556..121K} criterion. Because no AGN activity is expected in the outer parts of the galaxies, the line ratios measured in those spaxels are likely dominated by the noise and have also been masked in the plots. In these three galaxies there are indications of ring-like structures with enhanced metallicity and H$\alpha$ flux. However, it is difficult to assess whether these are real structures or artifacts caused by the central AGN altering the line ratios. Thus, these ring-like structures should be regarded with caution. \begin{figure}[!ht] \includegraphics [width=8cm]{rad_met1} \caption{Radial dependence of 12+log(O/H)$_{\rm PP04}$. The red dots show the measurement at the SN position.
The horizontal dashed lines indicate the metallicity measured from the total galaxy spectra (Table~\ref{t:oh}). As a guide to the eye, we plot metallicity gradients of $-0.022$, $-0.035$, $-0.030$, $-0.058$ and $+0.007$ dex\,kpc$^{-1}$ (from top to bottom) with the solid lines. We estimate that the accuracy of these gradients is $\sim0.005$. The blue squares show the metallicities measured from the azimuthally averaged spectra. The points shown with the cross and plus signs are the metallicities of the \ion{H}{ii} regions discussed in Sect.~\ref{notes:gas}.} \label{f:rad_met} \end{figure} Figure~\ref{f:rad_met} shows the dependence of the metallicity on the de-projected galactocentric distance, which was computed from the position angle and inclination derived from the analysis of the H$\alpha$ velocity fields. Excluding the host of \object{SN~2007R}, which has a quite uniform metallicity distribution, the other four galaxies show a decrease of the metallicity with radius. The plots also suggest that the high metallicities measured in some of the outermost spaxels in \object{UGC~5129}, \object{CGCG 207-042}, and \object{NGC~105~NED02} are most likely due to noise in the line flux measurements. The solid lines are guides to the eye, showing metallicity gradients of $-0.022$, $-0.035$, $-0.030$ and $-0.058$ dex\,kpc$^{-1}$ for \object{NGC~976}, \object{CGCG 207-042}, \object{UGC~5129}, and \object{NGC~105~NED02}, respectively. These metallicity gradients fall well into the range observed in other nearby galaxies, e.g. \object{M51} \citep{2004ApJ...615..228B}, \object{NGC~300} \citep{2009ApJ...700..309B}, and \object{NGC~628} \citep{2011MNRAS.415.2439R}. In contrast, in \object{UGC 4008 NED01} the metallicity is nearly uniformly distributed with a hint of a very small positive gradient of $+0.007$ dex\,kpc$^{-1}$. The blue squares in Fig.~\ref{f:rad_met} show the metallicity estimates from the azimuthally averaged spectra described in Sect.~\ref{sec:analysis}. These estimates trace the measurements on the individual spaxel spectra very well. Figure~\ref{f:rad_met} and the inspection of the 2D maps reveal that the SN explosion sites are projected onto regions that have the highest, or close to the highest, metallicity within the corresponding galaxy. Table\,\ref{t:oh} shows the metallicity measurements in the total galaxy spectra, at the nucleus, and at the SN position. The metallicities at the SN positions in all five galaxies are very similar to each other, 12+log(O/H)$_{\rm PP04}\sim8.8$ and 12+log(O/H)$_{\rm KK04}\sim9.1$, and are on average 0.1 dex higher than the metallicities measured from the total galaxy spectra. For the three galaxies that host AGNs we also computed the metallicity in the total spectra, excluding the central spaxels, which are affected by the AGNs. The metallicity was in all cases identical to the one measured from the total spectra that included the central spaxels (the latter are given in Table\,\ref{t:oh}). This result shows that in these cases the AGNs are too weak to significantly affect the total galaxy spectrum and the metallicity estimation.
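The radial gradients quoted above amount to a straight-line fit of 12+log(O/H) against the de-projected radius. The following sketch demonstrates the procedure on synthetic data mimicking a $-0.03$ dex\,kpc$^{-1}$ disk (the data points are not measurements from this paper):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
radius = rng.uniform(0.0, 12.0, 200)                    # kpc
oh = 8.85 - 0.03 * radius + rng.normal(0.0, 0.04, 200)  # 12+log(O/H)

slope, intercept = np.polyfit(radius, oh, 1)
print(f"gradient = {slope:+.3f} dex/kpc, "
      f"central abundance = {intercept:.2f}")
\end{verbatim}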
\begin{table*} \caption{ISM metallicity estimates from the total galaxy spectra, at the nucleus, and at the position of the SN using the PP04 and KK04 methods.} \label{t:oh} \begin{tabular}{@{}llcccccccc@{}} \hline \hline\noalign{\smallskip} SN & Host galaxy& \multicolumn{8}{c}{12+log(O/H)} \\ \cline{3-10} & & \multicolumn{2}{c}{total galaxy spectrum} & & \multicolumn{2}{c}{nucleus} & & \multicolumn{2}{c}{at the SN position} \\ \cline{3-4}\cline{6-7}\cline{9-10}\noalign{\smallskip} & & PP04 & KK04 & & PP04 & KK04 & & PP04 & KK04 \\ \hline\noalign{\smallskip} 1999dq & NGC 976 & 8.75 (0.17) & 9.08 (0.05) & & 8.69\tablefootmark{a} (0.08) & 8.84\tablefootmark{a} (0.23) & & 8.81 (0.05) & 9.15 (0.03) \\ 2001fe\tablefootmark{b} & UGC 5129 & 8.72 (0.16) & 8.98 (0.11) & & $\dots$ & $\dots$ & & 8.82 (0.10) & 9.10 (0.09) \\ 2006te & CGCG 207-042 & 8.57 (0.14) & 8.96 (0.10) & & 8.72 (0.06) & 9.10 (0.05) & & 8.75 (0.09) & 9.03 (0.13) \\ 2007A\tablefootmark{b} & NGC 105 NED02 & 8.65 (0.14) & 9.02 (0.08) & & $\dots$ & $\dots$ & & 8.83 (0.06) & 9.09 (0.07) \\ 1997cw\tablefootmark{b} & NGC 105 NED02 & 8.65 (0.14) & 9.02 (0.08) & & $\dots$ & $\dots$ & & 8.76 (0.06) & 9.10 (0.04) \\ 2007R & UGC 4008 NED01& 8.75 (0.16) & 9.08 (0.06) & & 8.73 (0.12) & 8.66 (0.55) & & 8.73 (0.09) & 9.12 (0.06) \\ \hline \end{tabular}\\ \tablefoottext{a}{May have AGN contamination.} \tablefoottext{b}{The metallicity was not measured at the center because these galaxies harbor AGNs.} \end{table*} \subsubsection{Notes on the individual galaxies} \label{notes:gas} \noindent \object{{\bf NGC 976}}: The metallicity distribution is nearly symmetric around the galaxy center except for a somewhat extended region located at coordinates (+8,+22). The metallicity of this region is higher compared to the other parts of the galaxy at the same radial distance. Examining the $[\ion{O}{iii}]5007/{\rm H}\beta$ and $[\ion{N}{ii}]6584/{\rm H}\alpha$ maps (not shown in the paper), we noted that this is caused by an asymmetry in $[\ion{O}{iii}]5007/{\rm H}\beta$. Both ratios are nearly symmetrically distributed around the galaxy center except for the region at (+8,+22), which has a lower $[\ion{O}{iii}]5007/{\rm H}\beta$ ratio resulting in a higher metallicity estimate. There is also a slight decrease of the degree of ionization at the same location. The galaxy was included by \cite{1997ApJ...485..552M} in the control sample for their study of the cause of the elevated star formation in Seyfert 2 compared with Seyfert 1 galaxies. The authors found no obvious trigger of star formation in \object{NGC 976}. \noindent \object{{\bf NGC 495}}: This red barred Sa galaxy shows no emission lines. \cite{2002AJ....123.3018M} found it to be a member of a poor galaxy cluster, albeit the richest among those studied in their work. It is therefore possible that the gas component of \object{NGC 495} was stripped from the galaxy by tidal interactions with the other cluster members. \noindent \object{{\bf UGC 5129}}: This galaxy was included in the study of isolated disk galaxies by \cite{2004A&A...420..873V}. It was included in the final list of 203 galaxies (out of an initial 1706) that were likely not affected by other galaxies during the last few Gyr of their evolution. \noindent \object{{\bf CGCG~207-042}}: The spiral arms of the galaxy are barely visible in the SDSS image. However, there are three \ion{H}{ii} regions along one of them that are clearly visible in the H$\alpha$ map.
They are roughly located at ($x,y$) coordinates (+8,--27), (+27,--10) and (+20,+27). The metallicity decreases considerably along the spiral arm, which is also accompanied by a strong increase of the ionization parameter. The two outermost \ion{H}{ii} regions are shown with blue and magenta points in Fig.~\ref{f:rad_met}. \noindent \object{{\bf NGC~105~NED02}}: In the H$\alpha$ velocity map there is a spot located at (+28,+4) that clearly does not follow the velocity of the underlying part of the galaxy but recedes $\sim120$ km\,s$^{-1}$ faster. At the same position there is a very faint spot in the broad-band images. This spot also clearly shows increased H$\alpha$ emission and a marginal increase of the ionization parameter. The metallicity of the spot is lower than that of the rest of the galaxy by at least 0.2 dex. The points corresponding to this spot are shown with blue points in Fig.~\ref{f:rad_met}. Given the properties of this feature, it is possible that this is a dwarf satellite galaxy of \object{NGC~105~NED02}. Another interesting feature is that the central ring-like pattern of increased metallicity is interrupted by a region of slightly lower metallicity located at coordinates (--3,--2). Examining the $[\ion{O}{iii}]5007/{\rm H}\beta$ and $[\ion{N}{ii}]6584/{\rm H}\alpha$ maps, we again noted that this is caused by an asymmetry in $[\ion{O}{iii}]5007/{\rm H}\beta$. The $[\ion{O}{iii}]5007/{\rm H}\beta$ ratio at (--3,--2) is slightly higher and causes the lower metallicity estimate. \noindent \object{{\bf UGC~4008~NED01}}: This is the only galaxy in our sample that shows a positive metallicity gradient. \subsection{Stellar populations} \subsubsection{Stellar \emph{vs.} gas dust extinction} {\tt STARLIGHT} fits provide an estimate of the extinction by dust suffered by the stellar light. The assumption that the stellar populations of different ages are subject to the same extinction is probably not entirely correct. It is reasonable to assume that the young populations can still be embedded in the dusty nebulae where the stars formed and can be subject to higher extinction. {\tt STARLIGHT} has the capability to take this into account and can determine different extinctions for the different SSP models. However, this approach adds additional uncertainty to the already complex problem of recovering the properties of the stellar populations. We have therefore chosen to assume a single extinction for all SSPs. The extinction maps of the stellar light are shown in Fig.~\ref{f:popaveL}; they contain no easily identifiable features. In comparison with the extinction derived from the emission lines (Figs~\ref{f:g:1999dq}-\ref{f:g:07A-07R}), the extinction derived from the {\tt STARLIGHT} fits is lower. The relation between the stellar and gas extinction shows considerable scatter and the two quantities appear to be uncorrelated, except for \object{NGC~105~NED02}. In this galaxy there is a clear linear relation between the stellar and gas extinction, with the gas extinction being about twice the stellar one. A similar relation was also derived by \cite{2005MNRAS.358..363C} in their analysis of a sample of SDSS galaxies.
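Gas-phase extinctions of the kind compared here are commonly derived from the Balmer decrement. A minimal sketch of that standard conversion (in Python, with an invented example value), assuming case-B recombination (intrinsic H$\alpha$/H$\beta=2.86$), $R_V=3.1$, and Cardelli-like coefficients $k({\rm H}\beta)\simeq3.61$ and $k({\rm H}\alpha)\simeq2.53$; the exact coefficients depend on the adopted reddening law: \begin{verbatim}
import numpy as np

def av_gas_from_balmer(f_ha, f_hb, k_ha=2.53, k_hb=3.61, r_v=3.1):
    # color excess from the observed Balmer decrement relative to
    # the case-B intrinsic ratio of 2.86, then A_V = R_V * E(B-V)
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha / f_hb) / 2.86)
    return r_v * ebv

# e.g. an observed decrement of 4.0 corresponds to A_V ~ 1.0 mag
print(av_gas_from_balmer(4.0, 1.0))
\end{verbatim}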
\onlfig{7}{ \begin{figure*}[!ht] \centering \includegraphics [width=17cm]{f07mska} \caption{From left to right: SDSS color images of the galaxies, the light-weighted average stellar population age $\langle \log t_\ast\rangle_{\rm L}$ and metallicity $\langle Z_\ast\rangle_{\rm L}$, and the visual extinction $A_{\rm V}$ determined by {\tt STARLIGHT} fits to the stellar spectra.} \label{f:popaveL} \end{figure*} } \subsubsection{Mean stellar age and metallicity} \begin{table*} \caption{Stellar population metallicity and age estimates from the {\tt STARLIGHT} fits. } \label{t:stm} \begin{tabular}{@{}llccccccccc@{}} \hline \hline\noalign{\smallskip} SN & Host galaxy & \multicolumn{4}{c}{total galaxy/minus AGN$^a$} & & \multicolumn{4}{c}{at the SN position$^b$} \\ \cline{3-6}\cline{8-11}\noalign{\smallskip} & & $\left\langle\log(t_\ast)\right\rangle_M$ & $\left\langle Z_\ast\right\rangle_M$ & $\left\langle\log(t_\ast)\right\rangle_L$ & $\left\langle Z_\ast\right\rangle_L$ & & $\left\langle\log(t_\ast)\right\rangle_M$ & $\left\langle Z_\ast\right\rangle_M$ & $\left\langle\log(t_\ast)\right\rangle_L$ & $\left\langle Z_\ast\right\rangle_L$ \\ \hline\noalign{\smallskip} 1999dq & NGC 976 & 9.89/9.96 & 0.039/0.036 & 8.82/8.80 & 0.029/0.030 & & 9.75 & 0.042 & 8.82 & 0.030 \\ 1999ej & NGC 495 & 10.17 & 0.045 & 9.96 & 0.038 & & 10.14 & 0.042 & 10.03 & 0.037 \\ 2001fe & UGC 5129 & 9.66/9.96 & 0.038/0.023 & 8.95/8.92 & 0.029/0.026 & & 9.45 & 0.037 & 8.71 & 0.031 \\ 2006te & CGCG 207-042 & 9.46 & 0.029 & 8.56 & 0.022 & & 9.55 & 0.027 & 8.83 & 0.022 \\ 2007A & NGC 105 NED02 & 9.86/9.80 & 0.030/0.033 & 8.53/8.51 & 0.026/0.031 & & 9.78 & 0.040 & 8.48 & 0.029 \\ 1997cw & NGC 105 NED02 & 9.86/9.80 & 0.030/0.033 & 8.53/8.51 & 0.026/0.031 & & 9.77 & 0.039 & 8.43 & 0.028 \\ 2007R & UGC 4008 NED01& 9.68 & 0.042 & 9.01 & 0.031 & & 9.96 & 0.044 & 9.11 & 0.030 \\ \hline \end{tabular}\\ \tablefoottext{a}{'total galaxy' -- values derived by fitting the spectra formed by summing (un-weighted) \emph{all} spectra in the data cubes; 'minus AGN' -- with the AGN-affected spaxels excluded.} \tablefoottext{b}{interpolated from the maps in Fig.\,\ref{f:popaveL}.} \end{table*} \onlfig{8}{ \begin{figure*}[t] \centering \includegraphics [width=18cm]{f08a} \caption{{\tt STARLIGHT} fits to the total galaxy spectra. For each galaxy we also show the ratio between the observed and the best-fit spectrum. The vertical brown lines in the spectrum panels show the location of the strongest night-sky lines, which could not be cleanly subtracted and the corresponding wavelength regions were excluded from the fits. To the right of the plots are shown the population vectors and the mass-fraction vectors along with the extinction, the mass- and light-averaged age and metallicity, and the contribution of young (age $<$ 300 Myr), intermediate (300 Myr $<$ age $<$ 2.4 Gyr) and old (age $>$ 2.4 Gyr) stellar populations. The short brown bars show the ages of the SSPs used in the fits and the two vertical dotted lines separate the young, intermediate and old populations. } \label{f:totfit} \end{figure*} } Table\,\ref{t:stm} lists the mass- and light-weighted mean stellar population age and metallicity determined by fitting the total galaxy spectra formed as the sum of \emph{all} spaxels, both with (these fits are shown in Fig.~\ref{f:totfit}) and without the AGN-affected central spaxels. The results show that all galaxies in our sample have a mean stellar metallicity higher than solar. This is in accord with the findings from the emission line analysis.
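For clarity, the mass- and light-weighted means quoted here and in Table\,\ref{t:stm} follow the conventions that are standard in the {\tt STARLIGHT} literature \citep[e.g.,][]{2005MNRAS.358..363C}: if $x_j$ and $\mu_j$ denote the fractional contributions of the $j$-th SSP, of age $t_j$ and metallicity $Z_j$, to the observed light at the normalization wavelength and to the stellar mass, respectively, then $$\left\langle\log(t_\ast)\right\rangle_{L}=\sum_{j} x_j\,\log t_j, \qquad \left\langle\log(t_\ast)\right\rangle_{M}=\sum_{j} \mu_j\,\log t_j,$$ and analogously for $\left\langle Z_\ast\right\rangle_{L}$ and $\left\langle Z_\ast\right\rangle_{M}$ with $\log t_j$ replaced by $Z_j$.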
The mean mass-weighted stellar age of the five emission line galaxies is $\sim5$ Gyr. \object{NGC~495}, which shows no emission lines, has an older stellar population of about 12 Gyr. These values can be compared with studies based on total galaxy spectra, e.g., spectra of local galaxies obtained by drift-scanning with a long slit \citep[e.g.,][]{2005ApJ...634..210G}, or spectroscopy of high-redshift galaxies, where practically the whole galaxy light falls into the slit. Note that the residuals show a large-scale pattern with a full amplitude of up to $\sim$4\% (Fig.~\ref{f:totfit}). This signals either a problem with the relative flux calibration of the observed spectra or a problem in the SSP models. At present it is difficult to quantify what effect this would have on the results that are based on the spectral fitting. \onlfig{9}{ \begin{figure*}[t] \centering \includegraphics [width=16cm]{st_rad_plot_ml} \caption{Mass- and light-weighted average stellar population metallicity (left panel) and age (right panel) from the {\tt STARLIGHT} fits as a function of the de-projected galactocentric distance. The small black dots are the measurements on the individual spaxels and the large blue dots are from the azimuthally averaged spectra.} \label{f:popaverad} \end{figure*} } Figure~\ref{f:popaveL} shows the mass- and light-weighted mean stellar population age and metallicity maps of the six galaxies. In Table\,\ref{t:stm} are given the measurements for the total galaxy and at the SN position. The maps show considerable scatter and it is difficult to identify clear structures in them. Many spaxels that indicate high metallicity appear in the outer parts. This is most likely not real but rather a result of the insufficient S/N of the spectra even after applying the Voronoi binning. Nevertheless, there may be a slight increase of the metallicity toward the center, especially in \object{NGC~495} and \object{UGC~4008~NED01}. The same is also true for the stellar age maps, and again there is an indication of an older stellar population toward the nucleus, which can be expected. To investigate the matter in more detail, we plot in Fig.~\ref{f:popaverad} the mass- and light-weighted mean stellar population age and metallicity measurements as a function of the de-projected galactocentric distance. Unfortunately, the plot confirms that the measurements from the individual spaxel spectra show too large a scatter. Unlike the ionized gas metallicity measurements, which show a small scatter of $\leq$0.05 dex at a given radius (Fig.~\ref{f:rad_met}), the stellar metallicities estimated from the {\tt STARLIGHT} fits show scatter as large as 0.3 dex, for example at a radial distance of 6 kpc in \object{NGC~976}, \object{UGC~5129}, and \object{NGC~105 NED02}. The age estimates also show considerable scatter. \begin{table*}[!ht] \caption{Compressed population vectors showing the contribution of the young (age $<$ 300 Myr), intermediate (300 Myr $<$ age $<$ 2.4 Gyr), and old (age $>$ 2.4 Gyr) stellar populations to the formation of the observed total galaxy spectrum. The total galaxy stellar masses derived from our {\tt STARLIGHT} fits and by \cite{2009ApJ...707.1449N} are also given.} \label{t:popfrac} \begin{tabular}{@{}llcccrccccrccc@{}} \hline \hline\noalign{\smallskip} SN & Host galaxy & \multicolumn{4}{c}{total spectrum} & & \multicolumn{4}{c}{at the SN position$^b$} & & \multicolumn{2}{c}{log($M_{\star}$ [$M_{\sun}$])} \\ \cline{3-6}\cline{8-11}\cline{13-14}\noalign{\smallskip} & & Young & Inter.
& Old & S/N$^a$ & & Young & Inter. & Old & S/N$^c$ & & this work & \cite{2009ApJ...707.1449N} \\ \hline\noalign{\smallskip} 1999dq & NGC 976 & 0.19 & 0.49 & 0.32 & 81 & & 0.19 & 0.41 & 0.40 & 84 & & 10.98 & 10.78 \\ 1999ej & NGC 495 & 0.00 & 0.11 & 0.89 & 54 & & 0.00 & 0.00 & 1.00 & 33 & & 10.85 & $\dots$ \\ 2001fe & UGC 5129 & 0.16 & 0.40 & 0.44 & 97 & & 0.29 & 0.33 & 0.38 & 81 & & 10.22 & 10.22 \\ 2006te & CGCG 207-042 & 0.25 & 0.35 & 0.40 & 35 & & 0.21 & 0.18 & 0.62 & 42 & & 10.25 & 10.31 \\ 2007A & NGC 105 NED02 & 0.31 & 0.35 & 0.34 & 78 & & 0.36 & 0.38 & 0.26 & 82 & & 10.61 & 10.87 \\ 1997cw & NGC 105 NED02 & 0.31 & 0.35 & 0.34 & 78 & & 0.38 & 0.38 & 0.24 & 82 & & 10.61 & 10.87 \\ 2007R & UGC 4008 NED01& 0.12 & 0.36 & 0.53 & 54 & & 0.14 & 0.29 & 0.57 & 85 & & 11.10 & 10.98 \\ \hline \end{tabular}\\ \tablefoottext{a}{S/N of the total galaxy spectra.} \tablefoottext{b}{Values at the SN radial distance interpolated from the radial dependencies derived from the azimuthally averaged spectra (Fig.~\ref{f:poprad}).} \tablefoottext{c}{S/N of the azimuthally averaged spectrum that is closest to the radial distance of the SN.} \end{table*} The analysis of the emission lines shows that most of the ISM properties have a well-defined axial symmetry. One can expect this to be also the case for the stellar populations and hence asymmetries are unlikely to be responsible for the observed scatter in the outer parts of the galaxies. The scatter clearly increases with the radial distance (Fig.~\ref{f:popaverad}), suggesting that the lower S/N of the spectra in the outer parts of the galaxies is causing it. After the Voronoi binning, the analyzed spectra have a minimum S/N$\sim$15-20 at 4600\AA. The large scatter that we observe in the derived quantities demonstrates the limitations of the full-spectrum fitting technique in the low-S/N regime and suggests that an S/N significantly higher than 20 is needed to achieve reliable results. \onlfig{10}{ \begin{figure*}[t] \centering \includegraphics [width=15.2cm]{f09a} \caption{Same as in Fig.~\ref{f:totfit}, but for the azimuthally averaged spectra.} \label{f:rfits1} \end{figure*} } \onlfig{11}{ \begin{figure*}[t] \centering \includegraphics [width=17cm]{f10a} \caption{Same as in Fig.~\ref{f:totfit}, but for the azimuthally averaged spectra.} \label{f:rfits2} \end{figure*} } \onlfig{12}{ \begin{figure*}[t] \includegraphics [width=17cm]{f11a} \caption{Same as in Fig.~\ref{f:totfit}, but for the azimuthally averaged spectra.} \label{f:rfits3} \end{figure*} } Given the large scatter of the measurements from the individual spaxel spectra, interpolating at the locations of the SNe from the 2D maps is not recommended. An alternative approach is to use the measurements obtained from the azimuthally averaged spectra and interpolate them at the radial location of the SNe. This approach is better when there is evidence that the galaxy properties are symmetric around the nucleus. In Fig.~\ref{f:popaverad} the blue symbols show the values estimated from the fits of the azimuthally averaged spectra and the vertical dashed lines show the radial distance of the SNe. The corresponding fits are shown in Figs.~\ref{f:rfits1}-\ref{f:rfits3}. The mean age and metallicity show a smooth radial dependence. In some cases the metallicity derived from the azimuthally averaged spectra suggests negative gradients of up to $-0.03$ dex\,kpc$^{-1}$.
However, given the large uncertainty with which the stellar metallicity is estimated ($\geq0.2$ dex), the significance of these gradients is difficult to assess. The mean ages qualitatively show the same behavior with decreasing age outward. We note that the different types of weighting, mass or light, lead to different radial dependencies, with the light-weighted quantities showing stronger variation. The metallicity and the age at the locations of the SNe linearly interpolated from these radial dependencies are given in Table\,\ref{t:stm}. It is also worth mentioning that the light-weighted quantities appear to have a slightly lower scatter, most pronounced in the inner regions where the spectra have a higher S/N. In most cases the light-weighted metallicities are lower than the mass-weighted ones. The light-weighting gives much more weight to the younger stellar populations and this result may imply that the younger populations have lower metallicity. \onlfig{13}{ \begin{figure*}[!ht] \centering \includegraphics [width=17cm]{f12a} \caption{From left to right: SDSS color images of the galaxies and the maps of the fraction of young (age $<$ 300 Myr), intermediate (300 Myr $<$ age $<$ 2.4 Gyr) and old (age $>$ 2.4 Gyr) stellar populations.} \label{f:popfrac} \end{figure*} } \subsubsection{Binned population vectors} The 2D maps representing the fractional contribution of the young (age $<$ 300 Myr), intermediate (300 Myr $<$ age $<$ 2.4 Gyr), and old (age $>$ 2.4 Gyr) stellar populations are shown in Fig.\,\ref{f:popfrac}. In Fig.~\ref{f:poprad} the measurements from the individual spaxel spectra and the azimuthally averaged spectra are plotted {\it vs.} the de-projected galactocentric distance. The measurements from the individual spectra again show considerable scatter in the outer parts of the galaxies. For this reason we again estimated the values at the SN radial distance from the azimuthally averaged spectra and not from interpolation of the 2D maps. The estimated stellar population fractions at the SN radial distances are given in Table\,\ref{t:popfrac} along with the values derived from the total galaxy spectra. The S/N of the spectra used to derive these values are also shown. The analysis shows that the five emission line galaxies contain stellar populations of different ages, including a considerable fraction of young stars. In general, there is a clear trend of an increasing fraction of young stars with radial distance. Depending on the galaxy, at a distance of 4-8 kpc the trend is reversed and the fraction of young stars starts to decrease. In four of the galaxies the fraction of old stellar populations monotonically increases toward the galaxy nucleus, which is expected for most star-forming spiral galaxies. The exception is \object{CGCG~207-042}, the host of \object{SN~2006te}, which shows a decrease of the fraction of old stellar populations toward the center. \object{NGC~495} is dominated by old stellar populations with possibly a small fraction of younger stars in the central few kpc. The contribution of the younger population is small, however, and its presence cannot be confidently confirmed. The behavior of the intermediate age stellar populations is opposite to that of the old ones. \cite{2005MNRAS.358..363C} showed that the compressed population vectors can be recovered with an accuracy better than $\sim$10\% for S/N$>$10.
However, considering the uncertainties involved in the computation of the SSP models as well as other uncertainties such as the correlations between the fitted parameters, the relative flux calibration and the dust extinction laws in the galaxies, the accuracy is probably no better than $\sim$10\%. This is also supported by the level of the scatter in Fig.~\ref{f:poprad}. In this context, the population vectors at the locations of SNe \object{1999dq}, \object{2007A}, \object{1997cw}, and \object{2007R} are indistinguishable from those of the whole galaxies (Table\,\ref{t:popfrac}). At the location of \object{SN~2006te} there is a larger contribution from old populations at the expense of the intermediate-age ones, while the fraction of young stars is the same as for the whole galaxy. For \object{SN~2001fe} there is marginal evidence for an increased contribution of a young population at the position of the SN. The host of \object{SN~1999ej} formed the bulk of its stars about 13 Gyr ago followed by a less intense star-forming period about 2 Gyr ago. At the distance of \object{SN~1999ej} we only find evidence for the older population. \onlfig{14}{ \begin{figure*}[t] \sidecaption \includegraphics [width=12cm]{st_rad_plot_yio} \caption{Compressed population vectors corresponding to the contribution of young (age $<$ 300 Myr), intermediate (300 Myr $<$ age $<$ 2.4 Gyr), and old (age $>$ 2.4 Gyr) stellar populations to the formation of the observed spectra as a function of the de-projected galactocentric distance. The small black dots are the measurements obtained from the individual spaxels and the large blue dots are from the azimuthally averaged spectra.} \label{f:poprad} \end{figure*} } \subsubsection{Stellar kinematics} The velocity dispersion maps derived from the {\tt STARLIGHT} fits show a simple morphology with a single peak centered at the galaxy nucleus. In Fig.~\ref{f:velmaps} the velocity fields of the stars in the five emission line galaxies are compared to the velocity fields derived from the H$\alpha$ emission line. The two maps are very similar and small systematic differences are only revealed after subtracting the two maps (the last column in Fig.~\ref{f:velmaps}). Evidently, the gas rotates faster in the central regions than the stars, with the difference being largest in \object{UGC~4008~NED01}. These differences between the rotation of stars and ionized gas in the central regions of galaxies are well-known and have been extensively studied \citep[see, e.g., ][and references therein]{2004A&A...424..447P}. We note that none of the galaxies shows a sign of a counter-rotating gaseous disk \citep[e.g.,][]{1996ApJ...458L..67B,1992ApJ...394L...9R}. The stellar velocity fields were also analyzed with the methods of \cite{2006MNRAS.366..787K}. As with the H$\alpha$ velocity map, within the uncertainty we also found no evidence for deviations from pure disk rotation. \subsubsection{Current stellar mass} An estimate of the present-day stellar mass of the galaxies was obtained from the {\tt STARLIGHT} fits of the total galaxy spectra. The fits are shown in Fig.~\ref{f:totfit} and the masses are given in Table\,\ref{t:popfrac}. All the galaxies have masses exceeding $2\times10^{10}\,M_{\sun}$ and can be classified as quite massive. The fairly high metallicities that we derived for both the ionized gas and the stellar component are therefore in line with the expectation from the mass-metallicity relation, e.g. \cite{2004ApJ...613..898T}.
We note that the values that we obtain are very close to those of \cite{2009ApJ...707.1449N}, which were obtained by a different methodology (see Sec.~\ref{sec:ha}). \onlfig{15}{ \begin{figure*}[t] \centering \includegraphics [width=18cm]{f13a} \caption{From left to right for each galaxy we show the color SDSS image, the H$\alpha$ velocity map, the stellar velocity map, and the difference between them. The $x,y$ coordinates are in arcsec with respect to the map centers. The orientation of the images is north -- up, east -- left.} \label{f:velmaps} \end{figure*} } \section{Discussion} \subsection{Galaxy mass and metallicity} We used IFU spectroscopy to derive the spatially resolved properties of six face-on spiral galaxies that hosted seven nearby SNe~Ia. The masses of the galaxies derived from the analysis of the total spectra with the {\tt STARLIGHT} code are all higher than $2\times10^{10}\,M_{\sun}$. Recently, \citet{2010ApJ...715..743K}, \citet{2010MNRAS.406..782S}, and \cite{2010ApJ...722..566L} have claimed that the residuals from the best-fit Hubble line correlate with the SN host stellar mass. Furthermore, \citet{2010MNRAS.406..782S} proposed to incorporate into the cosmological SN~Ia analyses two different absolute peak magnitudes for SNe in hosts with masses lower or higher than $10^{10}\,M_{\sun}$; after the "lightcurve width -- luminosity" and color corrections the SNe in the more massive hosts are found to be $\sim0.06-0.09$ mag brighter than their counterparts in lower mass hosts. The galaxies in our sample fall into the high-mass/low-specific SFR bins defined by \citet{2010MNRAS.406..782S}. Accordingly, one can expect the SNe in these galaxies to have on average negative Hubble residuals. From Table~\ref{t:snprop} one can see that four of the SNe have significant positive residuals ($>2\sigma$). The other three have negative residuals, but only one of them is larger than its uncertainty. The mean weighted residual is positive, $+0.07\pm0.22$; however, one should keep in mind that we used only very few SNe in our analysis. The cause of the apparent dependence of the SN~Ia luminosity on the host galaxy stellar mass is still unclear. Theoretical investigations have shown that various parameters of the exploding WD, such as its metallicity, C/O ratio, central density, and progenitor age can affect the amount of $^{56}$Ni synthesized in the explosion to different degrees and hence the SN luminosity \citep[see, e.g.,][ and references therein]{2003ApJ...590L..83T,2006A&A...453..203R,2009ApJ...691..661H,2010ApJ...711L..66B}. Among these parameters, the metallicity is known to correlate with the galaxy mass \citep[see, e.g., ][]{2004ApJ...613..898T} and is likely to have the strongest impact. Our analysis of the emission line fluxes and the stellar populations revealed that the galaxies in our sample have on average solar and higher metallicity (Tables~\ref{t:oh} and \ref{t:stm}). This is not surprising because the galaxies are quite massive and by virtue of the mass-metallicity relation \citep[see, e.g., ][]{2004ApJ...613..898T} may be expected to have high metallicities. For five of the SNe, the ISM metallicity measured at the location of the SN is higher than the galaxy average by $\sim0.1$ dex (Table~\ref{t:oh}). This can be explained by the presence of radial metallicity gradients and our target selection criteria. Figures~\ref{f:rad_met} and \ref{f:popaverad} show that the galaxies in our sample have radial metallicity gradients.
At the same time, the selection criterion that the SNe are located at a high surface brightness position in the galaxies led to an SN sample that is biased toward SNe close to the galaxy nuclei. Together with the presence of the metallicity gradients, this resulted in most of the SNe being at locations with higher-than-average metallicity within the galaxies \citep[see also][]{2005PASP..117..227K}. \begin{figure}[!t] \centering \includegraphics*[width=8cm]{met_gas_stars1} \caption{Mass-weighted stellar metallicity {\it vs.} gas-phase oxygen abundance estimated from the azimuthally averaged spectra.} \label{f:met2} \end{figure} While the gas-phase metallicity is easier to estimate, a more relevant quantity is the stellar metallicity. Figure\,\ref{f:met2} shows the mass-weighted stellar metallicity vs. the gas-phase oxygen abundance estimated from the azimuthally averaged spectra. The two quantities appear to be correlated. The slope of the linear fit is $\sim1.8$ with a dispersion of $\sim0.1$ dex. Note that \cite{2005MNRAS.358..363C} also found that the gas-phase and the stellar metallicities are correlated from an analysis of a large sample of SDSS galaxies. However, the two relations are difficult to compare because the \cite{2005MNRAS.358..363C} analysis also included low-metallicity galaxies and galaxies in a somewhat higher redshift interval. \subsection{The impact of the metallicity gradient} The presence of abundance gradients in both spiral and elliptical galaxies is now a well-established fact \citep[e.g.,][]{1994ApJ...420...87Z,1999PASP..111..919H}. If not taken into account, the gradients will affect any attempt to study the properties of SNe~Ia and/or their progenitors as a function of their host galaxy metallicity. The values of the gradients seen in the galaxies in our sample suggest that SN~Ia progenitors that form at radial distances greater than $\sim15$ kpc may have metallicities that are lower by a factor of at least 2--3 than progenitors in the central parts. Studies of the radial distribution of SNe~Ia within their host galaxies have shown that more SNe explode in the central regions \citep[e.g.,][]{2000ApJ...542..588I,1997ApJ...483L..29W,1997AJ....113..197V,2007HiA....14..316B}. However, SNe~Ia are also found at large galactocentric distances in both spiral and elliptical galaxies. In Fig.~\ref{f:rad_stat} we show the distribution of the projected galactocentric distances (PGD) for a sample of 305 SNe with modern CCD observations (observed after 1990) and with known host galaxy type, redshift, and offset from the center. About 7\% of the SNe in spiral galaxies and 20\% in the ellipticals are found at PGD$>20$ kpc. Since the real galactocentric distances are always greater than, or equal to, the PGD, the above-mentioned fractions are lower limits. Therefore, a significant fraction of SNe may have progenitors with a metallicity that is much lower than the host average. An important question is whether the present-day galaxy metallicity is a good proxy of the metallicity of SN~Ia progenitors. This was recently studied by \cite{2011MNRAS.414.1592B}, who used simplified one-zone galaxy evolution models coupled with the SN delay-time distribution (DTD) functions of \cite{pritchet08s} and \cite{2010ApJ...722.1879M}. The authors concluded that the galaxy ISM metallicity is a good proxy for the SN progenitor metallicity and derived simple linear relations to estimate the progenitor metallicity from the present-day host metallicity.
However, \cite{2011MNRAS.414.1592B} did not include a metallicity gradient and, more importantly, its possible evolution with time. There is growing evidence that the disks in late-type galaxies formed and evolved slowly under the constant inflow of metal-poor gas from the galactic halo. The galaxy chemical evolution models and hydrodynamical simulations have shown that the metallicity gradient evolves considerably during the last 10 Gyr of the galaxy evolution \citep{2012arXiv1201.6359P,2009ApJ...696..668F,2006MNRAS.366..899N,2009MNRAS.398..591S, 2005MNRAS.358..521M,2001ApJ...554.1044C,1997ApJ...475..519M}. Although the exact results depend on the particular code and model used \citep{2012arXiv1201.6359P}, all studies but one \citep{2001ApJ...554.1044C} show that the metallicity gradient was steeper in the past and gradually flattens out to reach present-day values similar to those observed in local spiral galaxies. Recently, there has also been observational support for this conclusion. \cite{2011ApJ...732L..14Y} and \cite{2010ApJ...725L.176J} reported metallicity gradients of $-$0.16\,dex\,kpc$^{-1}$ and $-$0.27\,dex\,kpc$^{-1}$ for galaxies at redshifts $z=1.5$ and $z=2.0$, respectively. We note that the galaxy chemical evolution studies show that the mean disk metallicity has increased slowly by $\sim0.3-0.5$ dex during the last several Gyr. The gradients seen in the galaxies in our sample and in other galaxies at low and high redshift imply that the metallicity differences within the same galaxy may exceed the cosmological increase of the mean metallicity. In addition, some studies have pointed out that the metallicity gradient in the outermost parts of the galaxies may be steeper than in the inner disk \citep[see, e.g.,][]{2009ApJ...696..668F}. The above studies highlight the complexity of estimating the metallicity of the SN~Ia progenitors from their host galaxy present-day metallicity. The difference between the present host metallicity and the SN progenitor metallicity is a complex function of several factors, some of which are poorly understood and not very well constrained with observations: the radial distance at which the progenitor formed, the age of the progenitor, and the evolution of the metallicity gradient. For example, a progenitor that formed at a large radial distance will show an increasingly large difference from the present-day metallicity at the same radius as it ages. Another uncertainty can be added if the galaxies have experienced major mergers and radial star migrations, which tend to flatten the metallicity gradient \citep[see, e.g.,][]{2010ApJ...721L..48K,2009MNRAS.398..591S}. This complexity may be the reason why the attempts to correlate the Hubble residual with the host global metallicity have not led to conclusive results \citep{2008ApJ...685..752G,2009ApJ...691..661H,2005ApJ...634..210G,2011arXiv1110.5517D}. Note however that \cite{2009ApJ...691..661H} did not directly measure the metallicity but rather estimated it from a mass-metallicity relation. All SNe in our sample but one are within 5 kpc from the galaxy centers. Generally, the chemical evolution models show that the metallicity close to the galaxy nuclei changes least. Therefore, the metallicity of the SN progenitors that formed near the center should be closer to the present-day galaxy metallicity compared to the progenitors that formed in the outer parts.
Together with the fact that we measured fairly high present-day metallicity at the locations of all SNe, this suggests that their progenitors did not form in metal-poor environments, unless they came from a very old stellar population with a long delay time. \begin{figure}[!t] \centering \includegraphics*[width=8cm]{rad_stat} \caption{Distribution of the \emph{observed} galactocentric distance for a sample of 305 nearby SNe~Ia in late- and early-type hosts. Because the distances have not been de-projected these are the \emph{minimum} galactocentric distances. } \label{f:rad_stat} \end{figure} \subsection{Star formation history} Much of the recent progress on the question of SN~Ia progenitors has been achieved through studies of the SN rates. It is now well-established that the SN~Ia rate depends on both the total stellar mass and the recent SFR in the host galaxy \citep[e.g.,][]{2005A&A...433..807M,2005ApJ...629L..85S,2006ApJ...648..868S,2010ApJ...722.1879M,2010AJ....140..804B}, which led to a two-component model for the SN~Ia rate, the so-called A+B model. Along with the fact that SNe~Ia are also observed in old, passive galaxies, this points to the existence of at least two evolution channels for SNe~Ia associated with young and old stellar populations. Except for \object{NGC~495}, all other galaxies in our sample contain a considerable fraction of young stars and show strong H$\alpha$ emission, indicating ongoing star formation activity. The {\tt STARLIGHT} fits of the total galaxy spectra are shown in Fig.~\ref{f:totfit}. Except for \object{NGC~495}, all other galaxies show a similar pattern, namely, the population vectors $x_j$ show the largest contribution from SSPs with ages 0.5-5 Gyr. Young populations, $\sim$50 Myr, are also confidently detected in all cases. From Figs.~\ref{f:popfrac} and \ref{f:popaverad} it can be seen that the fraction of young stars increases with increasing radial distance. It is known that the full-spectrum fitting techniques tend to estimate suspiciously large components with ages $\sim1$ Gyr \citep[see, e.g.,][]{2007MNRAS.381..263A,2006MNRAS.365..385M,2007MNRAS.378.1550P} and one may ask whether the large contribution of SSPs of similar ages that we see in our analysis is real. \cite{2007MNRAS.381..263A} and \cite{2009RMxAC..35..127C} report that the problem disappeared once they switched from the original \cite{2003MNRAS.344.1000B} fitting basis based on STELIB to a new basis that uses the MILES spectral library. Because we also used the newer Bruzual \& Charlot basis based on MILES, our results are also likely unaffected by the above-mentioned problem. We estimated the current SFR from the H$\alpha$ emission line flux (Table~\ref{t:sfr}). Because most of the ionizing photons are produced by massive, short-lived stars, the H$\alpha$ flux is a tracer of the very recent star formation, $\leq 20$\,Myr. Another estimate of the SFR can be obtained from the {\tt STARLIGHT} fits following the methodology described in \cite{2007MNRAS.381..263A}. We estimated the mean SFRs during the last 0.5 Gyr and last 50 Myr. The values are given in Table~\ref{t:sfr}. The mean SFRs over the last 50 Myr are very similar to the estimates obtained from the H$\alpha$ flux; \cite{2007MNRAS.381..263A} have already demonstrated that there is a tight correlation between these two estimates using a large sample of SDSS galaxies.
On the other hand, the mean SFRs during the last 0.5 Gyr are a factor of 3--5 higher than the SFR estimates from the H$\alpha$ flux, but are similar to those of \cite{2009ApJ...707.1449N}. The only exception is \object{UGC 4008 NED01}, for which we obtain a much higher value; note, however, that the confidence interval quoted by \cite{2009ApJ...707.1449N} has an upper limit an order of magnitude higher than our estimate. Note also that the model SEDs that \cite{2009ApJ...707.1449N} fitted to the broad-band photometry are based on eight galaxy models with pre-defined SFHs, which were meant to represent the Hubble galaxy types plus one star-burst galaxy model. {\tt STARLIGHT} does not assume any pre-defined SFH and the contributions of all SSPs are free parameters. Therefore, {\tt STARLIGHT} is much more flexible in describing galaxies with arbitrary SFHs. If the two components of the A+B model represent the contribution of two different channels to produce SNe~Ia, we can estimate the probability that the SNe in our galaxies were produced through each channel. With the A and B constants estimated by \cite{2006ApJ...648..868S} and our measurements of the galaxies' total mass and SFR, the SNe in the five star-forming galaxies have roughly equal chances of having come from the young or the old channel. \subsection{Correlation between the SN and host galaxy properties} Despite the low statistics of our sample, we can still test for correlations of the SALT2 $x_1$ parameter and the Hubble residuals of the SNe with the various parameters that we derived for the total galaxy and at the SN locations (Tables~\ref{t:sfr}-\ref{t:popfrac}). We found no statistically significant correlations. The small number of objects is certainly not ideal for this analysis, but the metallicities and the masses of the galaxies in our sample also span quite small ranges. To search for correlations it is necessary to expand the sample toward lower masses and metallicities. This may not be an easy task because SNe~Ia in metal-poor galaxies in the local Universe are rare. Besides, the low-mass, low-metallicity galaxies tend to be faint and are more difficult to observe with a sufficient S/N. In addition to the above concerns, when correlating the SN properties with the properties of their host galaxies at the location of the SN one should always bear in mind that the SN progenitors may be very old stars \citep[e.g.,][]{2011MNRAS.412.1508M}. The progenitor system may have migrated from its birth place and the galaxy properties at its present location may be different from those where the progenitor formed. \cite{2011MNRAS.412.1508M} argued that random stellar motions will affect the SN progenitor and its surroundings in the same way. As a result, the population that gave birth to the SN progenitor will also be present in the new SN location. Following the same argument, the radial star migrations should not have a significant effect either. Additionally, \cite{2009MNRAS.398..591S} found that within the central 5-10 kpc the radial star migration during the whole galaxy evolution is fairly small, $\sim1.5$ kpc, and increases to only $\sim3.5$ kpc in the outer parts. Another problem is related to the projection effects. Because the SN~Ia progenitors have a broad range of ages between $\sim100$~Myr and 10~Gyr, SNe can explode anywhere along the line of sight. An SN produced by a young progenitor would have most likely exploded in the galactic disk, where the newly formed stars and the ionized gas typically reside.
In this case the progenitor star's metallicity should be close to that of the ionized gas at the (projected) location of the SN. In the case of old progenitors, however, the SN may have exploded in the galactic halo and the metallicity of its progenitor may be very different from the gas. Similarly, the stellar continuum at a given spaxel is the sum of all starlight along the line of sight. All these effects indicate that the correlation of the SN properties with the properties of their local environment is not unambiguous. Clearly, to study the correlation between the SN properties and the local environment, a large, unbiased sample of galaxies observed with a large-field IFU spectrograph is needed. Our sample of seven SNe~Ia in six galaxies is not large enough to draw any conclusion. The ongoing CALIFA\footnote{\url{http://www.caha.es/CALIFA/public_html/}} survey \citep{2011arXiv1111.0962S} will provide IFU observations of about 600 galaxies at redshift $z\sim0.02$. CALIFA uses the same instrument as the observations presented in this paper with similar setup and exposure times, and will provide data of similar quality to ours. The CALIFA targets are selected from a larger pool of about 1000 galaxies based only on the visibility of the targets at the time of the observations. Many of these galaxies are known to have hosted SNe. In addition, there are several ongoing large-field SN searches that will potentially discover many new SNe in CALIFA-targeted galaxies. Thus, the CALIFA survey will provide a solid base to further expand the studies of SN properties as a function of the local environment to all SN types. In addition, CALIFA will provide the full galaxy spectra that will allow one to avoid the aperture effects to which the SDSS spectroscopy is subject. At low redshift the 3\arcsec-diameter fibers of the SDSS spectrograph cover only the galaxy nucleus, whose properties may be very different from those of the disk and may not be representative of the environment of the SNe that exploded elsewhere in the galaxy, for example because of the radial gradients of these properties. \section{Conclusions} In this pilot study we have obtained and analyzed IFU spectroscopy of six nearby spiral galaxies that hosted seven SNe~Ia. For the data reduction we developed and tested a robust reduction pipeline. A set of tools that implement various methods to derive the properties of the ionized gas and the stellar populations from the data-cube was also developed. This allowed us to generate 2D maps of the galaxy properties. The analysis of the maps showed that the quality of the data is sufficient to accurately derive the properties of the ionized gas even in the outer low surface brightness parts of the galaxies. However, the parameters of the stellar populations are determined with much larger uncertainties. We showed that analysis of azimuthally averaged spectra at several de-projected galactocentric radii instead of the 2D maps provides a more robust way to derive the radial dependencies of the stellar population properties in galaxies with well-defined axial symmetry. The main results of our study can be summarized as follows: \begin{itemize} \item the six galaxies are quite massive with masses exceeding $2\times10^{10}\,M_{\sun}$; \item the ionized gas and the stellar populations both indicate metallicities above the solar value; \item five of the galaxies are currently forming stars at a rate of 1--5 $M_{\sun}$\,yr$^{-1}$, which is typical for spiral galaxies at $z\simeq0$.
The sixth galaxy shows no signs of star formation; \item the five star-forming galaxies have a mean mass-weighted stellar age of $\sim5$ Gyr, and the passive one of $\sim12$ Gyr; \item four of the five star-forming galaxies show radial gradients of their ionized gas metallicity in the range from $-$0.022 to $-$0.058\,dex\,kpc$^{-1}$. These values are typical for other spiral galaxies in the local universe. The fifth galaxy has a nearly uniformly distributed metallicity with a hint of a very small positive gradient of $+0.007$ dex\,kpc$^{-1}$; \item the radial dependence of the stellar population properties can be more robustly derived if azimuthally averaged spectra at several de-projected galactocentric radii are analyzed. By this analysis we found indications of small negative radial metallicity gradients of the stellar populations in some galaxies, of up to $-$0.03\,dex\,kpc$^{-1}$. Given the large uncertainties with which the stellar metallicity is estimated, the significance of these gradients is difficult to assess; \item in the five star-forming galaxies the fraction of young stellar populations increases out to 4-8 kpc and shows signs of a subsequent decrease. In four of them the fraction of old stars monotonically decreases in the disk and one galaxy shows a more complex behavior; \item the passive galaxy has mostly old stars, with possibly a small fraction of younger stars in the central few kpc; \item the kinematic analysis indicates that the galaxies are relaxed systems that most likely have not experienced a recent major merger; \item most of the SNe in our sample are projected on regions with metallicity and star formation rates above the galaxy average, likely as a result of our target selection criteria and the radial metallicity gradients; \item the BPT diagnostic diagram revealed that two of the galaxies host AGNs. Another galaxy is on the border between the AGNs and the star-forming galaxies. Our analysis shows that the AGNs are not strong enough to affect the quantities derived from the total galaxy spectra. Such AGNs may not be recognized in studies of host galaxies of SNe~Ia at high redshift; \item the correlation of the SALT2 $x_1$ parameter of the SNe and the Hubble residuals (HRs) with the various parameters of the host galaxies did not lead to conclusive results. The low statistics and the small ranges spanned by the galaxy parameters render such an attempt still premature. We also note that the HRs of our SNe are on average positive, although their host galaxies have masses in the range where the other studies have shown negative HRs. \end{itemize} In conclusion, we have demonstrated the viability of studying the host galaxies of SNe~Ia at low redshift using wide-field IFU spectroscopy at 4m-class telescopes. Intermediate-resolution spectra with sufficient S/N can be obtained out to the outer low-surface brightness parts of the galaxies with a reasonably long exposure time of $\sim1.5$ hours. Compared to integrated spectroscopy and analysis of multi-color broad-band imaging, the IFU spectroscopy provides much more detailed information about the properties of the galaxies, e.g. metallicity and age gradients, detailed star formation histories, etc. In principle, the S/N of our data is sufficient to perform correlation analyses between the SN properties and the properties of the host galaxies at the location of the SN. However, our current sample is too small and suffers from strong selection biases to provide robust correlation results.
The ongoing CALIFA survey may soon provide IFU spectroscopy of a larger sample of SNe~Ia host galaxies, which will be a solid basis for further exploring this path to study SN~Ia progenitors and to improve SNe~Ia as distance indicators. We have tested the methodology and developed semi-automated tools that will allow us to expand our analysis once the CALIFA data become available. \begin{acknowledgements} V.S. acknowledges financial support from Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT) under program Ci\^{e}ncia 2008. This work was partly funded by FCT with the research grant PTDC/CTE-AST/112582/2009 and a Ph.D. scholarship SFRH/BD/28082/2006, and under the Marie Curie Actions of the European Commission (FP7-COFUND). This work has made use of the NASA/IPAC Extragalactic Database (NED), NASA's Astrophysics Data System, and data products from SDSS and SDSS-II surveys. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is \url{http://www.sdss.org/}. \end{acknowledgements}
\section{Introduction} \subsection{History.} An operator $A$ on a Hilbert space $H$ is \textit{hyponormal} if $A^{\ast}A-AA^{\ast} \geq 0$, or equivalently if $\|A^{\ast}f\| \leq \|Af\|$ for all $f \in H$. An operator $A$ is \textit{quasinormal} if $A$ and $A^{\ast}A$ commute. It is known that every quasinormal operator is hyponormal (see \cite[Proposition 1.7 p. 29]{c4} and \cite[Proposition 4.2, p. 46]{c4}), and certainly every normal operator is both quasinormal and hyponormal. Normality and similar weaker conditions have been intensely studied in recent years for weighted composition operators on $H^2$ and related spaces. The normal and unitary weighted composition operators on $H^{2}$ were characterized in part by Bourdon and Narayan \cite{Bourdon}. They showed that every automorphism $\varphi$ of $\mathbb{D}$ has a companion weight function $\psi$ such that \W\ is unitary on $H^{2}$. They also found some other normal weighted composition operators on $H^{2}$. Le \cite{l} obtained similar results in several variables. Recently in \cite{coko}, Cowen, Jung, and Ko investigated when \Ws\ is hyponormal, and Jung, Kim, and Ko \cite{jkk}, while studying binormal composition operators, showed that \C\ on $H^2$ is quasinormal if and only if it is normal. We are focused on discovering hyponormal and quasinormal weighted composition operators on $H^{2}$ and $A_{\alpha}^{2}$ with linear fractional compositional symbol. The rest of Section 1 will give the necessary preliminaries, including a computation of the spectrum of $C_{\ph}$ on $A^2_{\alpha}$ when \ph\ is a parabolic non-automorphism. In Section 2, we show that in almost all cases, if $\ph \in \mbox{LFT}(\D)$ and $\psi \in \Hi$ is continuous at the Denjoy-Wolff point of $\ph$, then \W\ is quasinormal if and only if it is normal (the only possible exception is when \ph\ is an automorphism but \W\ is not invertible). We also show that \C\ with $\ph \in \mbox{LFT}(\D)$ is quasinormal if and only if it is normal and $\ph(z) = \lambda z$ where $|\lambda|\leq 1$ (this extends the theorem of \cite{jkk} to $A^2_{\alpha}$). In Section 3, we turn our attention to hyponormal weighted composition operators. In particular, we give examples of new hyponormal weighted composition operators on \Ht\ which are not quasinormal, and eliminate some other possibilities. \subsection{Preliminaries.} The Hilbert spaces we are considering are the classical Hardy space $H^{2}$ and the weighted Bergman spaces $A_{\alpha}^{2}$ on the complex unit disk $\D$. For $f = \sum_{n=0}^{\infty} a_n z^{n}$ analytic on $\mathbb{D}$, $H^2$ is the set $$\{ f: \|f\|^{2}=\sum_{n=0}^{\infty}|a_n|^{2}<\infty \}.$$ For $\alpha > -1$, the weighted Bergman space $A_{\alpha}^{2}$ consists of all analytic functions $f$ on $\mathbb{D}$ such that $$\|f\|_{\alpha+2}^{2}=\int_{\mathbb{D}}|f(z)|^{2}(\alpha+1)(1-|z|^{2})^{\alpha}dA(z)<\infty,$$ where $dA$ is the normalized area measure on $\mathbb{D}$. The case when $\alpha=0$ is the (unweighted) Bergman space, denoted $A^{2}$. Both the weighted Bergman space and the Hardy space are reproducing kernel Hilbert spaces, where the reproducing kernel for evaluation at $w$ is given by $K_{w}(z)=(1-\overline{w}z)^{-\gamma}$ for $z,w \in \mathbb{D}$, with $\gamma=1$ for $H^{2}$ and $\gamma=\alpha+2$ for $A_{\alpha}^{2}$. We write $H^{\infty}$ for the space of bounded analytic functions on $\mathbb{D}$, and denote its norm by $\|\cdot\|_{\infty}$, i.e.
$$\|f\|_{\infty}:=\sup_{z\in \mathbb{D}}|f(z)|.$$ A \textit{composition operator} $C_{\varphi}$ on $H^2$ or $A^2_{\alpha}$ is defined by the rule $C_{\ph}(f)=f \circ \ph$, where $\ph: \D \rightarrow \D$ is analytic. Moreover, for $\psi \in \Hi$ and an analytic self-map \ph\ of $\mathbb{D}$, we define the weighted composition operator \W\ by $\W f=\psi \cdot (f \circ \varphi)$. Such weighted composition operators are clearly bounded on $H^{2}$ and $A_{\alpha}^{2}$. A linear fractional self-map of $\mathbb{D}$ is a map of the form $\varphi(z)=(az+b)/(cz+d)$ with $ad-bc \neq 0$, for which $\varphi(\mathbb{D}) \subseteq \mathbb{D}$. We denote the set of those maps by $\mbox{LFT}(\mathbb{D})$. If $\varphi$, not the identity map, is considered as a map of the extended complex plane $\mathbb{C} \cup \{\infty\}$ onto itself, it will have exactly two fixed points, counting multiplicities. The automorphisms of $\mathbb{D}$, denoted $\mbox{Aut}(\mathbb{D})$, are the maps in $\mbox{LFT}(\mathbb{D})$ that take $\mathbb{D}$ onto itself. They are necessarily of the form $\varphi(z)=\lambda(a-z)/(1-\overline{a}z)$, where $|\lambda|=1$ and $|a| < 1$ (see \cite{c3}). The automorphisms are divided into three subclasses, based on their fixed point behavior: \begin{enumerate} \item \textit{elliptic} if \ph\ has an interior fixed point, \item \textit{hyperbolic} if \ph\ has a boundary fixed point $\zeta$ with $\ph'(\zeta) < 1$, and \item \textit{parabolic} if \ph\ has a boundary fixed point $\zeta$ with $\ph'(\zeta) = 1$. \end{enumerate} When describing the derivative at the boundary, we mean this in the sense of radial limits, since \ph\ need only be defined on \D. However, since the automorphisms (and all linear fractional maps) extend to maps of $\mathbb{C} \cup \{ \infty \}$, there is no confusion here. Linear fractional transformations are particularly interesting choices for the symbol of a composition operator because of their connection to the reproducing kernels. This is exemplified in the Cowen adjoint formula. Though it is well known, we re-state it here due to its repeated use. \begin{proposition}[Cowen adjoint formula]\label{cowen} Suppose $\ph= (az+b)/(cz+d)$ maps \D\ into \D. Then the adjoint of $C_{\varphi}$ acting on $H^{2}$ and $A^{2}_{\alpha}$ is given by $$C_{\varphi}^{\ast}=T_{g}C_{\sigma}T_{h}^{\ast},$$ where \begin{enumerate} \item $\sigma(z):=({\overline{a}z-\overline{c}})/({-\overline{b}z+\overline{d}})$ is a self-map of $\mathbb{D}$, \item $g(z):=(-\overline{b}z+\overline{d})^{-\gamma}$ and $h(z):=(cz+d)^{\gamma}$, with $\gamma=1$ for $H^{2}$ and $\gamma=\alpha+2$ for $A^{2}_{\alpha}$, and \item $g$ and $h$ belong to $H^{\infty}$. \end{enumerate} The map $\sigma$ is called the \textit{Krein adjoint} of \ph. We will refer to $g$ and $h$ as the \textit{Cowen auxiliary functions} for \ph. \end{proposition} A worked example is given at the end of this subsection. Composition operators on these spaces are also deeply intertwined with the function theory of \D, particularly the Denjoy-Wolff theorem: \begin{proposition}[Denjoy-Wolff] Let \ph\ be an analytic map from the open unit disk into itself which is not the identity map and not an elliptic automorphism. Then \ph\ has a unique point $\zeta$ in $\overline{\D}$ so that the iterates $\{ \ph_n \}$ of $\ph$ converge uniformly on compact subsets of \D\ to $\zeta$. \end{proposition} Another consequence of the Denjoy-Wolff theorem is that this unique point $\zeta$ always has $\ph'(\zeta) \leq 1$. For more on geometric function theory of the disk, see \cite{co2}.
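To illustrate the Cowen adjoint formula with a concrete (purely illustrative) example, take $\varphi(z)=z/(2-z)$, so that $a=1$, $b=0$, $c=-1$, and $d=2$. Then $$\sigma(z)=\frac{z+1}{2}, \qquad g(z)=2^{-\gamma}, \qquad h(z)=(2-z)^{\gamma},$$ so that $C_{\varphi}^{\ast}=2^{-\gamma}\,C_{\sigma}T_{h}^{\ast}$, with $\gamma=1$ on $H^{2}$ and $\gamma=\alpha+2$ on $A^{2}_{\alpha}$.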
\subsection{Parabolic non-automorphisms and a spectral interlude.} We will quickly see in Section 2 that parabolic non-automorphic maps are central to this paper, so considerable time is spent on them in this section, including finding the spectrum of composition operators with such symbols on $A^2_{\alpha}$, a fact we require later. A map $\varphi \in \mbox{LFT}(\mathbb{D})$ is called parabolic if it has a fixed point $\zeta \in \partial \mathbb{D}$ of multiplicity $2$. The map $\tau(z):=(1+\overline{\zeta}z)/(1-\overline{\zeta}z)$ takes the unit disk onto the right half-plane $\Pi$ and sends $\zeta$ to $\infty$. Therefore, $\phi:=\tau\circ\varphi\circ\tau^{-1}$ is a self-map of $\Pi$ which fixes only $\infty$, and so must be translation by some complex number $t$, where necessarily $\operatorname{Re} t \geq 0$. Appropriately, we will call this the \textit{translation number $t$ of \ph}. Hence $\varphi(z)=\tau^{-1}(\tau(z)+t)$ for each $z \in \mathbb{D}$. Therefore, it is easy to see that \begin{eqnarray*} \ph(z)=\frac{(2-t)z+t\zeta}{2+t-t\overline{\zeta}z}. \end{eqnarray*} Note that if $\operatorname{Re} t=0$, then $\varphi \in \mbox{Aut}(\mathbb{D})$. If, on the other hand, $\operatorname{Re} t > 0$, then $\varphi \not\in \mbox{Aut}(\mathbb{D})$. When the translation number $t$ is strictly positive, we call $\varphi$ a \textit{positive parabolic non-automorphism}. Among the linear fractional self-maps of $\mathbb{D}$ fixing $\zeta \in \partial \mathbb{D}$, the parabolic ones are characterized by $\varphi'(\zeta)=1$.\\ Recall that the Denjoy-Wolff theorem only guarantees uniform convergence under iteration on \textit{compact subsets} of \D. In the following proposition, we see that there are some linear fractional self-maps of $\mathbb{D}$ whose iterates $\{ \varphi_{n} \}$ converge uniformly on \textit{all} of $\mathbb{D}$. We will use the following result, which was proved in \cite[Example 5]{derek}, in the proof of Theorem \ref{qnn}. \begin{proposition}\label{uci} If \ph\ is a parabolic non-automorphism, then the iterates of \ph\ converge uniformly on all of \D\ to the constant function equal to its Denjoy-Wolff point. \end{proposition} To our knowledge, no one has found the spectrum $\sigma(C_{\ph})$ on $A^2_{\alpha}$ when \ph\ is a parabolic non-automorphism, though Cowen and MacCluer \cite[Section 7.7]{cm1} proved that $\sigma(C_{\ph})$ is a logarithmic spiral from 1 to 0 when $C_{\ph}$ acts on \Ht. Here, we prove the same fact for $A^2_{\alpha}$ by a small adjustment of the proofs found in \cite{cm1}. \begin{proposition}\label{spectrum} Let \ph\ be a parabolic non-automorphism with Denjoy-Wolff point $\zeta$ and translation number $t$. On $H^2$ or $A^2_{\alpha}$, $$\sigma(C_{\ph}) = \{ e^{-\beta t} : 0 \leq \beta < \infty \} \cup \{ 0 \} $$ and every point of $\sigma(C_{\ph})$ except $0$ is an eigenvalue. \end{proposition} \begin{proof} We need only prove this fact for $A^2_{\alpha}$. Without loss of generality, assume $\zeta = 1$ (otherwise, choose $\theta$ so that $e^{i\theta}=\zeta$ and consider the operator $C_{e^{i\theta}}C_{\varphi}C_{e^{-i\theta}}=C_{\tilde{\varphi}}$ whose symbol has fixed point $1$). In \cite[Theorem 6]{hu}, Cowen and MacCluer proved $\sigma(C_{\ph}) = \{ e^{-\beta t} : 0 \leq \beta < \infty \} \cup \{ 0 \}$ for $H^2$. The only place they used the fact that they were working on $H^2$ was that the spectral radius of $C_{\ph}$ on $H^2$ is 1.
By \cite[Theorem 6]{hu}, since $\ph'(\zeta) = 1$, the spectral radius of \C\ is also $1$ on $A^2_{\alpha}$, so the same containment holds on $A^2_{\alpha}$. In \cite[Corollary 7.42]{cm1}, the reverse containment was given by showing that each such $e^{-\beta t}$ is an eigenvalue with an eigenvector in \Hi, meaning that the eigenvectors belong to $A^2_{\alpha}$ as well as \Ht. Therefore, we have the desired conclusion. \end{proof} \section{Quasinormal weighted composition operators on $H^{2}$ and $A^{2}_{\alpha}$} In this section, we investigate quasinormal composition operators and weighted composition operators on $H^{2}$ and $A^{2}_{\alpha}$. We will look at the non-automorphic and automorphic cases separately. \\ \subsection{$\ph \notin \mbox{Aut}(\mathbb{D})$.} The next proposition explains why we have focused on the properties of parabolic non-automorphisms. \begin{proposition}\label{parabolic} Suppose that $\varphi \in \mbox{LFT}(\mathbb{D})$ is not an automorphism of $\mathbb{D}$ and that $\varphi(\zeta)=\eta$ for some $\zeta,\eta \in \partial\mathbb{D}$. Let $\psi \in H^{\infty}$ be continuous at $\zeta$ with $\psi(\zeta)\neq 0$. If \W\ is quasinormal on $H^{2}$ or $A^{2}_{\alpha}$, then $\varphi$ is parabolic.\end{proposition} \begin{proof} In this proof, we will let $A \equiv B$ denote that two operators $A$ and $B$ have compact difference. Let \W\ be quasinormal on $H^{2}$ or $A^{2}_{\alpha}$. By \cite[Proposition 2.3]{fash} and \cite[Corollary 2.2]{kmm}, we obtain $\W \Ws \W \equiv \psi^{2}(\zeta)\overline{\psi(\zeta)}C_{\varphi}C_{\varphi}^{\ast}C_{\varphi}$ and $W^{\ast}_{\psi,\varphi}W_{\psi,\varphi}W_{\psi,\varphi}\equiv\psi^{2}(\zeta)\overline{\psi(\zeta)} C^{\ast}_{\varphi}C_{\varphi}C_{\varphi}$. Now using \cite[Theorem 3.1]{kmm} and \cite[Theorem 3.2]{mw2}, we see that \begin{eqnarray*} \W \Ws \W &\equiv& s \psi^{2}(\zeta)\overline{\psi(\zeta)}C_{\varphi}C_{\sigma}C_{\varphi}\\ &=&s\psi^{2}(\zeta)\overline{\psi(\zeta)}C_{\varphi\circ\sigma\circ\varphi}, \end{eqnarray*} and also by the same argument, we conclude that \begin{eqnarray*} \Ws \W \W &\equiv&s\psi^{2}(\zeta)\overline{\psi(\zeta)}C_{\sigma}C_{\varphi}C_{\varphi}\\ &=&s\psi^{2}(\zeta)\overline{\psi(\zeta)}C_{\varphi\circ\varphi\circ\sigma}, \end{eqnarray*} where $\sigma$ is the Krein adjoint of $\varphi$ and, by \cite[Proposition 3.6]{kmm} and \cite[Theorem 3.2]{mw2}, $s=|\varphi'(\zeta)|^{-1}$ for $H^{2}$ and $s=|\varphi'(\zeta)|^{-(\alpha+2)}$ for $A^{2}_{\alpha}$. It is not hard to see that $C_{\varphi\circ\sigma\circ\varphi}$ is not compact (see \cite[Corollary 3.14]{cm1}). Let $\tilde{\varphi}:=\varphi\circ\varphi\circ\sigma$. If $\zeta \neq \eta$, then $\overline{\tilde{\varphi}(\mathbb{D})} \subseteq \mathbb{D}$, so by \cite[p. 129]{cm1}, $C_{\tilde{\varphi}}$ is compact. In that case $\W\Ws\W$ and $\Ws\W\W$ differ, modulo compact operators, by a nonzero multiple of the non-compact operator $C_{\varphi\circ\sigma\circ\varphi}$; hence, if $\zeta \neq \eta$, then \W\ is not quasinormal. Now assume that $\zeta =\eta$. Since \W\ is quasinormal, \cite[Theorem 5.13]{km} shows that $\varphi\circ\sigma\circ\varphi=\varphi\circ\varphi\circ\sigma$. Since $\varphi$ is univalent, $\sigma\circ\varphi=\varphi\circ\sigma$. Hence \cite[p. 139]{kmm} implies that $\varphi$ is a parabolic non-automorphism.\end{proof} We can now see that \W\ is quasinormal if and only if it is normal in this case. \begin{theorem}\label{qnn} If $\ph \in \mbox{LFT}(\mathbb{D})$ is not an automorphism of \D\ and $\psi \in \Hi$ is continuous at the Denjoy-Wolff point of \ph, then \W\ on $H^2$ or $A^2_{\alpha}$ is normal if and only if it is quasinormal.
\end{theorem} \begin{proof} Let $W_{\psi,\varphi}$ be quasinormal. First, assume as well that \ph\ has Denjoy-Wolff point $\zeta$ on $\partial \D$ and $\psi(\zeta) \neq 0$. By Proposition \ref{parabolic}, \ph\ must be a parabolic non-automorphism. Such maps converge uniformly under iteration to the Denjoy-Wolff point by Proposition \ref{uci}. By \cite[Theorem 8]{derek}, $\sigma(W_{\psi,\varphi})\subseteq \sigma(\psi(\zeta)C_{\ph})$ (\cite{derek} was written as if the setting was $H^2$, but the results hold for $A^2_\alpha$ as well, with identical arguments). In Proposition \ref{spectrum}, we showed that such a composition operator $C_\varphi$ has spectrum equal to a logarithmic spiral from $1$ to $0$. Therefore, the spectrum of $W_{\psi,\varphi}$ is contained in a logarithmic spiral from $\psi(\zeta)$ to $0$, a set of zero area. However, hyponormal (and therefore also quasinormal) operators whose spectra have zero area are normal \cite[Theorem 4]{stam}, so $W_{\psi,\varphi}$ is normal. If instead $\psi(\zeta) = 0$, then by \cite[Theorem 8]{derek}, the operator has spectrum equal to the singleton $\{ 0 \}$; since hyponormal operators have norm equal to their spectral radius, this forces $\W$ to be the zero operator, which is trivially normal. Lastly, suppose $\zeta$, the Denjoy-Wolff point of \ph, is inside \D. By Proposition \ref{parabolic}, since \ph\ is not parabolic, \ph\ must not map any element of $\partial \D$ to $\partial \D$. Then, since $\overline{\varphi(\mathbb{D})} \subseteq \mathbb{D}$, $C_{\ph}$ is compact (see \cite[p. 129]{cm1}), and thus so is \W. Since a compact hyponormal operator is normal, \W\ is again forced to be normal. The other direction is trivial. \end{proof} In \cite{Bourdon}, these normal weighted composition operators have already been completely characterized for $H^2$ when the Denjoy-Wolff point of \ph\ is in $\D$. However, those authors do not completely identify the weights $\psi$ that make \W\ normal when \ph\ has its Denjoy-Wolff point on $\partial \D$, but only give known examples. Next, we turn our attention to weighted composition operators with automorphic compositional symbol. \subsection{$\ph \in \mbox{Aut}(\mathbb{D})$.} In \cite{g}, Gunatillake showed that \W\ is invertible on $H^{2}$ if and only if $\psi$ is both bounded and bounded away from zero on $\mathbb{D}$ and $\varphi$ is an automorphism of $\mathbb{D}$. After that, in \cite{Bourdon2}, Bourdon showed the same result for invertible weighted composition operators on the weighted Bergman spaces. In both settings, Hyv\"arinen et al. \cite[Corollary 5.1]{HLNS} determined the spectra of all such operators. In \cite{mahsapre1}, the first two authors found all normal weighted composition operators \W\ when $\varphi \in \mbox{Aut}(\mathbb{D})$ and $\psi$ is analytic on $\overline{\mathbb{D}}$. In the following proposition, we add the assumption that $\psi$ is bounded away from zero on $\mathbb{D}$, and by an idea similar to the one used in \cite{mahsapre1}, we characterize all invertible quasinormal weighted composition operators. \begin{proposition}\label{auto} Suppose that \ph, not the identity and not an elliptic automorphism of $\mathbb{D}$, is in $\mbox{Aut}(\mathbb{D})$. Let $\psi \in \Hi$ be continuous on $\partial \D$, and suppose that $\psi(z)\neq 0$ for each $z \in \overline{\mathbb{D}}$. Then \W\ is normal on $H^{2}$ or $A_{\alpha}^{2}$ if and only if $\psi(z)=\psi(0)K_{\sigma(0)}(z)$, where $\sigma$ is the Krein adjoint of $\varphi$. \end{proposition} \begin{proof} Let \W\ be normal. Assume that $\varphi(a)=0$, where $a \in \mathbb{D}$. Then \W\ is essentially normal. Hence \cite[Lemma 2]{Bourdon}, \cite[p.
603]{l} and \cite[Corollary 3.5]{fash} imply that $\psi(z)=\psi(0)/(1-\overline{a}z)^{\gamma}$, where $\gamma=1$ for $H^{2}$ and $\gamma=\alpha+2$ for $A_{\alpha}^{2}$. \par Conversely, set $$w=\frac{(1-|a|^{2})^{\gamma/2}}{\psi(0)}\psi,$$ where $\gamma=1$ for $H^{2}$ and $\gamma=\alpha+2$ for $A_{\alpha}^{2}$. By \cite[Theorem 6]{Bourdon} and \cite[Corollary 3.6]{l}, we see that $W_{w,\varphi}$ is unitary. Therefore, \W\ is normal.\end{proof} Now, we characterize the invertible quasinormal weighted composition operators on $H^2$ and $A^2_{\alpha}$. \begin{theorem}\label{invertible} Suppose that \W\ is an invertible quasinormal weighted composition operator on $H^{2}$ or $A^{2}_{\alpha}$. Then \W\ is normal and $\varphi$ is an automorphism; moreover, \\ (a) If $\varphi$ is the identity, then $\psi$ is a constant function. \\ (b) If $\varphi$ is an elliptic automorphism of $\mathbb{D}$, then $\psi=\psi(p)K_{p}/(K_{p} \circ \varphi)$, where $p \in \mathbb{D}$ is the fixed point of $\varphi$.\\ (c) If $\varphi$ is either a hyperbolic automorphism or a parabolic automorphism and $\psi \in A(\mathbb{D})$, then $\psi(z)=\psi(0)K_{\sigma(0)}(z)$, where $\sigma$ is the Krein adjoint of $\varphi$.\end{theorem} \begin{proof} By \cite[Theorem 3.4]{Bourdon2}, $\varphi$ is an automorphism. It is elementary to see that every invertible quasinormal operator on a Hilbert space is normal, so \W\ is normal. If $\varphi$ is the identity, then every point of $\mathbb{D}$ is a fixed point, while if $\varphi$ is an elliptic automorphism, then it has exactly one fixed point in $\mathbb{D}$. Hence \cite[Theorem 10]{Bourdon} and \cite[Theorem 4.3]{l} imply (a) and (b). Now assume that $\varphi$ is either a hyperbolic automorphism or a parabolic automorphism and $\psi \in A(\mathbb{D})$. By \cite[Theorem 3.4]{Bourdon2}, $\psi$ is bounded away from $0$ on $\mathbb{D}$. Therefore, the result follows from Proposition \ref{auto}.\end{proof} If $\psi$ is not bounded away from zero, then \W\ is not invertible. The spectra of such operators were studied in \cite{gaozhou}. Our conjecture is that no (non-normal) quasinormal weighted composition operators exist in that case either. \subsection{Unweighted composition operators.} Here, we show that our work above implies that if $\ph \in \mbox{LFT}(\D)$, then $C_{\ph}$ is quasinormal on $H^{2}$ or $A^{2}_{\alpha}$ if and only if it is normal and $\ph(z) = \lambda z, |\lambda| \leq 1$. In \cite{jkk}, this was proven already for $H^2$. We use a different proof technique that allows us to extend the result to $A^2_{\alpha}$. \\ In \cite{z}, Zorboska showed that if $C_{\varphi}$ is hyponormal on \Ht\ or $A^2_{\alpha}$, then $\varphi(0)=0$. Since every quasinormal operator is hyponormal, we record that result again here for our use. \begin{proposition}[Zorboska] \label{zero} Let $C_{\varphi}$ be quasinormal on $H^{2}$ or $A^{2}_{\alpha}$. Then $\varphi(0)=0$.\end{proposition} \begin{theorem} Let $\varphi \in \mbox{LFT}(\mathbb{D})$. Then $C_{\varphi}$ is quasinormal on $H^{2}$ or $A^{2}_{\alpha}$ if and only if $\ph(z)=\lambda z$, where $|\lambda| \leq 1$.\end{theorem} \begin{proof} Suppose \C\ is quasinormal. If \ph\ is not an automorphism, then by Theorem \ref{qnn}, it is normal. Now assume that \ph\ is an automorphism. By Proposition \ref{zero}, $\ph(0)=0$, so we can easily see that $\varphi(z)=\lambda z$, where $|\lambda| = 1$, which means that \C\ is normal.
So, in all cases, \C\ is normal if it is quasinormal, and the only normal composition operators $C_{\varphi}$ on \Ht\ and $A^2_{\alpha}$ are given by $\ph(z)=\lambda z$, where $|\lambda| \leq 1$. The other direction is trivial. \end{proof} \section{Hyponormal Weighted Composition Operators} In this section, we focus on $\ph \in \mbox{LFT}(\D)\setminus\mbox{Aut}(\D)$. In most scenarios we reach the same conclusion: if \W\ is hyponormal under this assumption, then it is normal. However, in one case, we give new examples of hyponormal weighted composition operators which are not quasinormal. We will split this section up based upon the location of the Denjoy-Wolff point $\zeta$ of \ph\ as well as the size of the derivative there. \subsection{$|\zeta|=1, \ph'(\zeta) < 1$.} In this case, \ph\ is of hyperbolic type. Like the parabolic non-automorphisms, these symbols converge uniformly under iteration to the Denjoy-Wolff point (see \cite[Theorem 4]{derek}). This case has already been covered in \cite[Theorem 22]{derek} as well, but we record it again here with a simpler proof. \begin{theorem} Suppose \ph\ is a hyperbolic non-automorphism with Denjoy-Wolff point on $\partial \D$. There is no nonzero $\psi \in \Hi$ continuous at the Denjoy-Wolff point $\zeta$ of \ph\ so that \W\ is hyponormal on \Ht\ or $A^2_{\alpha}$. \end{theorem} \begin{proof} In \cite[Corollary 11]{derek} and \cite[Corollary 16]{derek}, it is shown that $\sigma(\W) = \sigma(\psi(\zeta) C_{\ph})$ and the same is true for the point spectrum; $\sigma_{p}(\W) = \sigma_{p}(\psi(\zeta) C_{\ph})$ as well (again, the work of that paper extends to $A^2_{\alpha}$ though it was written as if $H^2$ was the only possible setting). First, suppose $\psi(\zeta) \neq 0$. The composition operator $C_{\ph}$ has an uncountable point spectrum \cite[Lemma 7.24]{cm1} on $H^2$, and this is also true on $A^2_{\alpha}$ since any eigenvector for \C\ on \Ht\ will also be in $A^2_{\alpha}$. Therefore \W\ has an uncountable point spectrum on each of these spaces as well. Since hyponormal operators require eigenvectors corresponding to different eigenvalues to be orthogonal \cite[Proposition 4.4]{c4} and these spaces have countable orthonormal bases, we have reached a contradiction. If instead $\psi(\zeta) = 0$, then by \cite[Theorem 8]{derek}, we have $\sigma(\W) = \{0\}$; since any hyponormal operator must have norm equal to its spectral radius, this forces $\W = 0$ and hence $\psi \equiv 0$, a contradiction. \end{proof} \subsection{$|\zeta|=1, \ph'(\zeta) = 1$.} In this case, \ph\ is of parabolic type. Our earlier work shows that \W\ cannot be strictly hyponormal. \begin{theorem}\label{parahypo} Suppose \ph\ is a parabolic non-automorphism and $\psi \in \Hi$ is continuous at the Denjoy-Wolff point of $\ph$. Then \W\ is hyponormal on $H^2$ or $A^2_{\alpha}$ if and only if it is normal. \end{theorem} \begin{proof} The proof is contained entirely in the proof of Theorem \ref{qnn}. \end{proof} In \cite[Theorem 20]{derek}, the following theorem was proved on $H^2$; by the methods outlined there, the result holds on $A^2_{\alpha}$ as well. \begin{theorem} Let $\ph:\D\rightarrow\D$ be a parabolic non-automorphism with positive translation number $t$ and Denjoy-Wolff point $\zeta$ and let $\psi\in \Hi$ be continuous at $\zeta$. If \W\ is hyponormal on either \Ht\ or $A^2_{\alpha}$, then it is normal and $\psi$ is a multiple of $K_{\sigma(0)}$, where $\sigma$ is the Krein adjoint of $\ph$. Furthermore, if $\psi(\zeta)$ is real, then \W\ is self-adjoint.
\end{theorem} It is possible that when \ph\ is a parabolic non-automorphism which is not positive, there are other weights $\psi$ so that \W\ is normal. However, we suspect that $\psi$ must be a multiple of $K_{\sigma(0)}$ if \W\ is normal and \ph\ is any parabolic non-automorphism. \subsection{$|\zeta| < 1$, \ph\ has no fixed point on $\partial \D$.} Here, we again see that any hyponormal weighted composition operator must be normal. \begin{theorem} Suppose $\ph \in \mbox{LFT}(\D)$ is not an automorphism, with Denjoy-Wolff point $\zeta \in \D$ and no other fixed points in $\overline{\D}$. Suppose also that $\psi \in \Hi$. Then \W\ on $H^2$ or $A^2_{\alpha}$ is hyponormal if and only if it is normal. \end{theorem} \begin{proof} Since \ph\ is linear fractional and does not have a fixed point on the boundary, $\overline{\ph_{n}(\D)} \subseteq \D$ for some positive integer $n$. Therefore, \C\ is power-compact on $H^2$ as well as on $A^2_{\alpha}$, and thus so is $\W$. Then $\sigma(\W)$ is countable, hence has zero area, and so the hyponormal operator $\W$ is normal \cite[Theorem 4]{stam}. The other direction is trivial. \end{proof} These operators are already classified exactly in \cite{coko} for $H^2$. \subsection{$|\zeta| < 1$, \ph\ has a fixed point on $\partial \D$.} A representative example from this class is $\ph(z) = z/(2-z)$. In that case, $C_{\ph}$ is known to be subnormal on $H^2$ and therefore hyponormal. Expanding on the techniques of \cite{sadraoui}, we give examples of new weighted composition operators which are hyponormal but not quasinormal, and whose weights are not necessarily linear fractional. We begin with a lemma due to \cite{douglas}. \begin{lemma} \cite[Theorem 1]{douglas} \label{operatorc} A bounded operator $A$ on a Hilbert space $H$ is hyponormal if and only if there exists a bounded operator $C$ on $H$ such that $||C|| \leq 1$ and $A^{*} = CA$. \end{lemma} Sadraoui \cite{sadraoui} used Lemma \ref{operatorc} to good effect; the following is a special case of what he proved in Section 2.5 of that work. \begin{example}\label{sadraouiexample} For $0<s<1$, let \begin{align*} \ph &= sz/(1-(1-s)z), \\ \psi &= 1/(1-(1-s)z), \\ \sigma &= sz + 1-s, \\ \tau &= (sz+1-s)/(sz(1-s)+1-s+s^2), \textrm{ and } \\ \eta &= s/(sz(1-s)+1-s+s^2). \end{align*} Then by Proposition \ref{cowen}, $(T_{\psi} C_{\ph})^{*} = C_\sigma$. Additionally, as proved in \cite[Corollary 2.5.2]{sadraoui}, $C_\sigma = T_\eta C_{\tau} T_\psi C_{\ph}$ and $\| T_\eta C_{\tau} \| = 1$. Therefore, $T_\psi C_{\ph}$ is hyponormal by Lemma \ref{operatorc}.\end{example} We expand on Sadraoui's example to construct other weights $f$ so that $T_f T_\psi C_{\ph}$ is hyponormal on \Ht. \begin{theorem}\label{sadraouidisk} Suppose that $\ph, \psi, \sigma, \eta, \tau$ are as in Example \ref{sadraouiexample}. Let $f$ be such that $f, 1/f \in H^{\infty}$. Suppose further that there exists $g \in \Hi$ such that $g \circ \sigma = f$ and $|g(z)| \leq |f(z)|$ for all $z \in \D$. Then $W_{f\psi, \ph}$ is hyponormal on $H^2$, but not quasinormal. \end{theorem} \begin{proof} Note that the adjoint of $W_{f\psi, \ph} = T_f T_\psi C_{\ph}$ is $C_\sigma T_f^*$. Let $C = T_\eta C_\tau T_{g}^{*} T_{1/f}$.
Since, by Example \ref{sadraouiexample}, $C_{\sigma}^{\ast}=T_{\psi}C_{\varphi}$ and $C_\sigma=T_{\eta}C_{\tau}C_{\sigma}^{\ast}$, we have \begin{align*} C T_f T_\psi C_{\varphi} &= T_\eta C_\tau T_{g}^{*} T_{1/f} T_f T_\psi C_{\ph} \\ &= T_\eta C_\tau T_{g}^{*} T_\psi C_{\ph} \\ &= T_\eta C_\tau T_{g}^{*} C_\sigma^* \\ &= T_\eta C_\tau (C_\sigma T_{g})^{*} \\ &= T_\eta C_\tau (T_{g \circ \sigma} C_\sigma)^{*} \\ &= T_\eta C_\tau (T_{f} C_\sigma)^{*} \\ &= T_\eta C_\tau C_\sigma^{*} T_f^* \\ &= C_\sigma T_f^*. \end{align*} It remains to show that $||C|| \leq 1$. Let $x \in H^{2}$ have $\|x\| \leq 1$. Note that all analytic Toeplitz operators are hyponormal on $H^2$, so $||T_{g}^{*} T_{1/f} x|| \leq ||T_{g} T_{1/f} x|| = ||T_{\frac{g}{f}}x||$. Since $|g(z)| \leq |f(z)|$ for all $z \in \D$, we have $$\left|\frac{g(z)}{f(z)}\right| \leq \left|\frac{f(z)}{f(z)}\right| = 1$$ on $\mathbb{D}$. This means that $||\frac{g}{f}||_{\infty} \leq 1$ and $||\frac{g}{f} x|| \leq ||\frac{g}{f}||_{\infty} ||x|| \leq ||x||$, so finally we have $||T_{g}^{*} T_{1/f} x|| \leq 1$. Since it has been shown that $T_\eta C_\tau$ has norm $1$, we see from the calculations here that $||Cx|| \leq 1$ for any $x$ with $||x|| \leq 1$, so $||C|| \leq 1$ and $W_{f\psi, \ph}$ is hyponormal on $H^2$. By Proposition \ref{parabolic}, it is not quasinormal. \end{proof} \begin{example} Let $\ph, \psi, \sigma, \eta, \tau$ be as in Example \ref{sadraouiexample} with $s = 1/2$. Then $f(z) = 2+z$ and $f(z) = e^z$ are examples of weights so that $W_{f\psi, \ph}$ is hyponormal on $H^2$ (take $g = f\circ \sigma^{-1}$ in both cases). The operator \W\ with $\psi = 2e^z/(2-z)$ and $\ph = z/(2-z)$ is an example of a hyponormal weighted composition operator whose weight is not rational (of course, many more can be constructed from this theorem). \end{example} \section{Further Questions} Below are some questions that could extend our work: \begin{enumerate} \item Are there quasinormal weighted composition operators on $H^2$ or $A^2_{\alpha}$ where \ph\ is an automorphism and $\psi$ is \textit{not} bounded away from zero, so that \W\ is not invertible? \item When \ph\ is a parabolic non-automorphism, if \W\ is quasinormal, then it is normal. Some of these normal operators were already described for $H^2$ in \cite{Bourdon}, but it was assumed that $\psi$ had a very particular form. Are there other weights so that \W\ is normal? Our conjecture is no, since it is already impossible when \ph\ is a positive parabolic non-automorphism. \item If the Denjoy-Wolff point of \ph\ is in \D, then \ph\ must be linear fractional if \W\ is normal or even cohyponormal on $H^2$ \cite{coko}. Is this also true when \W\ is quasinormal? Furthermore, it has not been proven that if \W\ is normal on $H^2$, then \ph\ must be linear fractional (particularly when the Denjoy-Wolff point is on the boundary of $\D$). \item We have shown several different weights in Section 3 that make \W\ hyponormal. Is it possible to identify all weights $\psi \in \Hi$ so that \W\ is hyponormal, even with the assumption that \ph\ is linear fractional? \end{enumerate} \footnotesize \bibliographystyle{amsplain}
\section{\label{sec:one}INTRODUCTION} Warm dense matter (WDM) is an intermediate state bridging condensed matter and the ideal plasma \cite{Fortov2011}. The transport properties of WDM, such as diffusion, viscosity, thermal conduction, and temperature relaxation \cite{Stanton2016,White2017,Heinonen2020,Collins2016,Barry2011}, play important roles in astrophysics and inertial confinement fusion (ICF) \cite{Nettelmann2008,Xu2011,DDKang2020,Rinderknecht2014,Huang2018}. For WDM, the ionic coupling parameter $\Gamma=Z_i^2e^2/\left(r_ik_BT\right)$ is larger than 1, and the electron degeneracy parameter $\Theta=T/T_F$ is less than 1. Studying this state therefore requires treating both the strong coupling between ions and the partial ionization and partial degeneracy of the electrons. No mature theory is yet good enough to describe the properties of WDM, so numerical simulation methods are at present the more popular schemes, such as molecular dynamics (MD) \cite{Glosli2008,Haxhimali2014,QMa2014} and density functional theory (DFT) \cite{JYDai2012,Kress2010,Horner2009}. Most of these models are based on the Born-Oppenheimer (BO) approximation. The BO approximation, which decouples the ions from the fast electrons forming an instantaneously adjusting potential energy surface (PES), has achieved great success for complex many-body systems. However, it may run into difficulty in WDM, where the excitation and ionization of electrons must be considered. Drastic dynamic electron-ion collisions strongly disturb the PES, and the non-adiabatic effect can significantly influence both equilibrium and non-equilibrium processes \cite{Andrew2012,Strickler2016,BBLu2019,QYZeng2020}. With the improvement of diagnostic methods, especially the application of X-ray Thomson scattering techniques \cite{Glenzer2009}, electronic information on WDM can be obtained in the laboratory. To interpret the experimental data, a more precise theory beyond the BO approximation is required on account of the complex environment of WDM. The non-adiabatic effect has been incorporated in several methods in order to obtain more accurate interactions between electrons and ions in WDM. Derived from the time-dependent Kohn-Sham equations, time-dependent density functional theory (TDDFT) \cite{Campetella2017} gives relatively accurate electronic structure information. Thanks to the coupling of the electrons and ions, the TDDFT-Ehrenfest approach can describe energy dissipation processes, excitation energies, optical properties, etc. \cite{Graziani2014,Baczewski2016}. However, TDDFT is extremely time-consuming and is limited to small time and size scales. Thus, low-frequency modes cannot be described well, and convergence with system size must be verified carefully. Quantum Langevin molecular dynamics (QLMD) offers a more efficient first-principles scheme by treating dynamic electron-ion collisions as a friction force in the Langevin equation of motion of the ions \cite{JYDai2010}. Using the QLMD method, a stronger ionic diffusive mode at low frequency has been found as the selected friction parameter becomes larger, together with a decrease of the sound speed \cite{Mabey2017}. Nevertheless, the friction parameter must be determined \emph{a priori}. Recently, Simoni \emph{et al.} have provided ab initio calculations of the friction tensor in liquid metals and warm dense plasma \cite{Simoni2020}.
They obtain a non-diagonal friction tensor, reflecting the anisotropy of instantaneous dynamic electron-ion collisions. The electron force field (EFF) method expresses electrons as Gaussian wave packets, so that the non-adiabatic effect is included intrinsically in molecular dynamics simulations \cite{Su2007,Jaramillo2011}. Lately the method has been applied to warm dense aluminum, with similar conclusions: the non-adiabatic effect enhances the ionic modes around $\omega$ = 0, while the sound speed is not sensitive to it \cite{Davis2020}. Q. Ma \emph{et al.} have developed the EFF methodology to study warm and hot dense hydrogen \cite{QMa2019,QMa2018}. They conclude that dynamic electron-ion collisions reduce the electrical conductivities and increase the electron-ion temperature relaxation times compared with adiabatic and classical theoretical frameworks. As another approach, the Bohmian trajectory formalism has been applied by Larder \emph{et al.} recently \cite{Larder2019}. By constructing a thermally averaged, linearized Bohm potential, fast dynamical computation of the coupled electronic-ionic system is achieved \cite{Larder2019}. The results also reveal a dynamic structure factor (DSF) and a dispersion relation different from those of DFT-MD simulations. All of these studies indicate that dynamic electron-ion collisions significantly affect the dynamic properties of WDM, for both electrons and ions. Nevertheless, the influence of the non-adiabatic effect on ionic transport properties such as the diffusion coefficient \cite{Hansen1975} has rarely been studied, in either numerical simulations or analytical models. We can expect that dynamic electron-ion collisions will induce new effects such as dissipation or friction. In particular, going beyond the analytical models based on traditional BO methods, we should study the effect of non-adiabatic dynamic collisions on self-diffusion in warm dense matter and propose a new model including collision-induced friction (CIF). The paper is organized as follows. Firstly, details of the theoretical methods and the computation of the diffusion coefficient are introduced in Section~\ref{sec:two}. Then, in Section~\ref{sec:three}, the static and transport results of QMD, OFMD, QLMD, and (C)EFF simulations are shown and the effect of dynamic collisions is discussed. In Section~\ref{sec:four}, we systematically study the effect of the collision frequency on ionic diffusion, and the CIF model is introduced to estimate the impact of electron-ion collisions. In Section~\ref{sec:five}, the results are compared with the YOCP and EOCP models. Finally, the conclusions are given in Section~\ref{sec:six}. Atomic units are used throughout unless otherwise specified. \section{\label{sec:two}THEORETICAL METHODS AND COMPUTATIONAL DETAILS} \subsection{(Constrained) electron force field methodology} The EFF method originates from wave packet molecular dynamics (WPMD) \cite{Heller1975} and the floating spherical Gaussian orbital (FSGO) method \cite{Frost1967}. By representing each electronic wave function as a Gaussian wave packet, the excitation of electrons can be included through the evolution of the packet positions and radii. The N-electron wave function is taken as a Hartree product of single-electron Gaussian packets written as \begin{eqnarray} \Psi\left({\bf r}\right)&=&\left(\frac{2}{s^{2}\pi}\right)^{3/4}\exp\left(-\left(\frac{1}{s^2}-\frac{2p_{s}i}{s\hbar}\right)\left({\bf r}-{\bf x}\right)^{2}\right)\nonumber \\ & &\cdot\exp\left(\frac{i}{\hbar}{\bf p_x}\cdot{\bf r}\right).
\end{eqnarray} where $s$ and ${\bf x}$ are the radius and average position of the electron wave packet, respectively, and $p_s$ and ${\bf p_x}$ are the corresponding conjugate radial and translational momenta. Nuclei in EFF are treated as classical charged particles moving in the mean field formed by the electrons and the other ions. Substituting the simplified electronic wave function into the time-dependent Schr\"odinger equation with a harmonic approximation of the potential, the equations of motion for the wave packet can be derived: \begin{subequations} \begin{equation} {\bf \dot{x}}={\bf p_x}/m_e, \end{equation} \begin{equation} {\bf \dot{p_x}}=-\nabla_{x}V, \end{equation} \begin{equation} \dot{s}=(4/d)p_{s}/m_e, \end{equation} \begin{equation} \dot{p_s}=-\partial{V}/\partial{s}. \end{equation} \end{subequations} where $d$ is the dimensionality of the wave packets ($d=3$ for a three-dimensional system and $d=2$ in 2D systems) and $V$ is the effective potential. Combined with the ionic equations of motion, EFF MD simulations have been implemented in the LAMMPS package \cite{Jaramillo2011}. In addition to the electrostatic interactions and the electron kinetic energy, a spin-dependent Pauli repulsion potential is added to the Hamiltonian to compensate for the missing antisymmetry of the electronic wave functions. In the EFF methodology, the exchange effect is dominated by the kinetic energy. All interaction potentials are expressed respectively as \begin{subequations} \begin{equation} E_{nuc-nuc}=\sum_{i<j} Z_{i}Z_{j}/R_{ij}, \end{equation} \begin{equation} E_{nuc-elec}=\sum_{i<j} -\left(Z_{i}/R_{ij}\right)\emph{erf}\left(\sqrt{2}R_{ij}/s_j\right), \end{equation} \begin{equation} E_{elec-elec}=\sum_{i<j} \left(1/r_{ij}\right)\emph{erf}\left(\sqrt{2}r_{ij}/\sqrt{s_{i}^{2}+s_{j}^{2}}\right), \end{equation} \begin{equation} E_{ke}=\sum_{i} \left(3/2\right)\left(1/s_{i}^2\right), \end{equation} \begin{equation} E_{Pauli}=\sum_{\sigma_{i}=\sigma_{j}} E\left(\uparrow\uparrow\right)_{ij}+\sum_{\sigma_{i}\neq\sigma_{j}} E\left(\uparrow\downarrow\right)_{ij}. \end{equation} \end{subequations} where $Z$ is the nuclear charge, $r_{ij}$ and $R_{ij}$ are the relative distances between two particles (electrons or nuclei, respectively), $\emph{erf}\left(x\right)$ is the error function, and $\sigma$ denotes the spin of an electron. The Pauli potential consists of repulsive terms between same-spin and opposite-spin electrons. More details can be found in Refs.~\onlinecite{Su2007,Jaramillo2011,Su2009,Patrick2012}. However, the EFF model also suffers from a limitation of WPMD, namely that the wave packets spread excessively at high temperature \cite{Grabowski2013}. To avoid excessive spreading of the wave packets, harmonic constraints are often added. Recently, the constrained EFF (CEFF) method has been proposed, which uses $L=\lambda_{D}+b_0$ as the boundary of the wave packets \cite{QMa2019} and yields much lower electron-ion energy exchange rates, in agreement with experimental data \cite{Celliers1992,White2014}. We use the EFF method to calculate the static and transport properties of hydrogen at $\rho=5\text{g/cm}^3$ and $\rho=10\text{g/cm}^3$ for temperatures from 50kK to 300kK. In the simulations, the real electron mass is used, so we choose a time step as small as $0.2\emph{as}$. 1000 ions and 1000 electrons are used in the simulation. After $10\emph{ps}$ of equilibration with fixed temperature, volume, and number of particles (NVT), a $5\emph{ps}$ microcanonical run with fixed energy, volume, and number of particles (NVE) is performed to compute the statistical averages.
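To make the wave-packet dynamics above concrete, the following is a minimal sketch (our own illustration, not the LAMMPS implementation) that integrates the equations of motion for a single electronic Gaussian packet bound to a fixed proton, keeping only the electron-nucleus attraction and the wave-packet kinetic energy from the interaction terms above; atomic units are used, and the gradients of the effective potential are evaluated by finite differences.
\begin{verbatim}
import numpy as np
from scipy.special import erf

d = 3  # dimensionality of the wave packet

def V(x, s):
    # effective potential for one packet and a fixed proton at the origin:
    # electron-nucleus attraction plus the packet kinetic energy (a.u.)
    R = np.linalg.norm(x)
    return -erf(np.sqrt(2.0) * R / s) / R + 1.5 / s**2

def gradients(x, s, h=1e-6):
    # finite-difference gradients of V with respect to x and s
    gx = np.array([(V(x + h * e, s) - V(x - h * e, s)) / (2 * h)
                   for e in np.eye(3)])
    gs = (V(x, s + h) - V(x, s - h)) / (2 * h)
    return gx, gs

x, px = np.array([1.0, 0.0, 0.0]), np.zeros(3)  # packet center and momentum
s, ps = 2.0, 0.0                                # packet radius and momentum
dt = 0.01                                       # time step (a.u.)

for step in range(1000):  # semi-implicit Euler update of the EOM above
    gx, gs = gradients(x, s)
    px -= dt * gx             # dp_x/dt = -grad_x V
    ps -= dt * gs             # dp_s/dt = -dV/ds
    x += dt * px              # dx/dt = p_x/m_e, with m_e = 1
    s += dt * (4.0 / d) * ps  # ds/dt = (4/d) p_s/m_e
\end{verbatim}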
At higher temperatures, CEFF is applied to avoid the spreading of the wave packets \cite{QMa2019}. \subsection{Quantum molecular dynamics and orbital-free molecular dynamics} For comparison, we also run adiabatic simulations, namely QMD and OFMD. The QMD simulations have been performed using the Quantum-Espresso (QE) open-source software \cite{Giannozzi2009}. In QMD simulations, electrons are treated quantum mechanically through finite-temperature DFT (FT-DFT), while the ions evolve classically along the PES determined by the electronic density, and the electron-ion interaction is described by a plane-wave pseudopotential. Each electronic wave function is solved from the Kohn-Sham equation \cite{Cowan2001} \begin{equation} \left(-\frac{1}{2}\nabla^{2}+V_\text{KS}[n_e\left({\bf r}\right)]\right)\varphi_i\left({\bf r}\right)=E_i\varphi_i\left({\bf r}\right). \end{equation} where $E_i$ is the eigenenergy, $-\frac{1}{2}\nabla^{2}$ is the kinetic energy contribution, and the Kohn-Sham potential $V_\text{KS}[n_e\left({\bf r}\right)]$ is given by \begin{equation} V_\text{KS}[n_e\left({\bf r}\right)]=\upsilon\left({\bf r}\right)+\int \frac{n_{e}\left({\bf r}'\right)}{|{\bf r}-\bf{r}'|}d{\bf r}'+V_{\text{xc}}[n_e\left({\bf r}\right)]. \end{equation} where $\upsilon\left({\bf r}\right)$ is the electron-ion interaction, the second term on the right-hand side is the Hartree contribution, and $V_{\text{xc}}[n_e\left({\bf r}\right)]$ is the exchange-correlation potential, represented in the simulations by the Perdew-Burke-Ernzerhof (PBE) functional \cite{Perdew1996} in the generalized-gradient approximation (GGA). The electronic density is built from the single-electron wave functions, \begin{equation} n_{e}\left({\bf r}\right)=2\sum_{i}\left|\varphi_i\left({\bf r}\right)\right|^2. \end{equation} In our simulations, only the $\Gamma$ point $\left(\boldsymbol{k}=0\right)$ is sampled in the Brillouin zone, and we used supercells containing 256 H atoms. The velocity Verlet algorithm \cite{Verlet1967} is used to update the positions and velocities of the ions. The time step is set from $0.05\emph{fs}$ to $0.1\emph{fs}$, depending on the temperature, to ensure convergence of the energy. The cutoff energy is tested and set from 100 Ry to 150 Ry, and the number of bands is sufficient for the occupation of the electrons. Each density and temperature point is run for at least 4000-10000 time steps in the canonical ensemble, and the ensemble information is collected after the system reaches equilibrium. At high temperatures, the requirement of too many bands limits the efficiency of the QMD method. OFMD is a good choice when dealing with high-temperature conditions \cite{ZQWang2020,Clerouin1997,Collins1995,Meyer2014}. Within the orbital-free framework, the electronic free energy is expressed as \begin{eqnarray} & & F_{e}{\bf [}{\bf R},n_e{\bf ]}= \frac{1}{\beta}\int d{\bf r}\Big{\{}{n_e\left({\bf r}\right)\Phi[n_e\left({\bf r}\right)]-\frac{2\sqrt{2}}{3\pi^2\beta^{3/2}}I_{\frac{3}{2}}\{{\Phi[n_e({\bf r})]}\}}\Big{\}} \nonumber \\ & & +\int d{\bf r}V_{\text{ext}}\left({\bf r}\right)n_e\left({\bf r}\right)+\frac{1}{2}\iint d{\bf r}d{\bf r}'\frac{n_e\left({\bf r}\right)n_e\left({\bf r}'\right)}{|{\bf r}-{\bf r}'|}+F_{\text{xc}}[n_e\left({\bf r}\right)]. \label{eq:freenergy} \end{eqnarray} where ${\bf R}$ denotes the ionic positions, $\beta=1/k_BT$ with $T$ the temperature and $k_B$ the Boltzmann constant, and $I_\nu$ is the Fermi integral of order $\nu$.
$V_{\text{ext}}\left({\bf r}\right)$ represents the external (electron-ion) interaction, and $F_{\text{xc}}[n_e\left({\bf r}\right)]$ is the exchange-correlation contribution. The electrostatic screening potential $\Phi[n_e({\bf r})]$ depends on the electronic density $n_e({\bf r})$ only, \begin{equation} \nabla^2\Phi[n_e\left({\bf r}\right)]=4{\pi}n_e\left({\bf r}\right)=\frac{4\sqrt{2}}{\pi^2\beta^{3/2}}I_{\frac{1}{2}}\{\Phi[n_e\left({\bf r}\right)]\}. \end{equation} The OFMD simulations are performed with our locally modified version of PROFESS \cite{MHChen2015}. The PBE functional \cite{Perdew1996} is also used for the exchange-correlation potential, and 256 H atoms are again used in the supercell. The kinetic energy cutoff is $7000eV$ at the density $\rho=5\text{g/cm}^3$ and $10000eV$ at $\rho=10\text{g/cm}^3$. The time step is set between $0.04\emph{fs}$ and $0.15\emph{fs}$, depending on the temperature. Finite-size effects have been tested in all MD simulations. \subsection{Quantum Langevin molecular dynamics} QMD and OFMD are good tools for describing the static properties of warm dense matter. However, the information on dynamical electron-ion collisions is lost because of the BO approximation, and such collisions are important for WDM, in which electrons are excited as the temperature and density increase. To describe the dynamic process, QMD has been extended by including the electron-ion collision-induced friction (EI-CIF) in a Langevin equation, leading to the QLMD model \cite{JYDai2010}. In QLMD, the ionic trajectories are propagated with the Langevin equation \cite{DDKang2018} \begin{equation} M_{I}\ddot{{\bf R}}_I={\bf F}-{\gamma}M_I\dot{{\bf R}}_I+{\bf N}_I. \end{equation} where $M_I$ and ${\bf R}_I$ are the mass and position of an ion, respectively, ${\bf F}$ is the force calculated from the DFT simulation, $\gamma$ is the friction coefficient, and ${\bf N}_I$ represents a Gaussian random noise. In QLMD, the force produced by the real dynamics of electron-ion collisions can be replaced by this friction because the time scale of the electronic motion is much shorter than that of the ions. The friction coefficient $\gamma$ is the key parameter and must be determined \emph{a priori}. Generally, at high temperatures, such as in the WDM and hot dense matter (HDM) regimes, the EI-CIF dominates the friction coefficient, which can be estimated from the Rayleigh model \cite{Plyukhin2008} \begin{equation} \gamma=2\pi\frac{m_e}{M_I}Z^*(\frac{4{\pi}n_i}{3})^{1/3}\sqrt{\frac{k_BT}{m_e}}. \label{eq:Rayliegh} \end{equation} where $m_e$ is the electronic mass, $Z^*$ is the average ionization degree, and $n_i$ is the ionic number density. In this paper, we use the average-atom (AA) model, in which the energy-level broadening effect is considered, to estimate the average ionization degree. The friction can also be assessed with the Skupsky model \cite{Skupsky1977,Stanton2018}; in this work we adopt the Rayleigh model only, considering that the hydrogen we study has high density and high temperature. To make sure that the particle velocities satisfy the Boltzmann distribution, the Gaussian random noise ${\bf N}_I$ should obey the fluctuation-dissipation theorem \cite{JYDai2009} \begin{equation} \left \langle {\bf N}_I \left(0\right){\bf N}_I\left(t\right) \right \rangle=6{\gamma}M_Ik_BTdt. \end{equation} where $dt$ is the time step in the MD simulation and the angle brackets denote the ensemble average.
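To show how the Langevin equation, the Rayleigh friction estimate, and the fluctuation-dissipation relation above fit together, the following is a minimal sketch in atomic units (our own toy example, not the production QLMD code; in a real QLMD run the force ${\bf F}$ comes from the DFT step, whereas here it is simply set to zero).
\begin{verbatim}
import numpy as np

def rayleigh_gamma(Zstar, n_i, T, M_I, m_e=1.0, kB=1.0):
    # Rayleigh-model estimate of the friction coefficient (atomic units)
    return (2.0 * np.pi * (m_e / M_I) * Zstar
            * (4.0 * np.pi * n_i / 3.0) ** (1.0 / 3.0)
            * np.sqrt(kB * T / m_e))

def langevin_step(R, V, F, M_I, gamma, T, dt, kB=1.0):
    # one explicit Euler step of M_I dV/dt = F - gamma*M_I*V + N; the
    # per-component noise variance 2*gamma*M_I*kB*T/dt is consistent with
    # the discrete fluctuation-dissipation relation above (3 components)
    N = np.sqrt(2.0 * gamma * M_I * kB * T / dt) \
        * np.random.normal(size=R.shape)
    V = V + dt * (F - gamma * M_I * V + N) / M_I
    R = R + dt * V
    return R, V

# toy usage: 8 protons (M_I = 1836 a.u.), zero external force
R, V, F = np.random.rand(8, 3), np.zeros((8, 3)), np.zeros((8, 3))
gamma = rayleigh_gamma(Zstar=1.0, n_i=0.01, T=0.5, M_I=1836.0)
for step in range(100):
    R, V = langevin_step(R, V, F, M_I=1836.0, gamma=gamma, T=0.5, dt=1.0)
\end{verbatim}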
\subsection{Self-diffusion coefficient} In MD simulations, the self-diffusion coefficient is often calculated from the velocity autocorrelation function (VACF) using the Green-Kubo formula \cite{Kubo1957} \begin{subequations} \begin{equation} D=\lim_{t \to \infty}D\left(t\right), \end{equation} \begin{equation} D\left(t\right)=\frac{1}{3}\int_{0}^{t}dt'\left \langle {\bf v}_i\left(t'\right)\cdot {\bf v}_i\left(0\right)\right \rangle. \end{equation} \end{subequations} in which ${\bf v}_i\left(t\right)$ is the center-of-mass velocity of the $i$th particle at time $t$, and the angle brackets represent the ensemble average. Generally, the integral is computed over MD trajectories long enough that the VACF has decayed to nearly zero and contributes little further to the integral. All particles of the same species are included in the average to accelerate the statistical convergence. In practice, a strictly converged result cannot be obtained because an infinite simulation is impossible. Thus, we usually fit the VACF with an exponential function $\left \langle {\bf v}\left(t\right)\cdot {\bf v}\left(0\right)\right \rangle=a\exp {\left(-t/\tau\right)}$ to obtain the self-diffusion coefficient $D=a\cdot \tau$, where $a$ and $\tau$ are fitting parameters determined by a least-squares fit and $\tau$ corresponds to the decay time. In moderate and strong coupling regimes, a more sophisticated fitting expression needs to be considered \cite{Meyer2014}. For the exponential-function fit, the statistical error can be estimated by \cite{Hess2002} \begin{equation} \epsilon=\sqrt{\frac{2\tau}{NT_{\emph{traj}}}} \end{equation} where $N$ is the number of particles and $T_{\emph{traj}}$ is the total time of the MD trajectory. \section{\label{sec:three}RESULTS AND DISCUSSION} \subsection{Static and transport properties} We first calculate the radial distribution function (RDF) $g\left(r\right)$ of H-H, as shown in Fig.~\ref{fig:rdf}. The RDFs from the OFMD calculations agree well with those from QMD. Moreover, the RDFs calculated from (C)EFF show microscopic characteristics similar to the QMD and OFMD results, especially when the temperature is relatively low and the electron-ion collisions are not so important. Since the RDFs from the different methods are so close to each other, these cases are well suited to exposing the intrinsically different physics of the static and transport properties. It should be noticed that the (C)EFF RDFs rise a little more gradually than the QMD and OFMD ones as the temperature increases. We deduce that the non-adiabatic effect plays little role in the static structures of the warm dense hydrogen shown here, similar to the effect of Langevin dynamics on static structures \cite{JYDai2009,Mabey2017}, for which the choice of the friction coefficient has little effect on the RDFs. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figure1a} \\ \includegraphics[width=0.4\textwidth]{figure1b} \caption{\label{fig:rdf}The RDFs of H-H at $5\text{g/cm}^3$ (a) and $10\text{g/cm}^3$ (b). The curves at different temperatures are offset vertically for clarity. Blue double-dotted lines represent the results from the (C)EFF simulations. Black solid and red dashed lines are the QMD and OFMD results, respectively.} \end{figure} However, non-adiabatic effects on the dynamic properties are significant \cite{Mabey2017,JYDai2010,Larder2019}. We calculated the self-diffusion coefficients for warm dense hydrogen by integrating the VACF; a minimal sketch of this procedure is given below.
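The following sketch (our own illustration; the variable names and the synthetic velocity trace are assumptions made for the example) implements the Green-Kubo procedure of Section~\ref{sec:two}: it averages the VACF over time origins and over all particles of the same species, fits the exponential form, and carries the $1/3$ prefactor of the Green-Kubo integral explicitly.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def vacf(v, nlags):
    # v: ionic velocities of shape (n_steps, N, 3); C(t) is averaged over
    # time origins and over all particles of the same species
    n = v.shape[0]
    return np.array([np.mean(np.sum(v[:n - lag] * v[lag:], axis=2))
                     for lag in range(nlags)])

def diffusion_from_vacf(v, dt, nlags):
    C = vacf(v, nlags)
    t = dt * np.arange(nlags)
    # least-squares exponential fit C(t) ~ a*exp(-t/tau), as in Sec. II
    (a, tau), _ = curve_fit(lambda t, a, tau: a * np.exp(-t / tau),
                            t, C, p0=(C[0], t[-1] / 10.0))
    return a * tau / 3.0  # the 1/3 prefactor of the Green-Kubo integral

# toy usage with a synthetic Ornstein-Uhlenbeck velocity trace
rng = np.random.default_rng(0)
dt, n_steps, N = 0.1, 4000, 64
v = np.zeros((n_steps, N, 3))
for i in range(1, n_steps):
    v[i] = 0.99 * v[i - 1] + 0.1 * rng.normal(size=(N, 3))
print(diffusion_from_vacf(v, dt, nlags=400))
\end{verbatim}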
To obtain a converged value, the simple exponential fit mentioned in Section~\ref{sec:two} is applied. The self-diffusion coefficients as functions of temperature at $5\text{g/cm}^3$ and $10\text{g/cm}^3$ are shown in Fig.~\ref{fig:diff_simu} for the different methods of (C)EFF, QMD, and OFMD. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figure2a} \\ \includegraphics[width=0.4\textwidth]{figure2b} \caption{\label{fig:diff_simu}The self-diffusion coefficients of H as a function of temperature at $5\text{g/cm}^3$ (a) and $10\text{g/cm}^3$ (b), calculated by the QMD, OFMD, QLMD, and (C)EFF methods. Black squares represent the QMD results, red circles the OFMD results, blue triangles the (C)EFF results, and purple diamonds the QLMD results \cite{JYDai2010}.} \end{figure} It is very interesting that the three methods give consistent results when the temperature is relatively low, and the OFMD and QMD results remain close to each other even as the temperature increases. However, the (C)EFF simulations show a distinct reduction of the self-diffusion coefficients compared with the QMD and OFMD results, and the difference becomes more obvious at higher temperature. We attribute this to the non-adiabatic electron-ion dynamic collisions, which are lost in frameworks based on the BO approximation such as QMD and OFMD. By regarding each electron as a Gaussian wave packet, the (C)EFF methodology implements coupled electron-ion dynamics simulations, in which the dynamic coupling and collisions are naturally included. As shown in Fig.~\ref{fig:diff_simu}, with increasing temperature more electrons are excited or ionized and become free electrons. These free electrons lead to continual and non-negligible electron-ion collisions, which supply drag forces on the motion of the ions and give rise to much lower diffusion coefficients. The collision rate increases with the temperature, further suppressing the ionic diffusion and significantly affecting the transport properties of WDM. The missing dynamic collisions can be reintroduced into the QMD model by considering the electron-ion collision-induced friction in the Langevin equation. Here, we use the Rayleigh model to estimate the friction coefficient $\gamma$, and the QLMD simulations have been performed accordingly. It is very encouraging that the QLMD results, shown in Fig.~\ref{fig:diff_simu}, agree well with the (C)EFF simulations. The greatest difference between the two models is 12\%, but it is mostly within 6\%. This suggests that the reduction of the ionic diffusion in the (C)EFF simulations does indeed come from electron-ion dynamic collisions. We believe the small residual difference comes from the choice of the friction coefficient $\gamma$. Since this prior parameter must be chosen artificially in QLMD simulations, we are encouraged to analyze the electron-ion collision effect quantitatively, using the (C)EFF results as a benchmark for the results of all adiabatic methods and analytical models. \section{\label{sec:four}ASSESSMENT OF THE ELECTRON-ION COLLISION EFFECT} As shown above, we need to clarify the mechanism by which the dynamic collisions act on the ionic transport.
We can find a clue from the Landau-Spitzer (LS) electron-ion relaxation rate $\left(\nu_{ei}\right)$ \cite{Landau1937,Spitzer1967} \begin{equation} \label{eq:electron-ioncollision} \nu_{ei}=\frac{8\sqrt{2\pi}n_iZ^2e^4}{3m_em_i}\left(\frac{k_BT_e}{m_e}+\frac{k_BT_i}{m_i}\right)^{-3/2}\ln{\Lambda} \end{equation} where $m_e\left(m_i\right)$, $n_e\left(n_i\right)$ and $T_e\left(T_i\right)$ are the mass, number density and temperature of the electrons (ions), respectively. The Coulomb logarithm $\ln{\Lambda}$ can be calculated with the GMS model \cite{Gericke2002}. From Eq.~\ref{eq:electron-ioncollision} it is clear that, with the increase of density and temperature (the Coulomb logarithm also varies with the density and temperature), the collision frequency becomes higher, and the diffusion coefficients are reduced more significantly. The results of the QMD and (C)EFF simulations shown in Fig.~\ref{fig:diff_simu} exhibit the same behavior. As shown in Eq.~\ref{eq:electron-ioncollision}, the electron-ion relaxation rate ($\nu_{ei}$) is a function of temperature and density. However, the thermodynamic state itself also changes with temperature and density, and it is therefore difficult to isolate the effect of the electron-ion collisions. For this purpose, we can change the effective mass of the electrons in the (C)EFF simulation without altering the intrinsic interactions in the Hamiltonian of the system \cite{Jaramillo2011,Theofanis2012}. Since the mass of the ions is much greater than that of the electrons, the $k_BT_e/m_e$ term dominates in Eq.~\ref{eq:electron-ioncollision}, so that $\nu_{ei}\propto m_e^{-1}\left(k_BT/m_e\right)^{-3/2}\propto m_e^{1/2}$, and we find a simple relationship between the electron-ion collision frequency $\nu_{ei}$ and the electron mass $m_e$ \begin{equation} \label{eq:nuvsme} \nu_{ei}=f\left(\rho,T\right)m_e^{1/2} \end{equation} When the dynamic electron mass is larger, the motion of the effective electrons is more classical, and the collisions between electrons and ions become stronger. In this way, we can study the influence of electron-ion collisions by adjusting the electronic mass in the (C)EFF simulations. The VACFs and self-diffusion coefficients of H at different dynamic electron masses are shown in Fig.~\ref{fig:masseffect}. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figure3a} \\ \includegraphics[width=0.4\textwidth]{figure3b} \caption{\label{fig:masseffect}(a) VACFs of H for different dynamic electron masses at 200kK, 100kK, and 50kK from top to bottom. The density is $10\text{g/cm}^3$. The black, red, and blue lines represent dynamic electron masses of 100a.u., 500a.u., and 1823a.u., respectively. Details are shown in the insets. (b) The corresponding ionic self-diffusion coefficients as a function of the dynamic electron mass. The squares, circles, and triangles represent the temperatures of 50kK, 100kK, and 200kK, respectively. The lines are the fitting results, and the fitting functions are listed below the lines.} \end{figure} From the VACF results we can see that the change of the dynamic electron mass does not alter the thermodynamic state of the ions, while the dynamic collisions reduce the correlations of the particles: the decay time decreases as the dynamic electron mass, and with it the electron-ion collision frequency, increases. The diffusion coefficients reflect similar trends; more interestingly, the ionic diffusion varies linearly with the logarithm of the electronic mass, as shown in Fig.~\ref{fig:masseffect}(b). This relation reflects the reduction of the diffusion due to electron-ion collisions, and the slope determines the magnitude of this influence.
In Fig.~\ref{fig:masseffect}(b), it is shown that the influence of electron-ion collisions becomes stronger with the increase of temperature, since the electrons are more classical at higher temperature. To quantitatively describe the relationship between the diffusion and the collision frequency, we performed a denser scan of the dynamic electron mass, as shown in Fig.~\ref{fig:massmodel}. Here, the QMD result is used as the value at the reference point corresponding to no dynamic electron-ion collisions, since $m_e$ cannot be zero. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figure4} \caption{\label{fig:massmodel}Dynamic electron mass effects on the ionic diffusion. We show the (C)EFF simulation results with different dynamic electron masses at $5\text{g/cm}^3$; the temperature is 5kK. The electron mass axis has been shifted to avoid the divergence of the log function at zero; the value at the zero point is replaced by the QMD result. The fitting result is represented by the red line.} \end{figure} As shown in Fig.~\ref{fig:massmodel}, the diffusion coefficient drops much more steeply when the dynamic electron mass is small, revealing a more significant effect of the electron-ion collisions there. A decaying function of the form $D=a\log{\left(1+bm_e^c\right)}+d$ describes this dependence of the diffusion on the dynamic electron mass $m_e$ well, and it goes over to the linear-in-$\log m_e$ form when $m_e$ is large. Substituting Eq.~\ref{eq:nuvsme} into the fitting function, we obtain the approximate relationship between the ionic diffusion coefficient $D$ and the electron-ion collision rate $\nu_{ei}$ \begin{equation} \label{eq:fittingfunction} D=f_1\left(\rho,T\right)\log{\left(1+f_2\left(\rho,T\right)\nu_{ei}^{f_3\left(\rho,T\right)}\right)}+f_4\left(\rho,T\right) \end{equation} where $f_1\left(\rho,T\right),f_2\left(\rho,T\right),f_3\left(\rho,T\right),f_4\left(\rho,T\right)$ are functions of the density $\rho$ and temperature $T$. If $\nu_{ei}$ is set to zero, the first term on the right-hand side of Eq.~\ref{eq:fittingfunction} vanishes, and $D=f_4\left(\rho,T\right)=D_0$. Here, the remaining term $D_0$ represents the diffusion without electron-ion collisions. We call the first term the collision-induced friction (CIF) part of the ionic diffusion, $D_{\text{CIF}}$. With this decomposition, the total diffusion coefficient can be obtained via \begin{equation} \begin{split} \label{eq:diffsplit} D&=f_1\left(\rho,T\right)\log{\left(1+f_2\left(\rho,T\right)\nu_{ei}^{f_3\left(\rho,T\right)}\right)}+D_0 \\ &=D_{\text{CIF}}+D_0 \end{split} \end{equation} For $D_0$, plenty of models have been developed, such as QMD and OFMD, which are based on the BO approximation. In this paper, the diffusion coefficient including the non-adiabatic effect has been calculated using the (C)EFF method. As the collision-frequency term is small, the relation can be simplified to $D_{\text{CIF}}=D-D_0=f\left(\rho,T\right)\nu_{ei}^{f'\left(\rho,T\right)}$, where $D$ and $D_0$ can be obtained from the (C)EFF and QMD simulations, respectively. We develop an empirical fitting function from the available data to assess the decrease of the ionic diffusion induced by the electron-ion collisions \begin{equation} D_{\text{CIF}}=\frac{\nu_{ei}^{0.25}}{a\rho/T^{3/2}+b\rho+c/T^{3/2}+d} \end{equation} where the fitting coefficients are $a=-8.942\times10^{-3}$, $b=1.585\times10^{-3}$, $c=6.849$, and $d=-4.195$. The QMD results corrected by the CIF model, which are shown in Fig.
\ref{fig:verify}, agree well with the (C)EFF simulations. To verify the accuracy of the fitting function, we calculate the self-diffusion coefficients of H and He at several other temperatures and densities. The results are listed in Table \ref{tab:verify}. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figure5a} \\ \includegraphics[width=0.4\textwidth]{figure5b} \caption{\label{fig:verify}Self-diffusion coefficients of H calculated by different methods at $5\text{g/cm}^3$ (a) and $10\text{g/cm}^3$ (b). The solid black squares and red triangles represent the QMD and (C)EFF results, respectively. The CIF model is used to correct the QMD results to account for the non-adiabatic effect; the corrected results are represented by the blue circles.} \end{figure} \begin{table*} \caption{\label{tab:verify}The self-diffusion coefficients calculated by the QMD, QLMD and (C)EFF models. The QMD results corrected by the CIF model are also listed in the table.} \begin{ruledtabular} \begin{tabular}{ccccccc} species & density$(\text{g/cm}^3)$ & temperature(K) & $D_{\text{QMD}}(\text{cm}^2/\text{s})$ & $D_{\text{QLMD}}(\text{cm}^2/\text{s})$ & $D_{\text{(C)EFF}}(\text{cm}^2/\text{s})$ & $D_{\text{QMD}+\text{CIF}}(\text{cm}^2/\text{s})$ \\ \hline H & 8 & 100000 & 0.0155 & 0.0133 & 0.0122 & 0.0123 \\ H & 8 & 200000 & 0.0386 & 0.029 & 0.0264 & 0.0284 \\ H & 15 & 200000 & 0.0237 & 0.0181 & 0.0182 & 0.0183 \\ H & 15 & 300000 & 0.0396 & 0.0289 & 0.03 & 0.0278 \\ He & 10 & 100000 & 0.00757 & 0.0066 & 0.00598 & 0.00596 \\ He & 10 & 200000 & 0.0181 & 0.016 & 0.0108 & 0.0128 \\ \end{tabular} \end{ruledtabular} \end{table*} As seen in Table \ref{tab:verify}, the ionic self-diffusion coefficients obtained from the QMD model can be modified by adding the CIF term as compensation for the electron-ion collisions. The results are in good agreement with the (C)EFF and QLMD results, showing that our CIF modification can be applied to warm dense matter. It should be emphasized that the CIF modification is independent of other models; therefore, any model based on the adiabatic framework can use it to offset the missing electron-ion collisions. \section{\label{sec:five}Comparison with analytical models} The expensive computational cost of first-principles simulations makes it difficult to apply them on the fly or to generate large amounts of data. In contrast, some analytical models based on numerical simulations have been proposed, supplying promising approaches to the establishment of databases. However, the accuracy of these models should be examined when they are applied to WDM \cite{ZGLi2016}. In this section, we use the QMD and our modified QMD results as benchmarks in an attempt to find an efficient and accurate model for acquiring transport parameters. We first compare our results with the Yukawa one-component plasma (YOCP) model, which is a development of the one-component plasma (OCP) model \cite{Daligault2006,Daligault2009}. In the YOCP model, the electron screening is included to modify the bare Coulomb interactions \cite{Hamaguchi1997,Murillo2000,Daligault2012}. The interaction between ions is replaced by the Yukawa potential \begin{equation} u\left(r\right)=q^2e^{-{\kappa}r}/r \end{equation} where $\kappa$ is the inverse screening length. All properties of the YOCP model depend on the inverse screening length $\kappa$ and the coupling parameter $\Gamma$. Daligault has applied the model over a wide range of $\kappa$ and over the entire fluid region \cite{Daligault2012b}.
In the gas-like weak-coupling regime, the reduced self-diffusion coefficient can be extended from the Chapman-Spitzer result as \cite{Daligault2012b} \begin{equation} D^*\left(\kappa,\Gamma\right)=\sqrt{\frac{\pi}{3}}\frac{1}{\alpha\left(\kappa\right)}\frac{1}{\Gamma^{5/2}\ln {\Lambda\left(\kappa,\Gamma\right)}} \end{equation} The generalized Coulomb logarithm $\ln {\Lambda\left(\kappa,\Gamma\right)}$ is expressed as \begin{equation} \ln {\Lambda\left(\kappa,\Gamma\right)}=\ln{\left(1+B\left(\kappa\right)\frac{\lambda_D}{b_c}\right)}=\ln{\left(1+\frac{B\left(\kappa\right)}{\sqrt{3}\Gamma^{3/2}}\right)} \end{equation} where $\lambda_D=\sqrt{k_BT/\left(4{\pi}q^2n\right)}$ is the Debye length and $b_c=Zq^2/k_BT$ is the classical distance of closest approach. $\alpha\left(\kappa\right)$ and $B\left(\kappa\right)$ are fitting parameters depending on $\kappa$ only, \begin{gather} \alpha\left(\kappa\right)=\sqrt{\frac{3}{\pi}}\frac{1}{a_0+a_1\kappa^{a_2}} \\ B\left(\kappa\right)=b_0+b_1\textrm{erf}\left(b_2\kappa^{b_3}\right) \end{gather} with $a_0=1.559773, a_1=1.10941, a_2=1.36909, b_0=2.20689, b_1=1.351594, b_2=1.57138$, and $b_3=3.34187$. Here, we use the Thomas-Fermi (TF) length \cite{Glenzer2009} to estimate the screening of the electrons, so that \begin{equation} \kappa=\frac{1}{\lambda_{TF}}=\frac{1}{\left(\pi/12Z\right)^{1/3}\sqrt{r_i}} \end{equation} where $Z$ is the ionic charge and $r_i$ is the Wigner-Seitz radius defined as $r_i=\left(3/\left(4{\pi}n_i\right)\right)^{1/3}$. The self-diffusion coefficient is obtained according to $D=D^*{\omega}a^2$, where $\omega=\left(4{\pi}n_iZ^{*2}e^2/m_i\right)^{1/2}$ is the ion plasma frequency, $a$ is the Wigner-Seitz radius, and $Z^*$ is the average ionization degree calculated by the AA model \cite{YHou2006}. The comparison between the results of the different models is shown in Fig.~\ref{fig:diff_models}, where the QMD results and their CIF-modified counterparts are also shown as benchmarks. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figure6a} \includegraphics[width=0.4\textwidth]{figure6b} \caption{\label{fig:diff_models}Comparison of QMD and modified QMD simulations with different analytical models for the self-diffusion coefficients of warm dense H at $5\text{g/cm}^3$ (a) and $10\text{g/cm}^3$ (b). The black solid squares and circles represent the results calculated by QMD and by QMD with the CIF correction, respectively. The red and blue solid lines are the results of the YOCP model \cite{Daligault2012b} and the EOCP model \cite{Clerouin2016}. The effective coupling parameters of the EOCP model are obtained from the RDFs of QMD.} \end{figure} As shown in Fig.~\ref{fig:diff_models}, for warm dense hydrogen at the density of $5\text{g/cm}^3$, the YOCP model reproduces the results of the QMD simulations excellently. However, at higher density the YOCP model overestimates the diffusion compared with the QMD results, because the TF length overestimates the screening. With increasing density, electronic charge is excluded from the region between the dense ions, so that the ionic repulsion becomes stronger at short range \cite{HYSun2017,Daligault2016} and the correlation of the ions becomes weaker, leading to lower diffusion, which is not considered in the TF model. To modify the YOCP model, we would have to adjust the screening length artificially \cite{Zerah1992,Clerouin2001}. There is another scheme for dealing with more strongly coupled ionic systems, in which the effective volumes of the particles are reduced so that the collision frequency is increased and the ionic transport is correspondingly damped or dissipative.
This effective-volume scheme has been shown to describe successfully the transport properties of strongly coupled plasmas in the range $1\leq\Gamma\leq30$ \cite{Daligault2016,Baalrud2015,Baalrud2013}. As another approach, the EOCP model gives a better description over the whole density and temperature range we studied with the QMD simulations, as shown in Fig.~\ref{fig:diff_models}. In the EOCP model, an effective coupling parameter $\Gamma_e$ and an effective ionization $Q_e$ are introduced as corrections to the OCP model so as to reproduce the static structures of the OFMD simulations \cite{Clerouin2013}; the model also works well for transport properties such as diffusion and viscosity \cite{Arnault2013,Clerouin2016}. In this paper, we set $\Gamma_e$ by the procedure developed by Ott \emph{et al.} \cite{Ott2014} as \begin{equation} \Gamma_e=1.238\exp{\left(1.575r^3_{1/2}\right)}-0.931,\quad\left(r_{1/2}<1.3\right) \end{equation} where $r_{1/2}$ is the distance at which the RDF satisfies $g\left(r_{1/2}\right)=0.5$, expressed in units of the Wigner-Seitz radius. The effective average charge $Q_e$ is defined as $Q_e=\sqrt{\Gamma_eak_BT}/e$. We use the RDFs from the QMD simulations as the input of the EOCP model; the results agree well with those extracted from long-time MD simulations, especially when the temperature is low. Compared to the YOCP model, the EOCP model gives a more reasonable description of ionic diffusion: it extracts its information directly from the static structure of the system, whereas the accuracy of the YOCP model depends on the choice of the particle interactions, which must be modeled \emph{a priori}. However, neither model agrees well with the modified QMD results. This can be attributed to the absence of the non-adiabatic effect in the two models: both the YOCP model and the EOCP model calculate self-diffusion coefficients on the basis of a static potential, in which dynamic electron-ion collisions cannot be taken into account. This reminds us to pay attention to instantaneous dynamic collision effects when performing MD simulations. For applications, we can use the CIF model to modify the EOCP model, which is a cheap way to obtain self-diffusion data that include the non-adiabatic effect. \section{\label{sec:six}CONCLUSION} We have performed QMD, OFMD, and (C)EFF simulations to determine the RDFs and the ionic self-diffusion coefficients of warm dense hydrogen at densities of $5\text{g/cm}^3$ and $10\text{g/cm}^3$ and temperatures from 50~kK to 300~kK. The results from the (C)EFF-MD method are carefully compared with the results from the QMD/OFMD methods based on the BO approximation. In the EFF method, the static properties are insensitive to electron-ion collisions; however, the diffusion of ions decreases significantly as the electron-ion collisions increase. The ionic diffusion coefficients calculated from (C)EFF agree well with the QLMD results, but differ substantially from the QMD and OFMD simulations, revealing the key role of electron-ion collisions in warm dense hydrogen. Most importantly, we have proposed a new analytical model which introduces the electron-ion collision-induced friction (CIF) effect, providing a formula to calculate self-diffusion coefficients without performing non-adiabatic simulations. The CIF model has been verified to be valid over a wide range of temperatures, densities and materials. However, since the CIF model is derived from fits to simulation results, whether it can be applied to more complex elements should be verified further. We have also shown the results from the analytical YOCP and EOCP models. 
Based on the static structural information, the EOCP model reproduces the QMD simulations better. However, neither of the two models considers the effect of dynamic electron-ion collisions. We propose using the CIF model to modify the EOCP results as a preferred scheme for calculating self-diffusion coefficients. \section{ACKNOWLEDGMENTS} The authors thank Dr. Zhiguo Li for his helpful discussions. This work was supported by the Science Challenge Project under Grant No. TZ2016001, the National Key R\&D Program of China under Grant No. 2017YFA0403200, the National Natural Science Foundation of China under Grant Nos. 11774429 and 11874424, and the NSAF under Grant No. U1830206. All calculations were carried out at the Research Center of Supercomputing Application at NUDT. \section{Data Availability} The data that support the findings of this study are available within the article.
\section{Introduction} In this paper we consider the complexity of problems related to the one-player combinatorial game Flood-It, introduced by Arthur, Clifford, Jalsenius, Montanaro and Sach in \cite{arthurFUN}. The original game is played on a board consisting of an $n \times n$ grid of coloured squares, each square given a colour from some fixed colour-set, but we can more generally regard the game as being played on a vertex-coloured graph. A move then consists of picking a vertex $v$ and a colour $d$, and giving all vertices in the same monochromatic component as $v$ colour $d$. The goal is to make the entire graph monochromatic with as few such moves as possible. When the game is played on a planar graph, it can be regarded as modelling repeated use of the flood-fill tool in Microsoft Paint. Implementations of the game, played on a square grid, are widely available online, and include a flash game \cite{flash} as well as popular smartphone apps \cite{iphoneapp,androidapp}. There also exist implementations using a hexagonal grid: Mad Virus \cite{madvirus} is the same one-player game described above, while the Honey Bee Game \cite{honeybee} is a two-player variant, and has been studied by Fleischer and Woeginger \cite{fleischer10}. All these implementations are based on the ``fixed'' version of the game, where all moves must be played at the same fixed vertex (usually the vertex corresponding to the top left square when the board is an $n \times n$ grid). For any coloured graph, we define the following problems. \begin{itemize} \item \textsc{Free-Flood-It} is the problem of determining the minimum number of moves required to flood the graph, if we are allowed to make moves anywhere in the graph. \item \textsc{Fixed-Flood-It} is the same problem when all moves must be played at a single specified vertex.\footnote{\textsc{Fixed-Flood-It} is often referred to as simply \textsc{Flood-It}, but we use the longer name to avoid confusion with the free version.} \item $c$-\textsc{Free-Flood-It} and $c$-\textsc{Fixed-Flood-It} respectively are the variants of \textsc{Free-Flood-It} and \textsc{Fixed-Flood-It} in which only colours from some fixed set of size $c$ are used. \end{itemize} Note that we can trivially flood an $n$-vertex graph with $n-1$ moves, and that if $c$ colours are present in the initial colouring we require at least $c-1$ moves. These problems are known to be computationally difficult in many situations. In \cite{arthurFUN}, Arthur, Clifford, Jalsenius, Montanaro and Sach proved that $c$-\textsc{Free-Flood-It} is NP-hard in the case of an $n \times n$ grid, for every $c \geq 3$, and that this result also holds for the fixed variant. Lagoutte, Noual and Thierry \cite{lagoutte,lagoutte11} showed that the same result holds when the game is played instead on a hexagonal grid, as in Mad Virus or a one-player version of the Honey Bee Game. Fleischer and Woeginger \cite{fleischer10} proved that $c$-\textsc{Fixed-Flood-It} remains NP-hard when restricted to trees, for every $c \geq 4$,\footnote{Note that this proof does in fact require four colours, not three as stated in a previous version of \cite{fleischer10}.} and Fukui, Nakanishi, Uehara, Uno and Uno \cite{fukui} demonstrated that this result can be extended to show the hardness of $c$-\textsc{Free-Flood-It} under the same conditions. A few positive results are known, however. 
2-\textsc{Free-Flood-It} is solvable in polynomial time on arbitrary graphs, a result shown independently by Clifford et al.~\cite{clifford}, Lagoutte \cite{lagoutte} and Meeks and Scott \cite{general}. It is also known that \textsc{Fixed-Flood-It} and \textsc{Free-Flood-It} are solvable in polynomial time on paths \cite{clifford,general,fukui} and cycles \cite{fukui}, and more generally on any graph with only a polynomial number of connected subgraphs \cite{spanningFUN,spanning}. Meeks and Scott also show that the number of moves required to create a monochromatic component containing an arbitrary, bounded-size subset of the vertices can be computed in polynomial time, even when the number of colours is unbounded \cite{spanning,spanningFUN}. A major focus of previous research has been the restriction of the game to rectangular boards of fixed height. Although an additive approximation for $c$-\textsc{Free-Flood-It} can be computed in polynomial time \cite{general}, solving either $c$-\textsc{Free-Flood-It} or $c$-\textsc{Fixed-Flood-It} exactly remains NP-hard on $3 \times n$ boards, whenever $c \geq 4$ \cite{general}. However, Clifford et al.~\cite{clifford} give a linear-time algorithm for \textsc{Fixed-Flood-It} on $2 \times n$ boards. They also raise the question of the complexity of the free variant in this setting. Here we address this remaining case of ($c$-)\textsc{Free-Flood-It} restricted to $2 \times n$ boards, which turn out to be a particularly interesting class of graphs on which to analyse the game. The majority of the paper describes an algorithm to demonstrate that $c$-\textsc{Free-Flood-It}, restricted to $2 \times n$ boards, is fixed parameter tractable with parameter $c$. To do this we exploit some general results from \cite{spanning} about the relationship between the number of moves required to flood a graph and its spanning trees. On the other hand, we also show that \textsc{Free-Flood-It} remains NP-hard in this setting. This is a somewhat surprising result, as it gives the first example of a class of graphs on which the complexities of \textsc{Fixed-Flood-It} and \textsc{Free-Flood-It} have been shown to be different. The rest of the paper is organised as follows. We begin with notation and definitions in Section \ref{notation}, before giving our algorithm for $c$-\textsc{Free-Flood-It} in Section \ref{fpt}. Finally, in Section \ref{NPhard}, we show that the problem remains NP-hard when the number of colours used is unbounded. \section{Notation and definitions} \label{notation} Although the original Flood-It game is played on a square grid, and our main results here concern the game restricted to a rectangular grid, it is convenient to consider the generalisation of the game to an arbitrary graph $G=(V,E)$, equipped with an initial colouring $\omega$ using colours from the \emph{colour-set} $C$. Then each move $m=(v,d)$ consists of choosing some vertex $v \in V$ and a colour $d \in C$, and assigning colour $d$ to all vertices in the same monochromatic component as $v$. The goal is to give every vertex in $G$ the same colour, using as few moves as possible. Given any connected graph $G$, equipped with a colouring $\omega$ (not necessarily proper), we define $m(G,\omega,d)$ to be the minimum number of moves required in the free variant to give all its vertices colour $d$, and $m(G,\omega)$ to be $\min_{d \in C}m(G,\omega,d)$. 
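As an illustration of these definitions, the following Python sketch computes $m(G,\omega,d)$ by brute-force breadth-first search over colourings. This is our own illustrative code rather than any algorithm from the literature, and it is feasible only for very small graphs, since the number of reachable colourings can grow exponentially.

\begin{verbatim}
from collections import deque

def flood(colouring, adj, v, d):
    # Play the move (v, d): give every vertex in the monochromatic
    # component of v the colour d.
    new = list(colouring)
    stack, seen, c = [v], {v}, colouring[v]
    while stack:
        u = stack.pop()
        new[u] = d
        for w in adj[u]:
            if w not in seen and colouring[w] == c:
                seen.add(w)
                stack.append(w)
    return tuple(new)

def m(colouring, adj, colours, d):
    # Minimum number of moves (free variant) to give every vertex
    # colour d, found by BFS over the space of colourings.
    start = tuple(colouring)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if all(c == d for c in s):
            return dist[s]
        for v in range(len(s)):
            for c in colours:
                if c != s[v]:
                    t = flood(s, adj, v, c)
                    if t not in dist:
                        dist[t] = dist[s] + 1
                        queue.append(t)
    return None

adj = {0: [1], 1: [0, 2], 2: [1]}    # a path on three vertices
print(m([0, 1, 0], adj, [0, 1], 0))  # -> 1: one move at the middle vertex
\end{verbatim}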
If $S$ is a sequence of moves played on a graph $G$ with initial colouring $\omega$, we denote by $S(\omega,G)$ the new colouring obtained by playing $S$ in $G$. Note that, if the initial colouring $\omega$ of $G$ is not proper, we may obtain an equivalent coloured graph $G'$ (with colouring $\omega'$) by contracting monochromatic components of $G$ with respect to $\omega$. Let $A$ be any subset of $V$. We denote by $\col(A,\omega)$ the set of colours assigned to vertices of $A$ by $\omega$. We say a move $m = (v,d)$ is \emph{played in} $A$ if $v \in A$, and that $A$ is \emph{linked} if it is contained in a single monochromatic component. Subsets $A,B \subseteq V$ are \emph{adjacent} if there exists $ab \in E$ with $a \in A$ and $b \in B$. When we consider the game played on a rectangular board $B$, we are effectively playing the game in a corresponding coloured graph $G$, obtained from the planar dual of $B$ (in which there is one vertex corresponding to each square of $B$, and vertices are adjacent if they correspond to squares which are either horizontally or vertically adjacent in $B$) by giving each vertex the colour of the corresponding square in $B$. We identify areas of $B$ with the corresponding subgraphs of $G$, and may refer to them interchangeably. We define a \emph{border} of $B$ to be a union of edges of squares on the original board $B$ that forms a path from the top edge of the board to the bottom (but not including any edges that form the top or bottom edge of the board). Thus, a border in $B$ corresponds to an edge-cut in the corresponding graph. Observe that a border is uniquely defined by the points at which it meets the top and bottom of the board, so there are $(n+1)^2$ borders in total. We denote by $b_L$ and $b_R$ the borders corresponding to the left-hand and right-hand edges of the board respectively. Given two borders $b_1$ and $b_2$, we write $b_1 \leq b_2$ if and only if $b_1$ meets both the top and bottom of the board to the left of (or at the same point as) $b_2$, and write $b_1 < b_2$ if $b_1 \leq b_2$ and $b_1 \neq b_2$. Note that if $b_1 \leq b_2$ then $b_1$ lies entirely to the left of $b_2$ (the two borders may meet but never cross); this is a special property of $2 \times n$ boards and does not hold for $k \times n$ boards for $k \geq 3$. If $G$ is the graph corresponding to the $2 \times n$ board $B$, we say that a vertex (or subgraph) is \emph{incident} with a border $b$ if the vertex (or some vertex in the subgraph) corresponds to a square on $B$ whose edge forms part of $b$. If $b_1 < b_2$ are borders, we denote the subgraph induced by vertices lying between $b_1$ and $b_2$ by $B[b_1,b_2]$, and we say $B[b_1,b_2]$ is a \emph{section} if it is connected. Finally, given any tree $T$, we denote by $\bare(T)$ the subtree obtained by deleting all leaves of $T$, and given any $x,y \in V(T)$ we set $P(T,x,y)$ to be the unique path from $x$ to $y$ in $T$. \section{$c$-FREE FLOOD IT on $2 \times n$ boards} \label{fpt} In this section, we give an algorithm to solve $c$-\textsc{Free-Flood-It} on $2 \times n$ boards. More specifically, we prove the following result, which shows that $c$-\textsc{Free-Flood-It}, restricted to $2 \times n$ boards, is fixed parameter tractable, parameterised by $c$. This answers an open question of Clifford, Jalsenius, Montanaro and Sach \cite{clifford}. \begin{thm} When restricted to $2 \times n$ boards, $c$-\textsc{Free-Flood-It} can be solved in time $O(n^{11} \cdot 2^{c})$. 
\label{2xn-fpt} \end{thm} We begin with some background and auxiliary results in Section \ref{background}, and then describe the algorithm in Section \ref{algorithm}. \subsection{Background and auxiliary results} \label{background} Before describing our algorithm in the next section, we need a number of results which will be used to prove its correctness. We begin with some previous results from \cite{spanning}. Meeks and Scott prove that it suffices to consider spanning trees in order to determine the minimum number of moves required to flood a graph. For any connected graph $G$, let $\mathcal{T}(G)$ denote the set of all spanning trees of $G$. \begin{thm} Let $G$ be a connected graph with colouring $\omega$ from colour-set $C$. Then, for any $d \in C$, $$m(G,\omega,d) = \min_{T \in \mathcal{T}(G)} m(T,\omega,d).$$ \label{spanning-tree} \end{thm} For any $d \in C$, we say that $T$ is a \emph{$d$-minimal} spanning tree for $G$ if $m(T,\omega,d) = m(G,\omega,d)$. In the remainder of this section, we prove that in the special case in which $G$ corresponds to a $2 \times n$ board, there is always a $d$-minimal spanning tree $T$ such that $\bare(T)$ is a path. In doing so, and in proving the correctness of our algorithm in the next section, we make use of a corollary of Theorem \ref{spanning-tree}, again proved in \cite{spanning}, which shows that the number of moves required to flood a graph is bounded above by the sum of the numbers of moves required to flood connected subgraphs which cover the vertex-set. \begin{cor} Let $G$ be a connected graph, with colouring $\omega$ from colour-set $C$, and let $A$ and $B$ be subsets of $V(G)$ such that $V(G) = A \cup B$ and $G[A], G[B]$ are connected. Then, for any $d \in C$, $$m(G,\omega,d) \leq m(A,\omega,d) + m(B,\omega,d).$$ \label{non-interference} \end{cor} A key step used to prove Theorem \ref{spanning-tree} in \cite{spanning} is to prove a special case of Corollary \ref{non-interference}, where the underlying graph $G$ is a tree and $A$ and $B$ are disjoint. We will need the following result, proved using an extension of part of this proof from \cite{spanning}. \begin{lma} Let $T$ be a tree, with colouring $\omega$ from colour-set $C$, let $A$ and $B$ be disjoint subsets of $V(T)$ such that $V(T) = A \cup B$ and $T[A], T[B]$ are connected, and let $x$ be the unique vertex of $B$ with a neighbour in $A$. Suppose that \begin{itemize} \item the sequence $S_A$ floods $T[A]$ with colour $d_A$, \item the sequence $S_B$ floods $T[B]$ with colour $d_B$, \item at least one move of $S_B$ changes the colour of $x$, and \item playing $S_A$ in $T$ changes the colour of $x$. \end{itemize} Then $$m(T,\omega,d_B) \leq |S_A| + |S_B|.$$ \label{compatible-ordering} \end{lma} \begin{proof} We proceed by induction on $|B|$. Note that we may assume without loss of generality that $\omega$ gives a proper colouring of $B$; otherwise we may contract monochromatic components. Suppose $|B| = 1$. Then $S_A$ must change the colour of the only vertex in $B$ (linking it to some $a \in A$), and so playing $S_A$ in $T$ makes the whole tree monochromatic with colour $d_A$. Thus $m(T,\omega,d_A) \leq |S_A|$, and $$m(T,\omega,d_B) \leq m(T,\omega,d_A) + 1 \leq |S_A| + 1 \leq |S_A| + |S_B|,$$ as required, since by assumption $|S_B| \geq 1$. Now suppose $|B| > 1$, so $B$ is not monochromatic initially, and assume that the result holds for smaller $B$. 
Set ${S_B}'$ to be the initial segment of $S_B$, up to and including the move that first makes $B$ monochromatic (in any colour $d'$), so any final moves that simply change the colour of $B$ are omitted. We may, of course, have ${S_B}' = S_B$ (and so $d' = d_B$), if $B$ is not monochromatic before the final move of $S_B$. Suppose that $S_B'$ does not change the colour of $x$ (which is only possible in the case $S_B' \neq S_B$). Then playing $S_B'$ in $T$ to make $B$ monochromatic cannot change the colour of any vertex in $A$, so if we play $S_B'$ in $T$ and then play $S_A$, this will still flood $A$ with colour $d_A$. Moreover, as playing $S_B'$ has not changed the colour of $x$, playing $S_A$ will still change the colour of $x$, thus linking all of $B$ to $A$ and so flooding $T$ with colour $d_A$. Hence, in this case, we have $$m(T,\omega,d_A) \leq |S_B'| + |S_A|,$$ and so, as we must in this case have $|S_B'| < |S_B|$, $$m(T,\omega,d_B) \leq 1 + m(T,\omega,d_A) \leq 1 + |S_B'| + |S_A| \leq |S_A| + |S_B|,$$ as required. Suppose now that $S_B'$ does change the colour of $x$. Before the final move of ${S_B}'$ there are $r \geq 2$ monochromatic components in $B$ (all but one of which have colour $d'$), with vertex-sets $B_1, \ldots, B_r$. For $1 \leq i \leq r$, set $S_i$ to be the subsequence of ${S_B}'$ consisting of moves played in $B_i$, and note that these subsequences partition ${S_B}'$. Observe also that playing $S_i$ in $T[B_i]$ gives $B_i$ colour $d'$, so $m(B_i,\omega,d') \leq |S_i|$. Let $B_1$ be the unique component adjacent to $A$, and set $T_1 = T[A \cup B_1]$. Note that $S_A$ floods $T_1[A]$ with colour $d_A$, and $S_1$ floods $T_1[B_1]$ with colour $d'$. Moreover, as playing $S_A$ in $T$ changes the colour of $x$, playing $S_A$ in $T_1$ must also change the colour of $x$. Also, at least one move from $S_B$ changes the colour of $x$, the unique vertex of $B_1$ with a neighbour in $A$, and this move must belong to $S_1$. Thus we can apply the inductive hypothesis to see that $$m(T_1, \omega, d') \leq |S_A| + |S_1|.$$ Now suppose without loss of generality that $B_2$ is adjacent to $B_1$. We can then apply Corollary \ref{non-interference} to $T_2 = T[V(T_1) \cup B_2]$ to see that $$m(T_2, \omega, d') \leq m(T_1,\omega,d') + m(B_2,\omega,d') \leq |S_A| + |S_1| + |S_2|.$$ Continuing in this way, each time adding an adjacent component, we see that $$m(T,\omega,d') \leq |S_A| + \sum_{i=1}^r |S_i| = |S_A| + |{S_B}'|.$$ Now, if ${S_B}' = S_B$, this immediately gives the desired result, as $d' = d_B$. Otherwise, note that $|S_B| \geq |{S_B}'|+1$ and so $$m(T,\omega,d_B) \leq m(T,\omega,d') + 1 \leq |S_A| + |{S_B}'| + 1 \leq |S_A| + |S_B|,$$ as required. \end{proof} In the next result, we exploit this lemma to give a strengthening of Corollary \ref{non-interference} under additional assumptions. This can be applied to show that, in certain situations, we may assume that \emph{no} optimal sequence to flood a subtree can change the colour of any vertex outside the subtree, when played in a larger tree. 
\begin{prop} Let $T$ be a tree, with colouring $\omega$ from colour-set $C$, and let $X$ and $Y$ be disjoint subtrees of $T$ such that $T[V(X) \cup V(Y)]$ is connected, and such that \begin{itemize} \item there is a sequence $S_X$ of $\alpha$ moves that floods $X$ with some colour $d' \in C$, \item there is a sequence $S_Y$ of $\beta$ moves that floods $Y$ with colour $d$, and that changes the colour of the unique vertex $v$ of $Y$ with a neighbour in $X$, and \item playing $S_X$ in $T$ changes the colour of at least one vertex in $Y$. \end{itemize} Then, setting $T' = T \setminus (V(X) \cup V(Y))$ and taking $\omega'$ to be the restriction of $\omega$ to $V(T')$, we have $$m(T,\omega,d) \leq m(T',\omega',d) + \alpha + \beta.$$ \label{strong-non-int} \end{prop} \begin{proof} Note that $S_X$ must change the colour of $v$, so we can apply Lemma \ref{compatible-ordering} to see that $$m(T[V(X) \cup V(Y)],\omega,d) \leq |S_X| + |S_Y| = \alpha + \beta.$$ Corollary \ref{non-interference} then gives \begin{align*} m(T,\omega,d) & \leq m(T[V(X) \cup V(Y)], \omega,d) + m(T',\omega',d) \\ & \leq m(T',\omega',d) + \alpha + \beta, \end{align*} as required. \end{proof} Before proving the main result of this section, we need one further result, relating the number of moves required to flood the same graph with different initial colourings. \begin{lma} Let $G$ be a connected graph, and let $\omega$ and $\omega'$ be two colourings of the vertices of $G$ (from colour-set $C$). Let $\mathcal{A}$ be the set of all monochromatic components of $G$ with respect to $\omega'$, and for each $A \in \mathcal{A}$ let $c_A$ be the colour of $A$ under $\omega'$. Then, for any $d \in C$, $$m(G,\omega,d) \leq m(G,\omega',d) + \sum_{A \in \mathcal{A}} m(A,\omega,c_A).$$ \label{change-colouring} \end{lma} \begin{proof} We proceed by induction on $m(G,\omega',d)$. Note that if $m(G,\omega',d) = 0$ then the result is trivially true: in this case $\mathcal{A}$ contains a single monochromatic component $G$, with colour $d$, so we have $$m(G,\omega',d) + \sum_{A \in \mathcal{A}} m(A,\omega,c_A) = m(G,\omega,d).$$ Suppose now that $m(G,\omega',d) > 0$, and let $S$ be an optimal sequence of moves to flood $G$ with colour $d$, when the initial colouring is $\omega'$. We proceed by case analysis on the final move, $\alpha$, of $S$. First suppose that $G$ is already monochromatic before $\alpha$, so this final move just changes the colour of the entire graph to $d$ from some colour $d' \in C$. In this case, $m(G,\omega',d) = m(G,\omega',d') + 1$, and so we may apply the inductive hypothesis to see that \begin{align*} m(G,\omega,d) & \leq 1 + m(G,\omega,d') \\ & \leq 1 + m(G,\omega',d') + \sum_{A \in \mathcal{A}} m(A,\omega,c_A) \\ & = m(G,\omega',d) + \sum_{A \in \mathcal{A}} m(A,\omega,c_A), \end{align*} as required. Now suppose that $G$ is not monochromatic before $\alpha$, and so this move links monochromatic components $X_1, \ldots, X_r$. We may assume that $\alpha$ changes the colour of $X_1$ from $d'$ to $d$, and that all the components $X_2, \ldots, X_r$ have colour $d$ before $\alpha$. Let $S_i$ denote the subsequence of $S$ consisting of moves played in $X_i$, and observe that playing $S_i$ in the isolated subgraph $X_i$ must flood this graph with colour $d$, so $m(X_i,\omega',d) \leq |S_i|$. Note that, as no move can split a monochromatic component, the sets $\mathcal{A}_i = \{A \in \mathcal{A}: A \subseteq X_i\}$ (for $1 \leq i \leq r$) partition $\mathcal{A}$. 
Observe that, for $2 \leq i \leq r$, $m(X_i, \omega', d) < |S| = m(G,\omega',d)$, and so we may apply the inductive hypothesis to see that \begin{align*} m(X_i,\omega,d) & \leq m(X_i,\omega',d) + \sum_{A \in \mathcal{A}_i} m(A,\omega,c_A) \\ & \leq |S_i| + \sum_{A \in \mathcal{A}_i} m(A,\omega,c_A). \end{align*} Similarly, the inductive hypothesis gives $$m(X_1,\omega,d') \leq m(X_1,\omega',d') + \sum_{A \in \mathcal{A}_1} m(A,\omega,c_A),$$ and so, as $m(X_1,\omega',d') \leq |S_1| - 1$, we have \begin{align*} m(X_1,\omega,d) & \leq 1 + m(X_1,\omega,d') \\ & \leq |S_1| + \sum_{A \in \mathcal{A}_1} m(A,\omega,c_A). \end{align*} Now we can apply Corollary \ref{non-interference} to see that $$m(G,\omega,d) \leq \sum_{i=1}^r m(X_i,\omega,d),$$ and so \begin{align*} m(G,\omega,d) & \leq \sum_{i=1}^r (|S_i| + \sum_{A \in \mathcal{A}_i} m(A,\omega,c_A)) \\ & = |S| + \sum_{A \in \mathcal{A}} m(A,\omega,c_A) \\ & = m(G,\omega',d) + \sum_{A \in \mathcal{A}} m(A,\omega,c_A), \end{align*} completing the proof. \end{proof} Using the previous results, we are now ready to prove the key result of this section. \begin{lma} Let $G$ with colouring $\omega$ (from colour-set $C$) be the graph corresponding to a $2 \times n$ flood-it board $B$, let $H$ be a connected induced subgraph of $G$, and let $u$ and $w$ be vertices lying in the leftmost and rightmost columns of $H$ respectively. Then, for any $d \in C$, there exists a $d$-minimal spanning tree $T$ for $H$ such that $\bare(T) \subseteq P(T,u,w)$. \label{leafy-path} \end{lma} \begin{proof} We proceed by induction on $m(H,\omega,d)$. Note that the result is trivially true if $m(H,\omega,d) = 0$ as the graph is initially monochromatic with colour $d$ and so any spanning tree will do. Suppose then that $m(H,\omega,d) > 0$. Let $S$ be an optimal sequence to flood $H$ with colour $d$, and suppose that the last move of $S$ is $\alpha$. If $H$ is monochromatic in some colour $d' \in C$ before $\alpha$ is played, and so this final move just changes the colour of the whole graph to $d$, we see that $m(H,\omega,d') \leq m(H,\omega,d) - 1$. Thus we may apply the inductive hypothesis to obtain a $d'$-minimal spanning tree $T$ for $H$ such that $\bare(T) \subseteq P(T,u,w)$. But then $$m(T,\omega,d) \leq 1 + m(T,\omega,d') = 1 + m(H,\omega,d') \leq m(H,\omega,d),$$ and so $T$ is also a $d$-minimal spanning tree for $H$. Thus we may assume that $H$ is not monochromatic immediately before $\alpha$ is played. This means that $\alpha$ must change the colour of a monochromatic component $A$ from some $d' \in C$ to $d$, where $H \setminus A$ is nonempty and has colour $d$ before $\alpha$ is played. Since $H$ is a connected induced subgraph of a $2 \times n$ board, $H \setminus A$ has at most one component $L$ which contains vertices lying in columns to the left of all columns containing a vertex of $A$, and correspondingly at most one component $R$ containing vertices lying in columns entirely to the right of $A$. There may additionally be some components $X_1, \ldots, X_r$ of $H \setminus A$ which contain only vertices which lie in the same column as some vertex of $A$. A possible structure for $H$ is illustrated in Figure \ref{H-cpts}. In the remainder of the proof, we will exploit the structure of $H \setminus A$ to define a $d$-minimal spanning tree $T$ for $H$ whose non-leaf vertices lie on $P(T,u,w)$. 
\begin{figure} \centering \includegraphics[width = 0.9 \linewidth]{H-cpts} \caption{Monochromatic components of $H$ before the final move is played.} \label{H-cpts} \end{figure} Observe that we may have $L=R$, as illustrated in Figure \ref{L=R}; we will deal with this case later, so for the moment we assume that $L \neq R$. \begin{figure} \centering \includegraphics[width = 0.7 \linewidth]{L=R} \caption{It is possible that $L = R$.} \label{L=R} \end{figure} Set $v$ (respectively $v'$) to be any vertex lying in the leftmost (respectively rightmost) column of $A$ that has at least one neighbour in $L$ (respectively $R$); if $L$ (respectively $R$) is empty, we set $v=u$ (respectively $v' = w$). If two vertices of $L$ lie in the rightmost column of $L$, one of these must be adjacent to $v$, in which case we set this vertex to be $u'$; otherwise $u'$ is defined to be the unique vertex of $L$ that lies in the rightmost column. We define $w'$ symmetrically, so that $w'$ lies in the leftmost column of $R$, and so that if there is a choice of vertices of $R$ in this column then $w'$ is the vertex adjacent to $v'$. Note that $m(L,\omega,d) < m(H,\omega,d)$ and so, by the inductive hypothesis, there exists a $d$-minimal spanning tree $T_L$ for $L$ such that $\bare(T_L) \subseteq P(T_L,u,u')$. Similarly, there exists a $d$-minimal spanning tree $T_R$ for $R$ such that $\bare(T_R) \subseteq P(T_R,w',w)$, and a $d'$-minimal spanning tree $T_A$ for $A$ such that $\bare(T_A) \subseteq P(T_A,v,v')$. Let $S_A$ be an optimal sequence of moves to flood $T_A$ with colour $d'$, and $S_L$ and $S_R$ be optimal sequences to flood $T_L$ and $T_R$ respectively with colour $d$. Observe that, as well as containing vertices that lie in columns to the left (respectively right) of $A$, $L$ (respectively $R$) may additionally contain some vertices that lie in the same column as a vertex of $A$. We set $T_L'$ to be the subtree of $T_L$ induced only by those vertices in $L$ that lie in the same column as or to the left of the leftmost vertex of $A$, and define $T_R'$ symmetrically. We further define $S_L'$ (respectively $S_R'$) to be the subsequence of $S_L$ (respectively $S_R$) consisting of moves that change the colour of at least one vertex in $T_L'$ (respectively $T_R'$), and note then that $S_L'$ (respectively $S_R'$) floods $T_L'$ (respectively $T_R'$) with colour $d$ (implying that $m(T_L',\omega,d) \leq |S_L'|$, and $m(T_R',\omega,d) \leq |S_R'|$). Now set $T_A'$ to be the spanning tree for $H \setminus (T_L' \cup T_R')$ obtained from $T_A$ by adding an edge from every vertex $z$ of this subgraph that does \emph{not} lie in $A$ to the vertex of $A$ that lies in the same column as $z$ (and observe that $\bare(T_A') \subseteq P(T_A',v,v')$). Finally, we obtain a spanning tree $T$ for $H$ by connecting $T_L'$, $T_R'$ and $T_A'$. If $T_L' = T_L$, we use the edge $u'v$ to connect $T_L'$ and $T_A'$; otherwise we use the edge of $T_L$ with exactly one endpoint in $T_L'$. In either case we must have $\bare(T[V(T_L') \cup V(T_A')]) \subseteq P(T,u,v')$. Similarly, if $T_R' = T_R$ then we connect $T_R'$ and $T_A'$ with $v'w'$, and otherwise use the edge of $T_R$ with exactly one endpoint in $T_R'$. The construction of $T$ is illustrated in Figure \ref{T-construction}. It is clear from the construction that $T$ is a spanning tree for $H$, and that $\bare(T) \subseteq P(T,u,w)$; we will argue that in fact $T$ is a $d$-minimal spanning tree for $H$. 
\begin{figure} \centering \includegraphics[width = 0.9 \linewidth]{T-construction} \caption{The spanning tree $T$.} \label{T-construction} \end{figure} For the rest of the argument, it will be useful to identify two important vertices of $T$. We set $x$ to be the last vertex on the path $P(T,u,w)$ before $A$, and $y$ the first vertex after $A$, when this path is traversed from left to right (as illustrated in Figure \ref{T-construction}); if $L$ (respectively $R$) is empty then $x$ (respectively $y$) is not defined. Assuming $L$ (respectively $R$) is nonempty, let $a_x$ (respectively $a_y$) be the neighbour in $A$ of $x$ (respectively $y$). Note that $x \in T_L'$ and $y \in T_R'$. Having defined the spanning tree $T$ for $H$, we now consider how to flood $T$ with colour $d$. First, observe that \begin{align} |S| & \geq 1 + m(A,\omega,d') + m(L,\omega,d) + m(R,\omega,d) \nonumber \\ & \qquad \qquad \qquad \qquad + \sum_{i=1}^r m(X_i,\omega,d) \nonumber \\ & \geq 1 + |S_A| + |S_L| + |S_R| + | \bigcup_{i=1}^r \col(X_i,\omega) \setminus \{d\}|. \label{|S|} \end{align} We will say that a colour $\tilde{d} \in \col(V(T_A') \setminus V(A), \omega) \setminus \{d\}$ is \emph{autonomous} either if $\tilde{d}$ appears in the initial colouring in one or more of the components $X_i$, or else if a move of $S_L$ or $S_R$ is played in a monochromatic component of colour $\tilde{d}$ that does not intersect $T_L'$ or $T_R'$. Let $C_A$ denote the set of all autonomous colours. Note then that, for each $v \in V(T_L \setminus T_L')$ that does not have colour $d$ initially, at least one of the following must hold in order for $v$ to be given colour $d$: \begin{enumerate} \item $\col(\{v\},\omega) \in C_A$, or \item either initially, or after some move of $S_L$, $x$ has colour $\col(\{v\},\omega)$. \end{enumerate} Let $W_L$ be the set of vertices $v \in V(T_L \setminus T_L')$ such that the first statement holds, so $v \in W_L$ if and only if $\col(\{v\},\omega) \in C_A$. We then set $U_L = V(T_L \setminus T_L') \setminus W_L$, and note that the second statement must hold for every $v \in U_L$. We can apply exactly the same reasoning to $V(T_R \setminus T_R')$ (replacing $x$ with $y$), and define $U_R$ and $W_R$ analogously. Observe that $|S_L| \geq |S_L'| + |\col(W_L,\omega)|$, and $|S_R| \geq |S_R'| + |\col(W_R,\omega)|$. Thus, by (\ref{|S|}), we see that \begin{align*} |S| & \geq 1 + |S_A| + |S_L'| + |S_R'| + |\col(W_L,\omega)| + |\col(W_R,\omega)| \\ & \qquad \qquad \qquad \qquad + | \bigcup_{i=1}^r \col(X_i,\omega) \setminus \{d\} | \\ & \geq 1 + |S_A| + |S_L'| + |S_R'| + |C_A|. \end{align*} In order to flood $T$ with colour $d$, we will first play $S_A$, flooding $A$, and then repeatedly change the colour of $A$ to cycle through all colours in $C_A$. Note that these first $|S_A| + |C_A|$ moves create a monochromatic component $A'$ containing $T_A' \setminus (U_L \cup U_R)$. There are now three cases to consider, depending on whether none, both or one of $U_L$ and $U_R$ are non-empty. First suppose that $U_L = U_R = \emptyset$. Note in this case that our first $|S_A| + |C_A|$ moves make $T_A'$ monochromatic in some colour, so $m(T_A',\omega,d) \leq 1 + |S_A| + |C_A|$. Thus we can apply Corollary \ref{non-interference} to see that \begin{align*} m(T,\omega,d) & \leq m(T_L',\omega,d) + m(T_A',\omega,d) + m(T_R',\omega,d) \\ & \leq 1 + |S_A| + |S_L'| + |S_R'| + |C_A| \\ & \leq |S| \\ & = m(H,\omega,d), \end{align*} as required. 
Now suppose that exactly one of $U_L$ and $U_R$ is nonempty, and without loss of generality suppose that $U_L \neq \emptyset$. We claim that playing $S_A$ in $T$ does not change the colour of any vertex in $T_L'$. Indeed, if this sequence does change the colour of a vertex in $T_L'$, it must change the colour of $x$, and this colour change will be due to moves in $S_A$ changing the colour of $a_x$. Thus, if we played $S_A$ in the tree $T_1$, obtained by connecting $T_L$, $T_A$ and $T_R$ with the edges $xa_x$ and $ya_y$, it would still change the colour of $x$ (which is the unique vertex of $T_L$ adjacent to $T_A$). However, as $U_L \neq \emptyset$, we know that $S_L$ must change the colour of $x$, and so by Proposition \ref{strong-non-int} (setting $X=T_A$, $Y=T_L$, $S_X = S_A$ and $S_Y = S_L$) we would have $m(T_1,\omega,d) \leq |S_R| + |S_L| + |S_A| < |S|$, implying (by Theorem \ref{spanning-tree}) that $m(H,\omega,d) \leq m(T_1,\omega,d) < |S| = m(H,\omega,d)$, a contradiction. We may further assume that cycling $A$ through all colours in $C_A$ does not change the colour of any vertex in $T_L'$: if $\col(\{x\},\omega) \in C_A$ we can choose this to be the last colour we play in $A$, and so our sequence will link $A$ to $x$ but will not change the colour of $x$ (or therefore of any other vertex in $T_L'$). Next, if playing $S_A$ and cycling through the colours of $C_A$ has not already linked $A'$ to $x$, we play one further move to give $A'$ the same colour as $x$. Since the sequence of moves we play up to this point does not change the colour of any vertex in $T_L'$, we can now play the sequence $S_L'$ to give every vertex in $T_L'$ colour $d$. As $x$ is in the same monochromatic component as $A'$ this will also give all vertices in $A'$ colour $d$. Moreover, playing this sequence will at some point give $x$, and hence $A'$, every colour in $\col(U_L,\omega)$, and so will link every vertex in $U_L$ to $A'$ and ultimately give these vertices colour $d$. Thus, playing $S_A$, cycling through $C_A$, if necessary linking $x$ to $A'$, and then playing $S_L'$ will flood all the vertices of $T \setminus T_R'$ with colour $d$ (as $U_R = \emptyset$), so we see that $$m(T \setminus T_R', \omega,d) \leq |S_A| + |C_A| + 1 + |S_L'|.$$ But then, once again, we can apply Corollary \ref{non-interference} to see that \begin{align*} m(T,\omega,d) & \leq m(T \setminus T_R', \omega,d) + m(T_R',\omega,d) \\ & \leq 1 + |S_A| + |C_A| + |S_L'| + |S_R'| \\ & \leq |S| \\ & = m(H,\omega,d), \end{align*} as required. For the final subcase, we suppose that $U_L, U_R \neq \emptyset$. We begin once again by playing $S_A$, cycling $A$ through all colours in $C_A$, and then (if required) playing an additional move to change the colour of the monochromatic component containing $A$ to be the same as $x$; as before we may assume that these initial moves do not change the colour of any vertex in $T_L'$. Note that, as $U_L \neq \emptyset$, the colour of $x$ must change at least once when we play $S_L'$ in $T_L'$. Set $\beta$ to be the last move in $S_L'$ to change the colour of $x$, and note then that $\beta$ must change the colour of some component $Z$, containing $x$, to $d$. Set $\bar{T_L} = T_L' \setminus Z$, and let $S_Z$ be the subsequence of $S_L'$ consisting of moves played in $Z$ (so $S_Z$ floods $Z$ with colour $d$, and $\beta$ is the final move of $S_Z$). As $Z$ is monochromatic before $\beta$, playing $S_Z \setminus \beta$ in $Z$ must flood this component with some colour $d_Z \in C$. 
Observe also that the sequence $S_L' \setminus S_Z$ must, when played in the forest $\bar{T_L}$, give every vertex of $\bar{T_L}$ colour $d$. Suppose that, after playing $S_A$ and linking $A$ to $x$, we then play $S_Z \setminus \beta$. This will ensure that $x$, and hence $A$, at some point receives every colour in $\col(U_L,\omega)$ (except possibly $d$), so every vertex in $U_L$ is either linked to $A$ or has colour $d$ (in which case it will certainly end up with colour $d$, as its colour can only change if it is linked to another vertex which will ultimately be given colour $d$). Note that we now have a monochromatic component $B$ that contains $A$, $Z$ and all vertices of $T_A' \setminus U_R$ that do not initially have colour $d$. We claim that the sequence of moves we play up to this point cannot change the colour of any vertex in $T_R'$. To prove the validity of this claim, first observe that $S_R'$, played in $T_R$, floods a subtree $T_R''$ of $T_R$ with colour $d$, where $T_R' \cup U_R \subseteq T_R''$. Now set $T_2$ to be the spanning tree for $H$ obtained by connecting $T_R''$ and $T \setminus V(T_R'')$ with the edge $ya_y$. It is clear that, if our sequence of moves so far changes the colour of any vertex in $T_R'$ when played in $T$, playing the same sequence in $T_2$ would change the colour of $y \in T_R''$. However, as $U_R \neq \emptyset$, we also know that $S_R$ changes the colour of $y$. Note also that all vertices of $T_2 \setminus (B \cup T_R'')$ that do not belong to $\bar{T_L}$ have colour $d$ initially, so $m(\bar{T_L},\omega,d)$ moves suffice to flood $T_2 \setminus (B \cup T_R'')$ with colour $d$. We can now apply Proposition \ref{strong-non-int}, setting $X = B \setminus T_R''$, $S_X$ to be the sequence of moves we have played up to this point, $Y = T_R''$ and $S_Y = S_R'$ to see that \begin{align*} m(T_2,\omega,d) & \leq m(\bar{T_L},\omega,d) + |S_A| + 1 + |S_Z| - 1 + |C_A| + |S_R'| \\ & \leq |S_L'| - |S_Z| + |S_A| + |S_Z| + |C_A| + |S_R'| \\ & = |S_L'| + |S_R'| + |S_A| + |C_A| \\ & < |S|. \end{align*} Theorem \ref{spanning-tree} would then imply that $$m(H,\omega,d) \leq m(T_2,\omega,d) < |S| = m(H,\omega,d),$$ a contradiction. If the monochromatic component containing $A$ does not already have the same colour as $y$, we now play one further move to link it to this vertex (and note that such a move will not change the colour of any vertex in $T_R'$). Hence, if we now play $S_R'$, this will flood $T_R'$ with colour $d$; as $A$ and $y$ lie in the same monochromatic component before these moves are played, this sequence will also give every vertex in the same monochromatic component as $A$ colour $d$. Moreover, linking $A$ to $y$ and playing $S_R'$ will at some point give $y$, and hence $A$, every colour in $\col(U_R,\omega)$, and so all vertices in $U_R$ will be linked to $A$ and thus end up with colour $d$. Hence this sequence of moves gives every vertex in $T \setminus \bar{T_L}$ colour $d$, and so we have \begin{align*} m(T \setminus \bar{T_L}, \omega,d) & \leq |S_A| + |C_A| + 1 + |S_Z| - 1 + 1 + |S_R'| \\ & = |S_A| + |C_A| + |S_Z| + |S_R'| + 1. \end{align*} Finally, we apply Corollary \ref{non-interference} to give \begin{align*} m(T,\omega,d) & \leq m(T \setminus \bar{T_L}, \omega,d) + m(\bar{T_L}, \omega, d) \\ & \leq |S_A| + |C_A| + |S_Z| + |S_R'| + 1 + |S_L'| - |S_Z| \\ & = |S_A| + |C_A| + |S_R'| + |S_L'| + 1 \\ & \leq |S| \\ & = m(H,\omega,d), \end{align*} as required. This completes the proof in the case that $L \neq R$. 
It remains to consider the case in which $L=R$, as in Figure \ref{L=R}. We define $T$ exactly as before (as shown in Figure \ref{T,L=R}), and again identify the important vertices $x$ and $y$. The previous reasoning only fails in the case $L=R$ because it is not necessarily true, in this case, that $S_L'$ floods $T_L'$ with colour $d$ and $S_R'$ floods $T_R'$ with colour $d$. However, by considering more carefully the sequence of moves that floods $H \setminus A$, we are able to deal with this problem. \begin{figure} \centering \includegraphics[width=0.9 \linewidth]{T,L=R} \caption{The construction of $T$ in the case that $L=R$.} \label{T,L=R} \end{figure} If $x$ and $y$ belong to the same monochromatic component $T'$ of $T_L (=T_R)$, with colour $d_{xy}$, under the initial colouring $\omega$, then we can flood $T[V(T') \cup V(A)]$ by playing $S_A$ and then changing the colour of $A$ to $d_{xy}$, so $m(T[V(T') \cup V(A)], \omega, d_{xy}) \leq |S_A| + 1$. Let $\omega'$ be the colouring of $T$ which agrees with $\omega$ on every vertex in $T_L$, and gives every vertex in $A$ colour $d_{xy}$. Then $T$ with colouring $\omega'$ is equivalent (when monochromatic components are contracted) to $T_L$ with colouring $\omega$, implying that $m(T,\omega',d) = m(T_L,\omega,d) \leq |S_L|$. We can then apply Lemma \ref{change-colouring} to give \begin{align*} m(T,\omega,d) & \leq m(T,\omega',d) + m(T[V(T') \cup V(A)], \omega, d_{xy}) \\ & \leq |S_L| + |S_A| + 1 \\ & = |S| \\ & = m(H,\omega,d). \end{align*} So we may assume that $x$ and $y$ do not belong to the same monochromatic component initially. Let $S'$ be the initial segment of $S_L$ up to and including the move that first links $x$ and $y$; let $T'$ be the monochromatic component of $T \setminus A$ that contains $x$ and $y$ at this point, and suppose that it has colour $\bar{d}$ and that $k$ moves of $S'$ are played in $T'$. We now consider flooding the subtree $T[V(T') \cup V(A)]$ with colour $\bar{d}$. Note that the subsequence of $S'$ consisting of moves played in $T_L' \cap T'$ must in this case flood $T_L' \cap T'$ with colour $\bar{d}$, and similarly the subsequence consisting of moves played in $T_R' \cap T'$ must flood $T_R' \cap T'$ with colour $\bar{d}$. Thus, applying exactly the same arguments as in the case that $L \neq R$, we see that $$m(V(T') \cup V(A),\omega,\bar{d}) \leq |S_A| + k + 1.$$ Now set $\omega'$ to be the colouring of $V(T)$ that agrees with $S'(\omega,T_L)$ on $T_L$ and gives all vertices of $A$ colour $\bar{d}$. Note that $T$ with colouring $\omega'$ is equivalent (when monochromatic components are contracted) to $T_L$ with colouring $S'(\omega,T_L)$, and so $m(T,\omega',d) = m(T_L,S'(\omega,T_L),d) \leq |S_L| - |S'|$. If $\mathcal{A}$ is the set of monochromatic components of $T$ with respect to $\omega'$, and each $A \in \mathcal{A}$ has colour $c_A$ under this colouring, then observe that $$\sum_{\substack{A \in \mathcal{A} \\ A \neq T[V(T') \cup V(A)]}} m(A,\omega,c_A) \leq |S'| - k,$$ and so $$\sum_{A \in \mathcal{A}} m(A,\omega,c_A) \leq |S'| - k + |S_A| + k + 1 = |S'| + |S_A| + 1.$$ We can now apply Lemma \ref{change-colouring} to see that \begin{align*} m(T,\omega,d) & \leq m(T,\omega',d) + \sum_{A \in \mathcal{A}} m(A,\omega,c_A) \\ & \leq |S_L| - |S'| + |S'| + |S_A| + 1 \\ & = |S| \\ & = m(H,\omega,d) \end{align*} in this case also, completing the proof. 
\end{proof} In our analysis of the algorithm in the next section, we will need one additional result: we show in the next lemma that any tree can be flooded by an optimal sequence in which no moves are played at leaves. \begin{lma} Let $T$ be any tree, and $\omega$ a colouring of the vertices of $T$. Then there exists a sequence of moves $S$, of length $m(T,\omega)$, which makes $T$ monochromatic and in which all moves are played in $\bare(T)$. \label{no-leaf-moves} \end{lma} \begin{proof} Let $S_0$ be any optimal sequence to flood $T$, and set $S_0'$ to be the subsequence of $S_0$ consisting of moves that change the colour of a vertex in $\bare(T)$. Note that we may assume without loss of generality that all moves of $S_0'$ are played in $\bare(T)$. Note further that $S_0 \setminus S_0'$ contains only moves played at leaves, and let $U$ be the set of leaves in which moves of $S_0 \setminus S_0'$ are played. Observe that playing $S_0'$ in $T$ will make $T \setminus U$ monochromatic, and so we can flood the entire tree by playing a sequence $S$ which consists of $S_0'$ followed by a further $|\col(U,\omega)|$ moves, cycling through the colours still present in leaves of $T$ (playing all moves in $\bare(T)$). Thus $$|S| \leq |S_0'| + |\col(U,\omega)| \leq |S_0'| + |U|.$$ However, it is clear that $|S_0| \geq |S_0'| + |U|$, as $S_0 \setminus S_0'$ contains at least one move played at each vertex in $U$. Hence we see that $|S| \leq |S_0|$, and so $S$ is an optimal sequence to flood $T$ in which all moves are played in $\bare(T)$, as required. \end{proof} \subsection{The algorithm} \label{algorithm} In this section we describe our algorithm to solve $c$-\textsc{Free-Flood-It} on $2 \times n$ boards, and use results from the previous section to prove its correctness. We begin with some further definitions. For any section $B[b_1,b_2]$, we define $\mathcal{T}[b_1,b_2]$ to be the set of all spanning trees for $B[b_1,b_2]$. Given any $2 \times n$ Flood-It board $B$, corresponding to a graph $G$ with colouring $\omega$ from colour-set $C$, we define a set of vectors $Z(B)$, where \begin{align*} Z(B) = \{(b_1,b_2, & r_1, r_2, d, I): \\ & B[b_1,b_2] \text{ is a section}, \\ & r_1, r_2 \in V(B[b_1,b_2]), \\ & \exists T \in \mathcal{T}[b_1,b_2] \text{ such that } \bare(T) \subseteq P(T,r_1,r_2), \\ & r_1 \text{ incident with } b_1, r_2 \text{ incident with } b_2, \\ & d \in C, \\ & I \subseteq C \}. \end{align*} Note that there always exists a tree $T \in \mathcal{T}[b_1,b_2]$ such that $\bare(T) \subseteq P(T,r_1,r_2)$ unless one of the following holds: \begin{enumerate} \item there is more than one vertex of $B[b_1,b_2]$ lying strictly to the left of $r_1$ or strictly to the right of $r_2$, or \item there is exactly one vertex of $B[b_1,b_2]$ lying strictly to the left of $r_1$ (respectively to the right of $r_2$), which is not adjacent to $r_1$ (respectively $r_2$) and whose neighbour in the same column as $r_1$ (respectively $r_2$) has no neighbour in $B[b_1,b_2]$ other than $r_1$ (respectively $r_2$). \end{enumerate} Thus we can check whether this condition is satisfied in constant time. We now introduce a function $f$ which is closely related to the minimum number of moves required to flood a $2 \times n$ board. 
For any $\mathbf{z}=(b_1,b_2,r_1,r_2,d,I) \in Z(B)$ we define $f(\mathbf{z})$ to be the minimum, taken over all $T \in \mathcal{T}[b_1,b_2]$ such that $\bare(T) \subseteq P(T,r_1,r_2)$, of the number of moves that must be played in $P(T,r_1,r_2)$ to flood $P(T,r_1,r_2)$ with colour $d$, and link to $P(T,r_1,r_2)$ all leaves of $T$ that do not have colours from $I$. It follows immediately from Lemmas \ref{leafy-path} and \ref{no-leaf-moves} that $$m(G,\omega) = \min_{\substack{d \in C \\ r_1 \text{ incident with } b_L \\ r_2 \text{ incident with } b_R}} f(b_L,b_R,r_1,r_2,d,\emptyset).$$ Our algorithm in fact computes recursively a function $f^*$, with the same parameters as $f$. We will argue that, for every $\mathbf{z} \in Z(B)$, $f^*(\mathbf{z}) = f(\mathbf{z})$ and hence that it suffices to compute all values of $f^*$ in order to calculate $m(G,\omega)$. The first step of the algorithm is to initialise certain values of $f^*$ to zero. We set $f^*(b_1,b_2,r_1,r_2,d,I) = 0$ if and only if, under the initial colouring, there exists an $r_1$-$r_2$ path of colour $d$ in $B[b_1,b_2]$, and all vertices in $B[b_1,b_2]$ that do not lie on this path are adjacent to the path and have colours from $I \cup \{d\}$. All other values of $f^*(\mathbf{z})$ are initially set to infinity. Note that, under this definition, $f^*(\mathbf{z}) = 0 \iff f(\mathbf{z}) = 0$, and that for each $\mathbf{z} \in Z(B)$ we can easily determine in time $O(n)$ whether $f^*$ should be initialised to zero or infinity. In order to define further values of $f^*$, we introduce two more functions. First, for any $\mathbf{z} = (b_1,b_2,r_1,r_2,d,I) \in Z(B)$, we set $$f_1(b_1,b_2,r_1,r_2,d,I) = 1 + \min_{d' \in C} \{f^*(b_1,b_2,r_1,r_2,d',I \cup \{d\})\}.$$ We also define, for any $\mathbf{z} \in Z(B)$, \begin{align*} f_2(b_1,b_2,r_1, & r_2,d,I) = \\ & \min_{\substack{(b_1,b,r_1,x_1,d,I) \in Z(B) \\ (b,b_2,x_2,r_2,d,I) \in Z(B) \\ b_1 < b < b_2 \\ x_1x_2 \in E(G)}} \{f^*(b_1,b,r_1,x_1,d,I) + f^*(b,b_2,x_2,r_2,d,I)\}. \end{align*} Finally, we set $$f^*(\mathbf{z}) = \min \{f_1(\mathbf{z}), f_2(\mathbf{z})\}.$$ For the reasoning below, it will be useful to introduce another function $\theta$, taking the same parameters as $f$ and $f^*$. For any $\mathbf{z} = (b_1,b_2,r_1,r_2,d,I) \in Z(B)$, we define $$\theta(\mathbf{z}) = f^*(\mathbf{z}) + |\{b: b \text{ a border of } B, b_1 < b < b_2\}|.$$ In the following two lemmas, we show that $f^*(\mathbf{z}) = f(\mathbf{z})$ for all $\mathbf{z} \in Z(B)$, as claimed. We begin by demonstrating that $f^*(\mathbf{z})$ gives an upper bound for $f(\mathbf{z})$. \begin{lma} Let $G$ with colouring $\omega$ (from colour-set $C$) be the coloured graph corresponding to a $2 \times n$ Flood-It board $B$. Then $$f(\mathbf{z}) \leq f^*(\mathbf{z})$$ for all $\mathbf{z} = (b_1,b_2,r_1,r_2,d,I) \in Z(B)$. \label{f*>=f} \end{lma} \begin{proof} We proceed by induction on $\theta(\mathbf{z})$. Recall that we have equality between $f(\mathbf{z})$ and $f^*(\mathbf{z})$ whenever $f^*(\mathbf{z}) = 0$, so certainly the base case for $\theta(\mathbf{z})=0$ must hold. Assume therefore that $f^*(\mathbf{z}) > 0$, and that the result holds for all $\mathbf{z}'$ with $\theta(\mathbf{z}') < \theta(\mathbf{z})$. Since $f^*(\mathbf{z}) > 0$, we must have $f^*(\mathbf{z}) \in \{f_1(\mathbf{z}),f_2(\mathbf{z})\}$. Suppose first that $f^*(\mathbf{z}) = f_1(\mathbf{z})$. 
Then, for some $d' \in C$, \begin{align*} f^*(b_1,b_2,r_1,r_2,d,I) & = 1 + f^*(b_1,b_2,r_1,r_2,d',I \cup \{d\}) \\ & \qquad \qquad \qquad \qquad \text{by definition of $f_1$} \\ & \geq 1 + f(b_1,b_2,r_1,r_2,d',I \cup \{d\}) \\ & \qquad \qquad \qquad \qquad \text{by inductive hypothesis.} \end{align*} But then we know, by definition of $f$, that there exists $T \in \mathcal{T}[b_1,b_2]$ and a sequence $S$ of $f(b_1,b_2,r_1,r_2,d',I \cup \{d\})$ moves, all played in $P(T,r_1,r_2)$, which, when played in $T$, floods $P(T,r_1,r_2) \supseteq \bare(T)$ with colour $d'$ and links all leaves to $P(T,r_1,r_2)$ except possibly those with colours from $I \cup \{d\}$. By appending one further move to $S$, which changes the colour of $P(T,r_1,r_2)$ to $d$, we obtain a sequence $S'$ of length $f(b_1,b_2,r_1,r_2,d',I \cup \{d\}) + 1$ (with all moves played in $P(T,r_1,r_2)$) which, when played in $T$, floods $P(T,r_1,r_2)$ with colour $d$ and is such that all leaves of $T$ not linked to $P(T,r_1,r_2)$ by $S$ have colours from $I$. Hence $f(b_1,b_2,r_1,r_2,d,I) \leq |S'| = 1 + f(b_1,b_2,r_1,r_2,d',I \cup \{d\})$, and so $f(b_1,b_2,r_1,r_2,d,I) \leq f^*(b_1,b_2,r_1,r_2,d,I)$, as required. Now suppose that $f^*(\mathbf{z}) = f_2(\mathbf{z})$. Then, by definition of $f_2$, there exists a border $b$ with $b_1 < b < b_2$ and an edge $x_1x_2 \in E(G)$ such that $(b_1,b,r_1,x_1,d,I), (b,b_2,x_2,r_2,d,I) \in Z(B)$ and $$f^*(b_1,b_2,r_1,r_2,d,I) = f^*(b_1,b,r_1,x_1,d,I) + f^*(b,b_2,x_2,r_2,d,I).$$ Note that $|\{b': b' \text{ a border of } B, b_1 < b' < b\}|$ and $|\{b': b' \text{ a border of } B, b < b' < b_2\}|$ are both strictly smaller than $|\{b': b' \text{ a border of } B, b_1 < b' < b_2\}|$, so by the inductive hypothesis we have $$f^*(b_1,b_2,r_1,r_2,d,I) \geq f(b_1,b,r_1,x_1,d,I) + f(b,b_2,x_2,r_2,d,I).$$ By definition of $f$, there exist trees $T_1 \in \mathcal{T}[b_1,b]$ and $T_2 \in \mathcal{T}[b,b_2]$, and sequences $S_1$ and $S_2$ of length $f(b_1,b,r_1,x_1,d,I)$ and $f(b,b_2,x_2,r_2,d,I)$ respectively, such that (for $i \in \{1,2\}$) all moves of $S_i$ are played in $P(T_i,r_i,x_i)$ and $S_i$ floods $P(T_i,r_i,x_i)$ with colour $d$, additionally linking all leaves of $T_i$ to $P(T_i,r_i,x_i)$ except possibly those with colours from $I$. Now set $T = T_1 \cup T_2 \cup \{x_1x_2\}$. It is clear that $T \in \mathcal{T}[b_1,b_2]$, and moreover that $\bare(T) \subseteq P(T,r_1,r_2)$. Suppose $T_1'$ and $T_2'$ are the subtrees of $T_1$ and $T_2$ respectively that are given colour $d$ by $S_1$ and $S_2$, and set $T' = T_1' \cup T_2' \cup \{x_1x_2\}$. Note that $P(T,r_1,r_2) \subseteq T'$ and that $\col(T \setminus T', \omega) = \col(T_1 \setminus T_1',\omega) \cup \col(T_2 \setminus T_2', \omega) \subseteq I$, so $f(b_1,b_2,r_1,r_2,d,I) \leq m(T',\omega,d)$. We can then apply Corollary \ref{non-interference} to see that \begin{align*} f(b_1,b_2,r_1,r_2,d,I) & \leq m(T',\omega,d) \\ & \leq m(T_1',\omega,d) + m(T_2',\omega,d) \\ & \leq |S_1| + |S_2| \\ & = f(b_1,b,r_1,x_1,d,I) + f(b,b_2,x_2,r_2,d,I) \\ & \leq f^*(b_1,b_2,r_1,r_2,d,I), \end{align*} completing the proof. \end{proof} Next we show that the reverse inequality also holds. \begin{lma} Let $G$ with colouring $\omega$ (from colour-set $C$) be the coloured graph corresponding to a $2 \times n$ Flood-It board $B$. Then $$f(\mathbf{z}) \geq f^*(\mathbf{z})$$ for all $\mathbf{z} = (b_1,b_2,r_1,r_2,d,I) \in Z(B)$. 
\label{f*<=f} \end{lma} \begin{proof} We proceed by induction on $f(\mathbf{z})$, noting again that we have equality in the base case for $f(\mathbf{z}) = 0$. Suppose that $f(\mathbf{z}) > 0$, and that the result holds for $\mathbf{z}'$ whenever $f(\mathbf{z}') < f(\mathbf{z})$. By definition, there exists a tree $T \in \mathcal{T}[b_1,b_2]$ and a sequence $S$ of length $f(b_1,b_2,r_1,r_2,d,I)$ such that $\bare(T) \subseteq P(T,r_1,r_2)$, all moves of $S$ are played in $P(T,r_1,r_2)$, and $S$ floods $P(T,r_1,r_2)$ with colour $d$, leaving only leaves with colours from $I$ not linked to $P(T,r_1,r_2)$. We proceed by case analysis on $\alpha$, the final move of $S$. Suppose first that $P(T,r_1,r_2)$ is already monochromatic before $\alpha$, and that this final move just changes its colour to $d$ from some $d' \in C$ (possibly flooding some additional leaves of colour $d$ in the process). In this case it is clear that $f(b_1,b_2,r_1,r_2,d',I \cup \{d\}) \leq |S| - 1$ and so we can apply the inductive hypothesis to see that $f^*(b_1,b_2,r_1,r_2,d',I \cup \{d\}) \leq f(b_1,b_2,r_1,r_2,d',I \cup \{d\})$. But then, by definition of $f_1$, we know that \begin{align*} f^*(b_1,b_2,r_1,r_2,d,I) & \leq 1 + f^*(b_1,b_2,r_1,r_2,d',I \cup \{d\}) \\ & \leq 1 + f(b_1,b_2,r_1,r_2,d',I \cup \{d\}) \\ & \leq 1 + |S| - 1 \\ & = f(b_1,b_2,r_1,r_2,d,I), \end{align*} as required. So we may assume that $P(T,r_1,r_2)$ is not monochromatic before $\alpha$: it may have either two or three monochromatic components. Suppose first that $P(T,r_1,r_2)$ has exactly three monochromatic components before $\alpha$ is played, $A_1$, $A_2$ and $A_3$; we may assume that $A_1$ and $A_3$ have colour $d$ before $\alpha$, and that this final move gives $A_2$ colour $d$ to flood the entire path. For $i \in \{1,2,3\}$, set $S_i$ to be the subsequence of $S \setminus \alpha$ consisting of moves played in $A_i$, and set $\bar{A_i}$ to be $A_i$ together with all leaves of $T$ that lie in the same column as a vertex of $A_i$ or whose only neighbour on $P(T,r_1,r_2)$ is in $A_i$. Note that $\bar{A_1}$, $\bar{A_2}$ and $\bar{A_3}$ partition the vertex set of $T$, and that $S_1$, $S_2$ and $S_3$ partition $S \setminus \alpha$. We may assume without loss of generality that $r_1 \in A_1$ and $r_2 \in A_3$. Observe that there must exist borders $b$ and $b'$, with $b_1 < b < b' < b_2$, such that $\bar{A_1} = B[b_1,b]$, $\bar{A_2} = B[b,b']$ and $\bar{A_3} = B[b',b_2]$. Set $x_1x_2$ to be the edge of $T$ such that $x_1 \in A_1$, $x_2 \in A_2$ and $y_1y_2$ the edge of $T$ such that $y_1 \in A_2$ and $y_2 \in A_3$. Note that $T[\bar{A_1}] \in \mathcal{T}[b_1,b]$, $T[\bar{A_2}] \in \mathcal{T}[b,b']$, and $T[\bar{A_3}] \in \mathcal{T}[b',b_2]$, and moreover that we have $\bare(T[\bar{A_1}]) \subseteq P(T[\bar{A_1}],r_1,x_1)$, $\bare(T[\bar{A_2}]) \subseteq P(T[\bar{A_2}],x_2,y_1)$ and $\bare(T[\bar{A_3}]) \subseteq P(T[\bar{A_3}],y_2,r_2)$. Observe also that $S_1$ is a sequence of moves played in $P(T[\bar{A_1}],r_1,x_1)$ that floods $P(T[\bar{A_1}],r_1,x_1)$ with colour $d$ and links all leaves, except possibly those with colours from $I$, to $P(T[\bar{A_1}],r_1,x_1)$, so we must have $f(b_1,b,r_1,x_1,d,I) \leq |S_1|$. Similarly, we see that $f(b',b_2,y_2,r_2,d,I) \leq |S_3|$ and $f(b,b',x_2,y_1,d',I \cup \{d\}) \leq |S_2|$. 
Since $|S_1|,|S_2|,|S_3| < |S|$, we can apply the inductive hypothesis to see that $$f^*(b_1,b,r_1,x_1,d,I) \leq f(b_1,b,r_1,x_1,d,I) \leq |S_1|$$ and $$f^*(b',b_2,y_2,r_2,d,I) \leq f(b',b_2,y_2,r_2,d,I) \leq |S_3|.$$ The inductive hypothesis also gives $$f^*(b,b',x_2,y_1,d',I \cup \{d\}) \leq f(b,b',x_2,y_1,d',I \cup \{d\}) \leq |S_2|,$$ and we can then apply the definition of $f_1$ to see that $$f^*(b,b',x_2,y_1,d,I) \leq 1 + f^*(b,b',x_2,y_1,d',I \cup \{d\}) \leq 1 + |S_2|.$$ Now we can apply the definition of $f^*$ to see that \begin{align*} f^*(b_1,b_2,r_1,r_2,d,I) & \leq f_2(b_1,b_2,r_1,r_2,d,I) \\ & \leq f^*(b_1,b,r_1,x_1,d,I) + f^*(b,b_2,x_2,r_2,d,I) \\ & \leq f^*(b_1,b,r_1,x_1,d,I) + f_2(b,b_2,x_2,r_2,d,I) \\ & \leq f^*(b_1,b,r_1,x_1,d,I) + f^*(b,b',x_2,y_1,d,I) \\ & \qquad \qquad \qquad \qquad + f^*(b',b_2,y_2,r_2,d,I) \\ & \leq 1 + |S_1| + |S_2| + |S_3| \\ & = |S| \\ & = f(b_1,b_2,r_1,r_2,d,I), \end{align*} as required. For the remaining case, in which $P(T,r_1,r_2)$ has exactly two monochromatic components before $\alpha$, we can use the same reasoning as in the previous case for three components to show that we must once again have $f^*(b_1,b_2,r_1,r_2,d,I) \leq f(b_1,b_2,r_1,r_2,d,I)$, completing the proof. \end{proof} The final step is to show that all values of $f^*$ can be computed in time $O(n^{11}2^{c})$. \begin{prop} For any $2 \times n$ Flood-It board $B$, the function $f^*(\mathbf{z})$ can be computed, for all $\mathbf{z} \in Z(B)$, in time $O(n^{11}2^{c})$. \label{complexity} \end{prop} \begin{proof} We compute values of $f^*$ recursively using a dynamic programming technique. Our table has one entry for each pair of borders, for each possible vertex incident with each of the borders, for each colour in the colour-set and for each possible subset of colours, so the total number of entries is at most $$O(n^2 \cdot n^2 \cdot n \cdot n \cdot c \cdot 2^c) = O(n^6 2^c).$$ The table is initialised by setting all values to either zero or infinity, and for each entry we can determine which of these values it should take in time at most $O(n)$, so we can initialise the entire table in time $O(n^7 2^c)$. The next step is to apply the recursive definition of $f^*$ repeatedly to all entries in the table that are not already set to zero. Each time we apply this definition to a single entry, we take the minimum of at most $O(cn^3)$ values (one for each choice of colour, plus one for each combination of a border and a pair of adjacent vertices on either side), each a combination of at most two other entries in the table, so each entry can be calculated in time $O(cn^3)$. We can therefore perform one iteration in which we apply the definition to each non-zero entry in the table in time $O(n^{9} 2^c)$. Note that once we have initialised the table, we have the correct value of $f^*(\mathbf{z})$ for any $\mathbf{z}$ such that $\theta(\mathbf{z}) = 0$. Moreover, the value of $f^*(\mathbf{z})$ depends only on values of $f^*(\mathbf{z}')$ where $\theta(\mathbf{z}') < \theta(\mathbf{z})$, so after $k$ iterations we will have correctly computed the value of $f^*(\mathbf{z})$ for all $\mathbf{z}$ with $\theta(\mathbf{z}) \leq k$. Note that for every $\mathbf{z} \in Z(B)$, $\theta(\mathbf{z}) \leq 2n + n^2$, as there are at most $n^2$ borders in total, and no more than $2n$ moves can be required to flood a graph with at most this many vertices, so $n^2 + 2n$ iterations are sufficient to guarantee we have computed all values of $f^*$ correctly. 
Thus, we can compute all values of $f^*(\mathbf{z})$ for $\mathbf{z} \in Z(B)$ in time $O(n^{11} 2^{c})$, as required. \end{proof} We now combine the previous three results to give the proof of our main theorem. \begin{proof}[Proof of Theorem \ref{2xn-fpt}] Recall that, from the definition of $f$ and Lemmas \ref{leafy-path} and \ref{no-leaf-moves}, $$m(G,\omega) = \min_{\substack{d \in C \\ r_1 \text{ incident with } b_L \\ r_2 \text{ incident with } b_R}} f(b_L,b_R,r_1,r_2,d,\emptyset).$$ Thus, in order to compute $m(G,\omega)$ in time $O(n^{11} 2^{c})$, it suffices to compute all relevant values of $f$ in time $O(n^{11} 2^{c})$. However, we know from Lemmas \ref{f*>=f} and \ref{f*<=f} that $f(\mathbf{z}) = f^*(\mathbf{z})$ for all $\mathbf{z} \in Z(B)$, and from Proposition \ref{complexity} we know that we can compute $f^*(\mathbf{z})$ for all $\mathbf{z} \in Z(B)$ in time $O(n^{11} 2^{c})$. This completes the proof of the theorem. \end{proof} \section{FREE FLOOD IT on $2 \times n$ boards} \label{NPhard} In this section we prove the following theorem. \begin{thm} \textsc{Free-Flood-It} remains NP-hard when restricted to $2 \times n$ boards. \label{2xnNP} \end{thm} This is somewhat surprising, as we have seen in the previous section that $c$-\textsc{Free-Flood-It} can be solved in polynomial time on $2 \times n$ boards, while \cite{clifford} gives a linear time algorithm to solve \textsc{Fixed Flood It} in this situation. We demonstrate here that the problem is almost certainly not in \textbf{P} if we remove both these restrictions (that moves are always played at the same vertex, or that the number of colours is bounded). This is the first class of graphs for which such a result has been shown. The proof is by means of a reduction from Vertex Cover, shown to be NP-hard by Karp in \cite{karp72}. Given a graph $G=(V,E)$, we construct a $2 \times n$ Flood-It board $B_G$ as follows. Suppose $E = \{e_1, \ldots, e_m\}$. For each edge $e=uv \in E$ we construct the gadget $G'_e$, as illustrated in Figure \ref{gadget-G_e'}. We will refer to the single-square components incident with the bottom edge in $G_e'$ as \emph{islands}. $G_e'$ is then embedded in the larger gadget $G_e$, as shown in Figure \ref{gadget-G_e}. Distinct colours $x_1^e, \ldots, x_r^e$ are used for each $e$, where $r = 2m+|V|$. We then obtain the board $B_G$ by placing these gadgets $G_e$ in a row, as illustrated in Figure \ref{board-2xn-inf-B}. Observe that we can take $n = m(2r+6) = 2m(2m + |V| + 3)$. Let us also set $N = mr + 2m -1$. \begin{figure} [h] \centering \includegraphics[width=0.4\linewidth]{gadget-G_e1} \caption{The gadget $G_e'$} \label{gadget-G_e'} \end{figure} \begin{figure} [h] \centering \includegraphics[width=0.8\linewidth]{gadget-G_e} \caption{The gadget $G_e$} \label{gadget-G_e} \end{figure} \begin{figure} [h] \centering \includegraphics[width=0.7\linewidth]{board-2xn-inf-B} \caption{The board B} \label{board-2xn-inf-B} \end{figure} In the two following lemmas, we show that we can flood this board $B_G$ in $N + k$ steps if and only if $G$ has a vertex cover of size at most $k$. \begin{lma} If $G$ has a vertex cover of size at most $k$, then we can flood the board $B_G$ in $N + k$ steps. 
\label{vc=>strat} \end{lma} \begin{proof} First observe that, if $e=uv$, then with $(r+1)$ moves we can flood the gadget $G_e$, except for a single island of colour $c(e) \in \{u,v\}$, so that it is monochromatic in colour $x_r^e$: first play a single move to make all of $G_e'$ except for a single island monochromatic, then play colours $x_1^e, \ldots, x_r^e$ in this central component. Ignoring the islands for the moment, the components corresponding to each $G_e$ now have distinct colours, so we can link these components with a minimum of $m-1$ moves. Finally, we need to flood the islands, and this requires exactly $|\{c(e): e \in E\}|$ moves. But we know that $G$ has a vertex cover of size at most $k$, say $V'$. By the definition of a vertex cover, if the gadget $G_e'$ uses colours $u$ and $v$, then at least one of $u,v \in V'$. So for each $G_e$, we may choose to leave an island of colour $d$ where $d \in V'$. Following this strategy, we are left in the final stage with islands of at most $k$ distinct colours, and can flood these in $k$ steps (by cycling through each colour in turn in the external monochromatic component). Hence we can flood $B_G$ in $N + k$ steps. \end{proof} \begin{lma} If we can flood $B_G$ in $N+k$ steps (for some $0 \leq k \leq |V|$), then $G$ has a vertex cover of size at most $k$. \label{strat=>vc} \end{lma} \begin{proof} Suppose the sequence $S$ floods $B_G$, where $|S| = N+k$. Observe that, if we contract monochromatic components of the coloured graph corresponding to $B_G$, we obtain a tree $T$. Let $P$ be the unique path in $T$ joining the two vertices in $T$ that correspond to the monochromatic components incident with opposite ends of the board and note that, by Lemma \ref{no-leaf-moves}, we may assume that all moves of $S$ are played in $\bare(T) \subseteq P$. Moreover, $S$ must flood $P$ when played in this isolated path. We will say that a component of colour $d$ is \emph{eliminated} by the move $\alpha$ if $\alpha$ changes the colour of that component, linking it to an adjacent component of colour $d' \neq d$. We say that $\alpha$ \emph{eliminates the colour $d$} if it removes the last component of colour $d$ remaining in the graph. Suppose that, for some $v \in V$, a single move $\alpha \in S$ eliminates two components $A_1$ and $A_2$ of $P$ that both have colour $v$ initially. $A_1$ and $A_2$ cannot belong to the same gadget $G_e$ so, for $\alpha$ to eliminate them both, the moves played in $S$ before $\alpha$ must create a single monochromatic component $A$ containing both $A_1$ and $A_2$. Such a component $A$ must contain $i \geq 2r$ different colours under the original colouring, and so $S$ must include at least $i-1$ moves played in this section of the path before $\alpha$ (so in total at least $i$ moves of $S$ are played in $A$). But there are at least $mr - (i - 2r)$ colours on the path outside $A$ (as at least $2r$ of the colours in $A$ must also appear outside $A$), and all but one of these must be eliminated by moves played outside $A$. This gives $$|S| \geq mr + 2r - 1 > N + |V|,$$ a contradiction. So we may assume that no move in $S$ eliminates more than one component that originally has colour $v \in V$. Now consider the leaves of $T$, and let $\bar{S}$ be the set of moves in $S$ that eliminate the second leaf in each $G_e'$. Suppose that one leaf in $G_e'$ has already been eliminated, and that the move $\alpha \in \bar{S}$ removes the second leaf. 
Since one leaf has already been eliminated, no components in $G_e \cap P$ which originally had a colour $v \in V$ still have colour $v$. Suppose that $\alpha$ reduces the number of monochromatic components on $P$. By the reasoning above, if $\alpha$ links $G_e'$ to another component outside $G_e$ that originally had colour $v$, we would obtain $|S| > N + |V|$, a contradiction; so in fact $\alpha$ must link $G_e'$ to a component in $G_e$ whose colour was previously changed to $v$ by some move $\beta$; such a move $\beta$ could not decrease the number of monochromatic components of $P$. Thus, for every $\alpha \in \bar{S}$, there is at least one move of $S$ that does not decrease the number of monochromatic components of $P$. Hence we see that $$|S| \geq mr + 2m - 1 + |\bar{S}| = N + |\bar{S}|,$$ and so $|\bar{S}| \leq k$. However, we know that $\bar{S}$ eliminates at least one leaf from every $G_e'$, and clearly each move in $\bar{S}$ can eliminate leaves of only one colour. Hence there exists some set $C' \subset V$ such that $|C'| \leq |\bar{S}| \leq k$ and at least one leaf in each $G_e'$ has a colour from $C'$. In other words, $|C'| \leq k$, and for every edge $uv \in E$, $\{u,v\} \cap C' \neq \emptyset$, so $C'$ is in fact a vertex cover for $G$ of size at most $k$. \end{proof} \begin{proof}[Proof of Theorem \ref{2xnNP}] The reduction from Vertex Cover is immediate from Lemmas \ref{vc=>strat} and \ref{strat=>vc}. \end{proof} \section{Conclusions and open problems} We have demonstrated an algorithm which shows that the problem $c$-\textsc{Free-Flood-It}, restricted to $2 \times n$ boards, is fixed parameter tractable with parameter $c$, and on the other hand we have shown that \textsc{Free-Flood-It} remains NP-hard in this setting. This answers an open question from \cite{clifford}, in which Clifford, Jalsenius, Montanaro and Sach showed that \textsc{Fixed-Flood-It} can be solved in time $O(n)$ on such boards. Our results therefore give the first example of a class of graphs on which the complexity status of the fixed and free versions of the game differs. Together with results from \cite{clifford} and \cite{general}, this almost completes the picture for the complexity of flood-filling problems restricted to $k \times n$ boards. However, there does remain one open case: \begin{prob} What are the complexities of 3-\textsc{Fixed-Flood-It} and 3-\textsc{Free-Flood-It} restricted to $k \times n$ boards, in the case that $k \geq 3$ is a fixed integer? \end{prob} Another interesting direction for further research would be to consider extremal flood-filling problems in this setting. \begin{prob} What colourings of a $k \times n$ board $B$ with $c$ colours give the maximum value of $m(B)$? \end{prob} As a first step, it should not be hard to determine the maximum value of $m(B)$ for a $1 \times n$ board. Such questions can also be generalised to arbitrary graphs, leading to two more natural questions. \begin{prob} Given a graph $G$ and an integer $c \geq \chi(G)$, what proper colourings $\omega$ of $G$ with exactly $c$ colours maximise $m(G,\omega)$? \end{prob} \begin{prob} Given a graph $G$, what proper colourings $\omega$ minimise $m(G,\omega)$? Do such colourings necessarily use exactly $\chi(G)$ colours? \end{prob}
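In the spirit of the first of these questions, the $1 \times n$ case mentioned above can be explored by brute force. The following Python sketch (ours, purely illustrative; it enumerates all colourings and is therefore practical only for very small $n$ and $c$) computes the free-game value $m(B)$ of a $1 \times n$ board by breadth-first search over its sequences of maximal monochromatic runs, and maximises over all boards: \begin{verbatim}
from collections import deque
from itertools import product

def compress(seq):
    # collapse a 1 x n board into its maximal monochromatic runs
    out = []
    for x in seq:
        if not out or out[-1] != x:
            out.append(x)
    return tuple(out)

def m_free(board, colours):
    # minimum number of FREE Flood-It moves flooding a 1 x n board:
    # breadth-first search over compressed states
    start = compress(board)
    if len(start) == 1:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        state, dist = queue.popleft()
        for i in range(len(state)):      # component to play in
            for c in colours:            # colour to play
                if c == state[i]:
                    continue
                nxt = compress(state[:i] + (c,) + state[i + 1:])
                if len(nxt) == 1:
                    return dist + 1
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))

def max_m(n, c):
    # extremal value max_B m(B) over all 1 x n boards with c colours
    colours = tuple(range(c))
    return max(m_free(b, colours) for b in product(colours, repeat=n))

print(max_m(6, 3))
\end{verbatim}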
\section{Introduction} The dynamics of quantum metamaterials \cite{macha2014implementation, PhysRevLett.117.210503, braumuller2017analog, Zagoskin2016, LAZARIDES20181, jung2014progress,fistul2017quantum,Shapiro2015, shapiro2015dispersive,GREENBERG2019300} is a subject of great interest. These metamaterials are hybrid systems in which cavity photons interact with a multi-qubit environment. The behavior of such systems is captured by the Dicke model \cite{emary2003chaos, brandes2005coherent, kirton2018introduction}. The interaction can be characterized by a collective Rabi frequency proportional to the product of the individual qubit-cavity coupling constant and the square root of the qubit number. If the Rabi frequency is larger than a certain value, then the superradiant phase transition, characterized by the emergence of a large photon number in the cavity and a finite order parameter, occurs for temperatures lower than a critical value. A rigorous study of the superradiant phase transition was proposed in the pioneering work of Fedotov and Popov \cite{popov1988functional}. These authors proposed a semi-fermion parametrization of spin operators and described the phase transition in the framework of the Matsubara effective action for the photon field. In that work the chemical potential was assumed to be zero and, consequently, the number of excitations was not constrained. The case of a finite chemical potential in the Dicke model was addressed in Refs.~\cite{eastham2001bose, eastham2006finite}, where it was shown that Bose condensation of polaritons emerges \cite{popov1988functional,eastham2006finite}. The Keldysh diagrammatic approach to finite-$N$ corrections, as well as to the effects of dissipation and external driving, was developed in Refs.~\cite{dalla2013keldysh, PhysRevA.94.061802}. A zero-temperature description in the limit of a large excitation number was obtained in Ref.~\cite{Pogosov_2017} by means of the Bethe-ansatz technique. Alternatively to the temperature-driven transition discussed in Ref.~\cite{popov1988functional}, the superradiance can be turned on by an increase of the interaction strength. It takes place if the Rabi frequency exceeds a critical value. Using the interaction energy as a control parameter is possible for quantum metamaterials such as superconducting qubit arrays \cite{macha2014implementation, PhysRevLett.117.210503, Shulga2017, Zhang2017} integrated with a GHz transmission line via tunable couplers \cite{Srinivasan2011,Hoffman2011,Chen2014,Zeytinouglu2015}. Also, this may be done in hybrid systems with a controllable number of nitrogen-vacancy (NV) centers in a diamond sample which interact with an electromagnetic field \cite{Dutt2007,Sandner2012,Putz2014,Angerer2018}. In the present paper we address the situation where the Rabi frequency in a quantum metamaterial is varied from the weak to the ultra-strong coupling domain while the temperature remains constant. We also keep a constant number of qubits $N$, assuming that $N$ is large but finite. It is implied that the loss rate in the cavity is small. The finiteness of $N$ in our consideration means that the superradiant transition is smoothed by the fluctuations of the order parameter and, besides that, by the thermal fluctuations of polariton quasiparticles. The aim of this work is (i) to describe fluctuations of the above two types and (ii) to formulate the full counting statistics for the photon number in this regime. 
Our main results are the explicit expressions for the average photon number, its fluctuations and full counting statistics as functions of the collective Rabi frequency. The proposed formalism provides a solution for low temperature $T$ and large $N$ provided they satisfy the conditions $\hbar\omega\gg k_{\rm B}T\gg \hbar\omega/N$ (in this case all qubits are assumed to be in resonance with the photon mode of frequency $\omega$). Generalizations to the high-temperature limit, $k_{\rm B}T\gg\hbar\omega$, and to the dispersive regime, where the spectral density of qubit energies is strongly broadened, are also discussed. The paper is organized as follows. In Sec. \ref{sec:path_int} we present the Matsubara action for the Dicke model, where the qubit degrees of freedom are expressed through Majorana and complex fermion variables. This is one of the possible representations of the Pauli operators acting in the Hilbert space of a two-level system. Sec. \ref{sec:eff_action} has a methodological character. We derive the photon mode's effective action, which was obtained in previous works~\cite{popov1988functional,eastham2006finite}, by means of an alternative technique with Majorana fermions. In Sec. \ref{sec:n_ph} we present the general expressions for the average photon number and its fluctuations in the resonant limit. In Sec. \ref{sec:ph_tr} we discuss fluctuational and statistical properties and present a comparison with the results of exact numerical simulations at finite temperature and a qubit number of the order of ten. In Sec. \ref{sec:generalization} the results are generalized to high temperatures and to inhomogeneous broadening in the qubit ensemble. In Sec. \ref{sec:fcs} the cumulant generating function for the photon number is derived. In Sec. \ref{sec:concl} we conclude. In the Appendix \ref{app-corr} we derive the conditions under which our solution, based on the Gaussian approximation for thermal fluctuations, is justified. \section{Path integral formulation}\label{sec:path_int} The Dicke Hamiltonian of $N$ qubits reads (we set $\hbar=1$ and $k_{\rm B}=1$ throughout the text): \begin{equation} \hat H =\omega\hat \psi^\dagger\hat \psi+\sum\limits_{j=1}^N \frac{\epsilon_j}{2}\hat\sigma^z_j+ \sum\limits_{j=1}^N g_j (\hat\psi\hat\sigma^+_j + \hat\psi^\dagger\hat\sigma^-_j). \label{h-rwa} \end{equation} Here $\epsilon_j$ are the qubit excitation energies, and $g_j$ are the individual coupling strengths between the $j$-th qubit and the photon field in a single-mode cavity. The fundamental frequency of the photon mode is $\omega$. The coupling term is introduced in the standard rotating wave approximation. In a path integral formulation the photon mode is described by conventional complex bosonic fields $\bar\psi,\psi$. The Pauli operators, $\hat \sigma^\pm_j,\hat \sigma^z_j$, acting on the $j$-th qubit degrees of freedom, may be represented in path integrals in different ways: by the bosonic Holstein-Primakoff representation \cite{PhysRev.58.1098} or by bilinear forms of fermions. Concerning other fermion representations for the Dicke model, techniques based on semi-fermions with an imaginary chemical potential \cite{popov1988functional} or an auxiliary boson field \cite{eastham2001bose} were employed. These representations allow one to eliminate the emergent unphysical states and to reduce the Hilbert space to that of a spin-1/2. The semi-fermion representation for spin operators was generalized to the Keldysh technique in Ref.~\cite{PhysRevLett.85.5631}. 
Another fermion representation, which we choose for our calculations, is given by the product of a complex $\hat c_j\neq \hat c^\dagger_j$ and a Majorana $\hat d_j=\hat d_j^\dagger$ fermion operator~\cite{martin1959generalized,tsvelik2007quantum}: \begin{equation} \hat\sigma^+_j=\sqrt{2}\hat c^\dagger_j \hat d_j, \quad \hat \sigma^-_j=\sqrt{2}\hat d_j\hat c_j . \label{fermionization} \end{equation} They correspond to three Grassmann fields $\bar c ,c$ and $d$ in a path integral formalism. The use of the Majorana fermion allows one to avoid auxiliary constraints in the action. The fields $\bar c, c$ are related to a usual complex fermion mode with the excitation energy of the two-level system. The field $d$ stands for a Majorana zero-energy mode with $\langle \hat d^2\rangle=1/2$. The Majorana representation of spin operators has been applied to the spin-boson model \cite{SCHAD2015401,PhysRevB.93.174420} and to a description of spin-spin interactions via a helical Luttinger liquid \cite{PhysRevLett.120.147201}. Recently, this fermionization has been applied to the Dicke model with counter-rotating terms in the interaction Hamiltonian, and a regime of quantum chaos has been studied \cite{1808.02038v2}. In our studies, which are focused on the fluctuation-dominated regime near the superradiant phase transition and on the behavior at finite $N$, the Majorana representation appears as a convenient tool. Below we demonstrate how one can obtain the effective action for the photon field with the use of the fermionization (\ref{fermionization}). The starting point of such a consideration is the path integral formulation of the partition function $Z$ in terms of the boson complex fields $\Psi_\tau=[\bar\psi_\tau,\psi_\tau]$ and the fermion fields $\bar c ,c,d$~\cite{kamenev2011field}: \begin{equation} Z= \int \mathcal{D}[\Psi, \bar c ,c,d ] \exp(- S[\Psi, \bar c ,c,d ]) \label{Z} \end{equation} where the action is \begin{multline}S [\Psi, \bar c ,c,d ]=S_{\rm ph}[\Psi ] + S_{\rm q} [ \bar c ,c,d ] + \\ + S_{\rm int}[\Psi, \bar c ,c,d ]+\ln Z_{\rm ph}Z_{\rm q} \ . \label{S} \end{multline} Here $S_{\rm ph}[\Psi ] $, $S_{\rm q} [ \bar c ,c,d ] $ and $S_{\rm int}[\Psi, \bar c ,c,d ]$ are the Matsubara actions of the photon mode, the qubit environment and their interaction, respectively. The last term $\ln Z_{\rm ph}Z_{\rm q}$ appears due to the normalization of $Z$ to unity in the decoupled limit $g_j \to 0$. Below we consider the terms in (\ref{S}) in more detail. Both the qubit and photon subsystems are assumed to be in thermal equilibrium at the temperature $T$. The photon mode action, defined on the imaginary time interval $\tau\in [0,\beta]$, where $\beta=1/T$, is \begin{equation} S_{\rm ph}[\Psi] =\int\limits_0^\beta \bar \psi_\tau(-G_{{\rm ph}; \tau-\tau'}^{-1})\psi_{\tau'} \ d\tau d\tau' , \label{Sph} \end{equation} where the inverse Green function of the free photon mode is \begin{equation} \quad G_{{\rm ph}; \tau-\tau'}^{-1}=\delta_{\tau-\tau'}(-\partial_{\tau'}-\omega) \ . \end{equation} The Fourier transformations from $\tau $ to Matsubara bosonic frequencies $\omega_n=2\pi n T$ are defined for the fields and for the Green functions as \begin{equation} \psi_n=T \int\limits_0^\beta \psi_\tau e^{{\rm i}2\pi n T \tau} d\tau, \ \bar\psi_n=T \int\limits_0^\beta \bar\psi_\tau e^{-{\rm i}2\pi n T \tau} d\tau \end{equation} and \begin{equation} G_{{\rm ph}; n}^{-1}= \int\limits_0^\beta G_{{\rm ph};\tau }^{-1} e^{{\rm i}2\pi n T \tau} d\tau={\rm i}2\pi n T -\omega. 
\end{equation} In this representation the photon mode action (\ref{Sph}) is transformed into \begin{equation} S_{\rm ph}[\Psi] =\beta\sum\limits_{n }\bar\psi_n (-G_{{\rm ph}; n}^{-1} ) \psi_n . \label{Sph-1} \end{equation} The qubit ensemble action is \begin{multline} S_{\rm q}[ \bar c ,c,d ] \\ =\frac{1}{2}\sum\limits_{j=1}^N\int\limits_0^\beta \begin{bmatrix} \bar c_j & c_j & d_j \end{bmatrix} (-\mathbf{G}_{j;\tau-\tau'}^{-1} ) \begin{bmatrix} c_j \\ \bar c_j \\ d_j \end{bmatrix} d\tau d\tau' \ . \label{Sq} \end{multline} The matrix $\mathbf{G}_{j;\tau-\tau'}^{-1}$ describes the $j$-th qubit. It contains the inverse Green functions for the $j$-th complex fermion and its conjugate, with the energies $\pm \epsilon_j$ respectively, and for the Majorana fermion of zero energy: \begin{equation} - \mathbf{G}_{j;\tau-\tau'}^{-1}=\delta_{\tau-\tau'} \begin{bmatrix} \partial_{\tau'} + \epsilon_j && 0 && 0 \\ \\ 0 && \partial_{\tau'} -\epsilon_j && 0 \\ \\ 0 && 0 && \partial_{\tau'} \end{bmatrix} \ .\label{G} \end{equation} Note that the corresponding Fourier transformation of the fields $\bar c_\tau, c_\tau$ and $d_\tau$ and of the elements of $\mathbf{G}_{j;\tau-\tau'}$ assumes the fermionic frequencies $\omega_n=2\pi n T+\pi T$. Bilinear forms $c_j d_j$ and $\bar c_j d_j$ appear in $S [\Psi, \bar c ,c,d ]$ due to the qubit-cavity coupling encoded by the matrix $\mathbf{V}_j[\Psi_\tau]$: \begin{multline} S_{\rm int}[\Psi, \bar c ,c,d ]\\ = \frac{1}{2}\sum\limits_{j=1}^N\int\limits_0^\beta \begin{bmatrix} \bar c_j & c_j & d_j \end{bmatrix} \delta_{\tau-\tau'}\mathbf{V}_j [\Psi_\tau] \begin{bmatrix} c_j \\ \bar c_j \\ d_j \end{bmatrix} d\tau d\tau' . \label{Sint} \end{multline} This is the matrix which involves the complex boson fields $\psi_{\tau}$, $\bar\psi_{\tau}$ as follows: \begin{equation} \mathbf{V}_j[\Psi_\tau] =\sqrt{2}g_j \begin{bmatrix} 0 && 0 && -\psi_\tau \\ \\ 0 && 0 && \bar\psi_\tau \\ \\ -\bar\psi_\tau && \psi_\tau && 0 \end{bmatrix}.\label{V} \end{equation} The normalization term in (\ref{S}) is the product of the partition functions of the non-interacting photon mode and $N$ qubits. The logarithms of their partition functions $ Z_{\rm ph}=\int \mathcal{D}[\Psi ] \exp(-S_{\rm ph}[\Psi] )$ and $Z_{\rm q}=\int \mathcal{D}[ \bar c ,c,d ] \exp( - S_{\rm q}[\bar c ,c,d ])$ are the following: \begin{equation} \ln Z_{{\rm ph} } =- {\rm Tr}\ln (- G ^{-1}_{{\rm ph};\tau-\tau'}) \end{equation} and \begin{equation} \ln Z_{{\rm q} } =\frac{1}{2}\sum\limits_{j=1}^{N}{\rm Tr} \ln (-\mathbf{G}^{-1}_{j;\tau-\tau'}) \ . \end{equation} The prefactor of \sfrac{1}{2} results from the integration over Grassmann variables in the representation (\ref{Sq}). The symbol ``${\rm Tr}$'' means the trace taken over the imaginary time variables or, equivalently, over the Matsubara frequency index $n$; in the case of qubits, an additional trace is taken over the internal $3\times 3$ structure of the matrix $\mathbf{G}_{j}$. \section{Effective action}\label{sec:eff_action} To derive the effective action for the photon field, $S_{\rm eff}[\Psi]$, from the full one $S [\Psi, \bar c ,c,d ]$, we start by integrating over the fermion modes $c_j, \bar c_j$ and $d_j$. As a result, the path integral in the partition function is reduced to $ Z=\int D[ \Psi] e^{-S_{\rm eff}[\Psi]} $ where the effective action is obtained in the most general form \begin{multline} S_{\rm eff}[\Psi]=S_{\rm ph}[\Psi] +\ln Z_{\rm ph}Z_{\rm q}-\\ -\frac{1}{2} {\rm Tr} \ln (-\mathbf{G}^{-1}_{j;\tau-\tau'} +\delta_{\tau-\tau'} \mathbf{V}_j[\Psi_\tau]). 
\label{s_eff_1} \end{multline} Expanding the logarithm in the last term of (\ref{s_eff_1}) we obtain that all odd-order terms are equal to zero. This follows from the diagonal and non-diagonal structures of $\mathbf{G}_j$ and $\mathbf{V}_j$, respectively. Resumming the remaining even-order terms gives the identity: \begin{multline} {\rm Tr} \ln (-\mathbf{G}^{-1}_{j;\tau-\tau'} +\delta_{\tau-\tau'} \mathbf{V}_j[\Psi_\tau])=\ln Z_{{\rm q} }+\\ +\frac{1}{2}{\rm Tr} \ln (-\mathbf{G}^{-1}_{j;\tau-\tau'} + \mathbf{V}_j[\Psi_\tau]\mathbf{G}_{j;\tau-\tau'} \mathbf{V}_j[\Psi_{\tau'}]) \ . \label{tr_log} \end{multline} A direct first-order expansion of the logarithm in the second line of (\ref{tr_log}) in $\mathbf{V}[\Psi_\tau] \mathbf{G}_{\tau-\tau'} \mathbf{V}[\Psi_{\tau'}]$ provides a Gaussian action for all Matsubara modes $\bar\psi_n$, $\psi_n$. As will be shown in Sec. \ref{sec:n_ph}, this expansion results in a divergent photon number at the critical Rabi frequency near the transition into the superradiant phase (see Eq.~\ref{NphGauss}). This follows from an infinite occupation of the zeroth Matsubara frequency component of the field \begin{equation} \psi_0\equiv T\int\limits_0^\beta\psi_\tau d\tau \ . \end{equation} To obtain a correct description of the photonic subsystem we should keep $\psi_0$ in the zeroth-order term of (\ref{tr_log}) and expand the logarithm in the fluctuations $\delta\psi_\tau\equiv \psi_\tau- \psi_0$. This results in an effective regularization of the divergence. Note that the Fourier transformation of $\delta\psi_\tau$ gives the non-zero Matsubara components $\psi_{n\neq 0}$. The field $\psi_0$ is related to the complex amplitude of the superradiant order parameter while the $\psi_{n\neq 0}$ are related to thermal fluctuations of polaritonic quasiparticles. The regularization of the divergence mentioned above assumes a redefinition of the Green function, $\mathbf{G}_{j}\to \mathcal{G}_j[\Psi_0]$ with $ \Psi_0=[\bar\psi_0,\psi_0]$, as follows: \begin{equation} \mathcal{G}^{-1}_{j;\tau-\tau'}[\Psi_0]\equiv \mathbf{G}^{-1}_{j;\tau-\tau'} - \mathbf{V}_j[\Psi_0]\mathbf{G}_{j;\tau-\tau'} \mathbf{V}_j[\Psi_0] \ . \end{equation} Here we introduce the matrix with zero-mode components \begin{equation} \mathbf{V}_j[\Psi_0]=\frac{1}{\beta}\int\limits_0^\beta \mathbf{V}_j[\Psi_\tau] d\tau \ . \end{equation} Below we limit our consideration of the fluctuations to bilinear combinations of the fields $\delta\bar\psi_\tau $ and $\delta\psi_{\tau'}$. These are the gauge invariant terms $\delta\bar\psi_\tau\delta\psi_{\tau'}$ which provide the normal coupling channel between the photons in the dissipative action $S_\Sigma$. In contrast, the terms $\delta\bar\psi_\tau\delta\bar\psi_{\tau'}$ and $ \delta\psi_\tau\delta\psi_{\tau'}$ are not gauge invariant and provide the anomalous type of coupling. At this step of the derivation we expand the logarithm in (\ref{tr_log}) around $\mathcal{G}^{-1}$ to second order in the matrix \begin{equation} \mathbf{V}_j[\delta\Psi_\tau]\equiv\mathbf{V}_j[\Psi_\tau] - \mathbf{V}[\Psi_0] \ , \end{equation} which involves the fluctuating parts in $\delta\Psi_\tau=[\delta\bar\psi_\tau,\delta\psi_\tau]$. We note that the first-order contribution in $\mathbf{V}_j[\delta\Psi_\tau]$ equals zero in this approach. As a result, we obtain \begin{equation} S_{\rm eff}[\Psi]=S_{\rm ph}[\Psi]+S_{\mathcal{G}}[\Psi_0]+S_{\rm \Sigma}[\Psi]+\ln Z_{\rm ph} \ . \label{s_eff} \end{equation} The first term $S_{\rm ph}$ in (\ref{s_eff}) is not changed. 
The second term $S_{\mathcal{G}}[\Psi_0]\equiv -\frac{1}{4}\sum_j{\rm Tr} \ln\left( \mathbf{G}_j\mathcal{G}_j^{-1}[\Psi_0]\right)$ involves the zero-frequency mode $\Psi_0$ only. Note that in the Dicke model (\ref{h-rwa}) the interaction is limited by the rotating wave approximation, which conserves the excitation number. In this case $S_{\mathcal{G}}$ depends on the zero mode's magnitude squared, $\Phi\equiv \bar\psi_0\psi_0$, and is independent of its complex phase $\varphi \equiv \arg \psi_0$. Thus, $S_{\mathcal{G}}[\Psi_0]= S_{\mathcal{G}}[\Phi] $ and its explicit expression is \begin{equation} S_{\mathcal{G}}[\Phi]= - \sum\limits_{j=1}^N \ln \frac{\cosh\frac{\sqrt{\epsilon_j^2+4g_j^2 \Phi }}{2T}}{\cosh\frac{ \epsilon_j } {2T}} \ . \label{s-zm} \end{equation} This result follows from a representation of the Green functions $\mathbf{G}$ and $\mathcal{G}$ in Matsubara frequencies $\omega_n$. After that, $S_{\mathcal{G}}$ is reduced to the calculation of an infinite product over $n$. The third term in (\ref{s_eff}), quadratic in the quasiparticle fluctuations, reads \begin{multline}S_{\Sigma}[\Psi]=\\ \frac{\beta}{2}\sum\limits_{n\neq 0}\begin{bmatrix} \bar\psi_n & \psi_{-n} \end{bmatrix} \begin{bmatrix} \Sigma_n[\Psi_0] && \tilde\Sigma_n[\Psi_0] \\ \\ (\tilde\Sigma_{-n}[\Psi_0])^* && \Sigma_{-n}[\Psi_0] \end{bmatrix} \begin{bmatrix} \psi_n \\ \\ \bar\psi_{-n} \end{bmatrix} \ . \label{s-fl} \end{multline} This is the dissipative part of the action; it corresponds to an effective photon-photon interaction via the qubit degrees of freedom. The self-energy operators $\Sigma_{\tau }[ \Psi_0 ]$ and $\tilde\Sigma_{\tau }[ \Psi_0 ]$ provide the normal and anomalous channels of the photon-photon interactions, respectively. They result from a summation over the fermionic Matsubara frequencies. From the calculations it follows that the normal self-energy depends on $\Phi$ only, $\Sigma[\Psi_0]=\Sigma[\Phi]$, while the anomalous one depends also on the phase, i.e., $\tilde\Sigma[\Psi_0]\equiv \tilde\Sigma[\Phi,\varphi]$. Their explicit expressions are presented in the Appendix \ref{app-corr}, see Eqs.~(\ref{sigma-normal}) and (\ref{sigma-anomal}). The above results for $S_{\mathcal{G}}$ and $S_{\rm \Sigma}$ are in full correspondence with those derived in Refs.~\cite{popov1988functional,eastham2006finite} using alternative spin representations. The action $S_{\rm eff}$ allows one to calculate the thermodynamic average value $\langle\Phi\rangle$, which is the superradiant order parameter. As shown below, the action describes the superradiance as a second-order phase transition. The quadratic expansion in $\Phi$ of $S_{\mathcal{G}}[\Phi]$ allows one to capture this transition (it corresponds to taking into account the non-Gaussian term $|\psi_0|^4$). As a consequence, if the system is in the normal phase or near the phase transition, one can simplify $S_{\mathcal{G}}$ and $S_{\rm \Sigma}$ assuming that the relevant values of $\Phi$ belong to a certain region near $\Phi=0$. Namely, the analytical calculations presented in this work assume a second-order expansion in $\Phi$ in $S_{\mathcal{G}}$ and neglect the non-Gaussian cross terms $\propto \Phi\bar\psi_n\psi_n $ in $S_{\rm \Sigma}$. For the zero-mode part this means \begin{equation}S_{\mathcal{G}}[\Phi]\approx\Phi S_{\mathcal{G}}'[0]+\frac{1}{2}\Phi^2 S_{\mathcal{G}}''[0] \ . 
\label{s_zm_quadratic} \end{equation} For the $S_{\rm \Sigma}$ part it means that one can neglect the dependence of the self-energies on $\Phi$ and assume \begin{equation} S_{\rm \Sigma}[\Psi]\approx S_{\rm \Sigma}[\Phi{=}0,\delta\Psi] \ . \label{s-fl-simplif} \end{equation} We obtain that this approximation involves the normal coupling only, i.e., \begin{equation} S_{\rm \Sigma}[\Phi{=}0,\delta\Psi] = \beta\sum\limits_{n\neq 0} \Sigma_n[0] \bar\psi_n\psi_n \ . \label{s-fl-simplif-1} \end{equation} It follows from (\ref{sigma-anomal}) that $ \tilde\Sigma_n[\Phi,\varphi]\propto \Phi \ $ for small $\Phi$ and, hence, the anomalous terms do not appear in (\ref{s-fl-simplif}). Note that $S_{\rm \Sigma}$ is purely Gaussian in this case because the terms proportional to $\Phi \bar\psi_n\psi_n$ are neglected. The validity of the approximations (\ref{s_zm_quadratic}) and (\ref{s-fl-simplif}) is analyzed in the Appendix \ref{app-corr} by means of the effective action for the zero Matsubara mode, see Eq. (\ref{app:s-0}). This action is obtained after the Gaussian integration over all non-zero modes $\psi_{n\neq 0}$ in $S_{\rm eff}$ from (\ref{s_eff}). The contributions to the self-energies linear in $\Phi$, $\Sigma_n[ \Phi ]\approx\Sigma_n[ 0 ]+\Phi\Sigma'_n[ 0 ] $ and $\tilde\Sigma_n[\Phi,\varphi]\propto \Phi$, are investigated as perturbations to the action (\ref{app:s-0}). It is shown that such perturbations are small and the approximations (\ref{s_zm_quadratic}) and (\ref{s-fl-simplif}) are justified if the condition on the temperature and qubit number \begin{equation} T\gg \frac{\omega}{N} \label{condition-0} \end{equation} holds. It is assumed here that the qubit and photon mode frequencies are of the same order, $\epsilon_j\sim \omega$. The condition (\ref{condition-0}) also provides the range of parameters where one can go beyond the thermodynamic limit and study finite-$N$ effects. The thermodynamic limit, where fluctuations of the order parameter are negligible as $1/N$, corresponds to the simultaneous limits $N\to \infty$ and $g\to 0$ with the constraint $\sqrt{N}g={\rm const}$. As is also shown in Appendix \ref{app-corr}, the ratio \begin{equation}\kappa_{\rm c}=\sqrt{\frac{\omega}{NT}}\ll 1 \label{kappa} \end{equation} provides the small parameter of this theory near the superradiant transition, which allows one to neglect non-Gaussian terms in a controllable way. To summarize the above, for a large enough qubit number dictated by (\ref{condition-0}), we obtain an effective theory for low temperatures, $\omega\gg T\gg \omega/N$. The high-temperature domain corresponds to $T\gg\omega$. The two approximations (\ref{s-fl-simplif}) and (\ref{s_zm_quadratic}) yield the effective action $S_{\rm eff,0}$, which provides a description of the normal phase and of the fluctuational region near the superradiant transition. It is convenient to represent it as \begin{equation} S_{\rm eff,0}[\Phi,\bar\psi_n,\psi_n]=S_{\rm zm}[\Phi]+S_{\rm fl}[\bar\psi_n,\psi_n]+\ln Z_{\rm ph} \ . \label{s_eff0} \end{equation} where the zero-mode terms are collected in \begin{equation} S_{\rm zm}[\Phi]=A\Phi+{\it \Gamma} \Phi^2 \ \end{equation} and those of the quasiparticle fluctuations in \begin{equation} S_{\rm fl}[\bar\psi_n,\psi_n]=\beta\sum\limits_{n\neq 0}(-{\rm i}2\pi n T+\omega+\Sigma_n[0])\bar\psi_n\psi_n \ . 
\label{s-fl-main} \end{equation} The parameters in the general case are: \begin{equation} A=\beta \omega- \beta \sum\limits_{j=1}^N\frac{g^2_j}{\epsilon_j}\tanh\frac{\beta\epsilon_j}{2} \ , \label{alpha} \end{equation} \begin{equation} {\it \Gamma}= \beta \sum\limits_{j=1}^Ng^4_j\frac{\sinh \beta\epsilon_j - \beta\epsilon_j}{\epsilon_j^3(\cosh\beta\epsilon_j +1)}, \label{gamma} \end{equation} and \begin{equation} \Sigma_n[ 0 ]= \sum\limits_{j=1}^N \frac{g^2_j\tanh\frac{\epsilon_j}{2T}}{2 {\rm i}\pi n T - \epsilon_j}. \label{sigma-0} \end{equation} In the above formulation, the critical point is $A=0$. For $A>0$ the system is in the normal phase, and for $A<0$ a superradiant phase with a large number of photons emerges. In other words, if $A<0$ then $S_{\rm eff,0}$ has a minimum at the stationary point $\Phi=\Phi^*$ with \begin{equation}\Phi^*=-\frac{A}{2{\it \Gamma}} \ .\label{saddle-point-0} \end{equation} In terms of the complex photon field $\psi_0$ this corresponds to a saddle line which is a circle in the complex plane. The control parameter of the phase transition is the collective Rabi frequency defined as \begin{equation} \Omega=\sqrt{N\langle g^2\rangle_j} ,\qquad \langle g^2\rangle_j = \frac{1}{N}\sum_{j=1}^N g_j^2 \ , \end{equation} where $\langle \cdot \rangle_j$ denotes the average over the qubit ensemble. The superradiance condition $A<0$ corresponds to the Rabi frequency exceeding a certain critical value, $\Omega>\Omega_{\rm c}$. In the homogeneous limit where all qubits have the same energy, $\epsilon_j=\bar\epsilon$, the saddle point (\ref{saddle-point-0}) is given by \begin{equation} \Phi^*=N\left(\frac{\bar\epsilon}{\Omega}\right)^2\left(\tanh\frac{\bar\epsilon }{2T}- \frac{\bar\epsilon\omega}{\Omega^2}\right) \frac{1+\cosh\beta\bar\epsilon}{\sinh\beta\bar\epsilon-\beta\bar\epsilon} . \label{saddle-point} \end{equation} The critical Rabi frequency of the phase transition follows from the condition $\Phi^*=0$. From (\ref{saddle-point}) one finds that \begin{equation} \Omega_{\rm c}= \sqrt{\bar\epsilon\omega \coth\frac{ \bar\epsilon }{2T}}. \label{omega-c} \end{equation} We also introduce the action \begin{equation}S_{\rm eff,1}[\Psi]= S_{\rm ph}[\Psi]+S_{\rm fl}[\bar\psi_n,\psi_n]+ S_{\mathcal{G}} [\Phi] \label{s_eff1} \end{equation} where, in contrast to $S_{\rm eff,0}[\Psi]$ in (\ref{s_eff0}), the zero mode's part (\ref{s-zm}) is taken into account exactly and its logarithm is not expanded. This action provides an adequate description of the superradiant phase where $\langle\Phi\rangle$ is large. The calculations in this regime combine the exact integration over $\bar\psi_n$ and $\psi_n$ with a numerical integration over $\Phi$. In what follows we focus mainly on the transition between the normal phase and the fluctuational region employing the formalism of $S_{\rm eff,0}[\Psi]$. The behavior in the superradiant phase is briefly discussed below. \section{Photon number and its fluctuations in the resonant limit}\label{sec:n_ph} In this part of the paper we study the fluctuational behavior of the superradiance with the use of $S_{\rm eff,0}$ in the limit of full resonance between the qubits and the photon mode, i.e., \begin{equation} \epsilon_j=\bar\epsilon=\omega. \label{res} \end{equation} The disorder in $g_j$, in its turn, is taken into account. 
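For orientation, both (\ref{saddle-point}) and (\ref{omega-c}) are elementary to evaluate numerically; a minimal Python sketch (ours, with $\hbar=k_{\rm B}=1$ and purely illustrative parameter values) reads: \begin{verbatim}
import numpy as np

def omega_c(eps, omega, T):
    # critical collective Rabi frequency, Eq. (omega-c)
    return np.sqrt(eps * omega / np.tanh(0.5 * eps / T))

def phi_star(Omega, N, eps, omega, T):
    # homogeneous saddle point Phi*, Eq. (saddle-point);
    # positive only on the superradiant side Omega > Omega_c
    b = eps / T
    return (N * (eps / Omega) ** 2
            * (np.tanh(0.5 * b) - eps * omega / Omega ** 2)
            * (1.0 + np.cosh(b)) / (np.sinh(b) - b))

# at T = 0.1*omega the transition sits very close to Omega = omega:
print(omega_c(1.0, 1.0, 0.1), phi_star(1.05, 100, 1.0, 1.0, 0.1))
\end{verbatim} For instance, at $T=0.1\,\omega$ the critical frequency (\ref{omega-c}) deviates from $\omega$ only by an exponentially small amount.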
The parameters (\ref{alpha}, \ref{gamma}) are reduced to \begin{equation} \alpha\equiv A_{\epsilon_j=\omega}=\beta \omega\left(1-\frac{ \Omega^2_T}{ \omega^2 }\right) \ , \label{alpha-1} \end{equation} \begin{equation} \gamma\equiv {\it \Gamma}_{\epsilon_j=\omega}= q f(\beta\omega)\frac{\beta \Omega^4_T}{N \omega^3 }. \end{equation} We introduced here the collective Rabi frequency renormalized by $T$, \begin{equation} \Omega_T\equiv\Omega \sqrt{\tanh\frac{\omega }{2T}} \ , \label{omega-T} \end{equation} and the function $f(x) = \frac{\sinh x - x}{1+ \cosh x }\coth^2 \frac{x}{2} $; the parameter $q$ is the ratio of the fourth moment of the coupling constants to the square of their second moment, $ q = \langle g^4\rangle_j/\langle g^2\rangle^2_j $. The absence of disorder in $g_j$ corresponds to $q=1$; in the disordered case $q>1$; $q=9/5$ for a flat distribution ranging from $g_{\rm min}$ to $g_{\rm max}$ with $g_{\rm max}\gg g_{\rm min}$. In the further consideration the photon number \begin{equation} \langle N_{\rm ph}\rangle=\beta^{-1}\int\limits_0^\beta \langle\bar\psi_\tau\psi_{\tau}\rangle d\tau \label{N-ph-def} \end{equation} is analyzed. Alternatively, it is given by the following identity \begin{equation} \langle N_{\rm ph}\rangle=T\sum_n (-G_n)-\frac{1}{2}, \label{Nph-1} \end{equation} \begin{equation} G_n=-\beta\langle \bar \psi_n \psi_n\rangle \label{G-n-def} \end{equation} where $ G_n$ is the $n$-th component of the Matsubara Green function. If the quadratic expansion in (\ref{tr_log}) is applied then one obtains $S_{\rm eff, 0}[\Phi,\bar\psi_n,\psi_n]$ with $\gamma=0$. This action is fully Gaussian with respect to all Matsubara modes. The following expression for the Green function is obtained for arbitrary $\epsilon_j$ and $\omega$ within this expansion: \begin{equation} G_{n }=\frac{1}{{\rm i}2\pi n T-\omega-\Sigma_n[0]} \ . \label{Gn-0} \end{equation} In the resonant limit $\epsilon_j =\omega$ it reads: \begin{equation} G_{n }(\epsilon_j {=}\omega)=\frac{\omega-2{\rm i}\pi n T}{(2 \pi n T+{\rm i}\omega)^2+ \Omega^2_T} \ . \label{Gn} \end{equation} It is used in the calculations below. This expression holds for any $n$ in the Gaussian approach ($\gamma=0$). After the summation one obtains the average photon number: \begin{multline} \langle N_{\rm ph } \rangle_{\rm Gauss} =\\ =\frac{1}{4}\left[\coth\frac{\omega- \Omega_T}{2T}+\coth\frac{\omega+ \Omega_T}{2T} \right]-\frac{1}{2} \ . \label{NphGauss} \end{multline} One can see that $\langle N_{\rm ph } \rangle_{\rm Gauss}$ is divergent at the critical value of the renormalized Rabi frequency $ \Omega_{T,c}=\omega$ and is negative for $ \Omega_T>\omega$. This follows from the relation $G_{n=0}=-\frac{1}{\alpha T}$, which is divergent at the critical point where $\alpha= 0$. The regularization is provided by the expansion with respect to $\mathcal{G}$ in (\ref{tr_log}), which involves higher-order terms in $\Phi$. As we have shown above, the quadratic expansion of $S_{\rm eff}$ in $\Phi$ gives $S_{\rm eff,0}$. The corresponding zero-mode Green function is changed to $ G_0=-\frac{\langle\Phi\rangle}{T} $, which is no longer divergent at the critical point due to $\gamma\neq 0$. Within the $S_{\rm eff,0}$ action, for non-zero modes the expressions for $ G_{n\neq 0}$ are the same as in (\ref{Gn}). We refine the definition for the average $\langle N_{\rm ph } \rangle$ as \begin{equation} \langle N_{\rm ph } \rangle =\langle\Phi\rangle+\sum\limits_{n\neq 0} \langle \bar \psi_n \psi_n\rangle-\frac{1}{2}. \label{Nph-2} \end{equation} The zero mode's part is written explicitly here. 
We emphasize thereby that it is calculated within the non-Gaussian (fourth-order) approach in $\psi_0$. Let us calculate both of the contributions, originating from the superradiant order parameter, $\langle\Phi\rangle$, and from the thermal excitations $\langle \bar \psi_n \psi_n\rangle$. As long as there is no explicit dependence on $\varphi$, one has $\iint d (\mathop{\rm Re} \psi_0) \, d (\mathop{\rm Im} \psi_0) = \pi \int\limits_0^\infty d\Phi$. For $\langle\Phi\rangle$ we find \begin{equation} \langle\Phi\rangle =\frac{\int\limits_0^\infty \Phi e^{ -S_{\rm zm}[\Phi ]} d\Phi}{\int\limits_0^\infty e^{ -S_{\rm zm}[\Phi ]} d\Phi}=-\frac{\alpha}{2\gamma}+\frac{e^{-\frac{\alpha^2}{4\gamma}}}{\sqrt{\pi \gamma} { \rm erfc}\frac{\alpha}{2\sqrt{\gamma}}} \label{Phi} \end{equation} where ${\rm erfc} \, z=1-{\rm erf} \, z$ is the complementary error function. Summation over $n\neq 0$ gives the quasiparticle contribution \begin{equation} \sum\limits_{n\neq 0} \langle \bar \psi_n \psi_n\rangle=\langle N_{\rm ph } \rangle_{\rm Gauss}-\frac{1}{\alpha}. \label{n:fluct} \end{equation} Finally, for the average photon number (\ref{Nph-2}) we obtain \begin{equation} \langle N_{\rm ph } \rangle=-\frac{\alpha}{2\gamma}+\frac{e^{-\frac{\alpha^2}{4\gamma}}}{\sqrt{\pi \gamma} { \rm erfc}\frac{\alpha}{2\sqrt{\gamma}}}+\langle N_{\rm ph } \rangle_{\rm Gauss}-\frac{1}{\alpha}. \label{n-ph} \end{equation} The fluctuations of the photon number are given by the second cumulant $\langle\!\langle N_{\rm ph}^2\rangle\!\rangle\equiv \langle N_{\rm ph}^2\rangle-\langle N_{\rm ph}\rangle^2$. With the use of the above notations it is reduced to \begin{equation} \langle\!\langle N_{\rm ph}^2\rangle\!\rangle= \langle \Phi^2\rangle -\langle \Phi\rangle^2 +T^2\sum\limits_{n\neq 0} G_n^2. \end{equation} Calculation of the integrals over $\Phi$ and summation over $n$ provide \begin{multline} \langle\!\langle N_{\rm ph}^2\rangle\!\rangle= \frac{1}{2\gamma}+\frac{ \frac{\sqrt\pi \alpha}{2\sqrt\gamma } { \ \rm erfc} \frac{\alpha}{2\sqrt{\gamma}}-e^{- \frac{\alpha^2}{4\gamma}} }{ \pi \gamma { \ \rm erfc}^2\frac{\alpha}{2\sqrt{\gamma}}}e^{-\frac{\alpha^2}{4\gamma}}+\\+\langle\!\langle N_{\rm ph }^2 \rangle\!\rangle_{\rm Gauss}-\frac{1}{\alpha^2} \ . \label{c2-ph} \end{multline} We introduced here the second cumulant in the Gaussian approximation, $\langle\!\langle N_{\rm ph}^2\rangle\!\rangle_{\rm Gauss}=T^2\sum\limits_{n} G_n^2$. It reads \begin{multline} \langle\!\langle N_{\rm ph}^2\rangle\!\rangle_{\rm Gauss}=\\ \frac{ \cosh\frac{\omega}{T}\left(\cosh\frac{\Omega_T}{T}+\frac{T}{\Omega_T}\sinh\frac{\Omega_T}{T}\right)-1-\frac{T}{2\Omega_T}\sinh\frac{2\Omega_T}{T}}{4\left(\cosh\frac{\omega}{T}-\cosh\frac{\Omega_T}{T}\right)^2}. \end{multline} Similar to (\ref{n-ph}), the divergent zero-frequency term in the sum is canceled by the $1/\alpha^2$ term in (\ref{c2-ph}). In Sec. \ref{sec:ph_tr} the properties of the photon number and its second cumulant near the phase transition are analyzed in detail. \section{ Phase transition at low temperatures. Resonant limit}\label{sec:ph_tr} \subsection{Average photon number near the phase transition} In the following consideration at low temperatures $T\ll \omega$, we should emphasize that there is also the limitation (\ref{condition-0}), which means that $T$ cannot be arbitrarily small. Namely, it belongs to the domain \begin{equation} \omega\gg T \gg \frac{\omega}{N} \ . \label{condition-1} \end{equation} In this limit we set $f(\beta\omega)=1$ and $\Omega_T=\Omega$ up to corrections exponentially small in $\beta\omega$. 
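The closed-form results (\ref{n-ph}) and (\ref{c2-ph}) are straightforward to evaluate numerically. A minimal Python sketch (ours, for illustration; it is numerically reliable only in the vicinity of the transition, since deep in the normal phase the error function underflows and the asymptotic forms should be used instead) returns the average photon number and the second cumulant: \begin{verbatim}
import numpy as np
from scipy.special import erfc

def cumulants(Omega, T, N, omega=1.0, q=1.0):
    # <N_ph> and <<N_ph^2>> from Eqs. (n-ph) and (c2-ph);
    # resonant limit eps_j = omega, units hbar = k_B = 1
    beta = 1.0 / T
    x = beta * omega
    Omega_T = Omega * np.sqrt(np.tanh(0.5 * x))
    f = (np.sinh(x) - x) / (1.0 + np.cosh(x)) / np.tanh(0.5 * x) ** 2
    alpha = x * (1.0 - (Omega_T / omega) ** 2)
    gamma = q * f * beta * Omega_T ** 4 / (N * omega ** 3)
    s = 0.5 * alpha / np.sqrt(gamma)
    # zero-mode (order-parameter) moments
    phi1 = -0.5 * alpha / gamma \
        + np.exp(-s * s) / (np.sqrt(np.pi * gamma) * erfc(s))
    phi2 = 0.5 / gamma + (np.sqrt(np.pi) * s * erfc(s) - np.exp(-s * s)) \
        * np.exp(-s * s) / (np.pi * gamma * erfc(s) ** 2)
    # Gaussian quasiparticle parts; the divergent n = 0 pieces are
    # removed by the 1/alpha and 1/alpha^2 counterterms below
    n_g = 0.25 * (1.0 / np.tanh(0.5 * beta * (omega - Omega_T))
                  + 1.0 / np.tanh(0.5 * beta * (omega + Omega_T))) - 0.5
    c2_g = (np.cosh(x) * (np.cosh(Omega_T / T)
                          + T / Omega_T * np.sinh(Omega_T / T))
            - 1.0 - 0.5 * T / Omega_T * np.sinh(2.0 * Omega_T / T)) \
        / (4.0 * (np.cosh(x) - np.cosh(Omega_T / T)) ** 2)
    return phi1 + n_g - 1.0 / alpha, phi2 + c2_g - 1.0 / alpha ** 2

n, c2 = cumulants(Omega=1.0, T=0.1, N=100)
print(n, c2, c2 / n)  # photon number, second cumulant, Fano factor
\end{verbatim}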
In this Section we continue to study the case of full resonance between the cavity and all the qubits. We obtain an analytical expansion of the photon number $\langle N_{\rm ph}\rangle$ (\ref{n-ph}) around the critical point $\Omega_{\rm c}=\omega$. The series expansion in the dimensionless detuning $\frac{ \Omega-\omega }{\omega}$ for $\langle N_{\rm ph}\rangle$ is \begin{multline} \langle N_{\rm ph}\rangle\approx\left[ \sqrt{\frac{NT}{\pi q\omega}}+\delta n_0\right]-\frac{1}{2} + \\+ \left[\frac{N(\pi - 2)}{\pi q }+\delta n_1\right]\frac{ \Omega-\omega }{\omega}+O\left[\left(\frac{ \Omega-\omega }{\omega}\right)^2\right] \ . \label{n-ph-expansion} \end{multline} The main contribution to $\langle N_{\rm ph}\rangle$ follows from the $\psi_0$ mode as powers of $\sqrt{\frac{NT}{\omega}}$. The prefactors contain the leading term given by the zero mode, and small corrections $\delta n$. The corrections follow from the fluctuations of the modes $ \psi_{n\neq 0}$. Their expressions may be obtained from the expansion of (\ref{n:fluct}) as \begin{equation} \delta n_0=\frac{1}{4}-\frac{T}{4\omega} \label{delta-n-0} \end{equation} and \begin{equation} \delta n_1=\frac{3T^2-\omega^2}{24 T\omega} \ . \label{delta-n-1} \end{equation} The zeroth-order term in (\ref{n-ph-expansion}) gives a large but finite photon number at the critical point \begin{equation}\langle N_{{\rm ph} }\rangle_{\rm c}=\sqrt{\frac{NT}{\pi q\omega}}-\frac{1}{4}. \label{Nc} \end{equation} The leading term is much larger than unity under the condition $N\gg \omega/T$. If one goes beyond the validity of $S_{\rm eff,0}$ by taking the formal limit $T\to 0$ in (\ref{Nc}) then the unphysical value of $-\sfrac{1}{4}$ is obtained. This demonstrates that at low temperatures the non-Gaussian fluctuations need to be taken into account. It is known that $\langle N_{\rm ph} \rangle=1/2$ in the zero-temperature limit above the critical point. This is because the ground-state wave function contains \sfrac{1}{2} photon on average. The ground state is changed at the critical point from the direct product of the zero-photon state and the qubits' ground state, $|n{=}0; \sigma_j{=}-1 \rangle$ ($j=1, \ ... \ N$), to an entangled state with a single photon and excited qubits. The field-theoretical approach provided does not allow one to describe this limit because it is restricted to finite temperatures $T\gg \omega/N$. Nevertheless, this formalism allows one to demonstrate a positive change of the negative constant term in $\langle N_{{\rm ph} }\rangle_{\rm c}$ at very low $T$, for instance, when the third-order correction $ \propto \Phi \bar\psi_n \psi_n$ is taken into account in the action $ S_{\rm eff,0} $. This correction originates from the dependence of the normal part of the self-energy $\Sigma_n $ on $\Phi $. In Appendix \ref{app-corr} it is shown that the integration over all non-zero modes provides a correction $\delta S[\Phi]= \delta\alpha \Phi$ to $S_{\rm zm}[\Phi]$ due to the third-order term. At the critical point and low temperatures $T\ll \omega$ this coefficient reads $\delta\alpha_{\rm c}=\frac{3 \omega}{4T N}$. This results in the positive shift in (\ref{Nc}) as $\langle N_{{\rm ph} }\rangle_{\rm c}'=\langle N_{{\rm ph} }\rangle_{\rm c}+b$ where $b=\frac{3}{8}(1-2/\pi)\approx 0.1363$. 
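For orientation, taking $q=1$ and the parameters used in the figures below, Eq. (\ref{Nc}) gives $\langle N_{{\rm ph}}\rangle_{\rm c}\approx \sqrt{100\cdot 0.1/\pi}-1/4 \approx 1.5$ at $N=100$ and $T=0.1\,\omega$, and $\langle N_{{\rm ph}}\rangle_{\rm c}\approx 5.4$ at $N=1000$.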
\subsection{Fluctuations and the Fano factor } At the critical point and in the low-temperature limit (\ref{condition-1}) the fluctuations of the photon number are: \begin{equation} \langle\!\langle N_{\rm ph}^2\rangle\!\rangle_{\rm c}= \frac{(\pi -2) N T}{2 \pi \omega } + O[T/\omega]\ . \label{N2c} \end{equation} This is the sum of a large leading term due to the superradiant order parameter fluctuations, $\langle\!\langle \Phi^2 \rangle\!\rangle$, and a small correction $\sim T/\omega\ll 1$ due to the weak fluctuations of quasiparticles. The relative value of the fluctuations, \begin{equation} r=\frac{ \langle\!\langle N_{\rm ph}^2\rangle\!\rangle }{\langle N_{\rm ph}\rangle^2}, \label{r} \end{equation} is as large as $e^{\omega/T}$ in the decoupling limit $\Omega=0$ and decays monotonically as the second cumulant decreases. It is less than unity above the phase transition. Using the expressions for $\langle N_{\rm ph}\rangle$ and $\langle\!\langle N_{\rm ph}^2\rangle\!\rangle$, Eqs. (\ref{n-ph}) and (\ref{c2-ph}), one obtains the expansion near the phase transition up to first order in the dimensionless detuning: \begin{equation} r\approx \frac{\pi-2}{2} - (\pi-3) \sqrt{\frac{\pi N \omega }{ qT}} \ \frac{ \Omega-\omega }{\omega} \ . \label{r-1} \end{equation} At the critical point ($\Omega=\omega$) the main contribution is due to the zero mode and, consequently, $r_{\rm c}=\frac{\langle\!\langle \Phi^2 \rangle\!\rangle}{\langle \Phi\rangle^2}$. The universal value of the relative fluctuations $r$ at the transition point is \begin{equation}r_{\rm c}=\frac{\pi}{2}-1.\end{equation} It follows from the $\Phi$-integrals (\ref{Phi}) at $\alpha=0$ and is exact up to a small correction $\sim N^{-1/2 }$. From the expansion (\ref{r-1}) the width of the fluctuational Ginzburg-Levanyuk region, $\Omega_{\rm GL}$, near the critical Rabi frequency can be defined. This is the domain where the fluctuations and the average value of the photon number are of the same order. This consideration can be applied straightforwardly to the superradiant phase where $\Omega>\Omega_{\rm c}$. The parameter $\Omega_{\rm GL}$ is obtained from the matching conditions \begin{equation} r(\Omega)\sim 1 , \ \Omega-\Omega_{\rm c}\sim \Omega_{\rm GL} \ , \label{fluct-def} \end{equation} which give \begin{equation} \Omega_{\rm GL}\sim\sqrt{\frac{\omega T}{N}}. \label{fluct-zone} \end{equation} Approaching the critical point from the normal phase, i.e., $\Omega<\Omega_{\rm c}$, the fluctuations are always greater than the average values and the definition (\ref{fluct-def}) is not valid. Instead of (\ref{fluct-def}) we introduce the width $\Omega'$ over which the superradiant order parameter fluctuations start to grow and become relevant. In this region the contribution to $\langle N_{\rm ph }\rangle$ due to the non-Gaussian fluctuations of $|\psi_0|^4$ is comparable with the quasiparticle part related to $\psi_{n \neq 0}$. We define $\Omega'$ through the value of $\Omega=\Omega_{\rm c}-\Omega'$ which provides the matching between the average values obtained in the Gaussian and non-Gaussian approaches: \begin{equation} \langle N_{\rm ph }\rangle_{\rm Gauss}\sim \langle N_{\rm ph }\rangle , \ \Omega_{\rm c}- \Omega\sim \Omega' \ . \label{fluct-def-normal} \end{equation} From (\ref{NphGauss}) and (\ref{n-ph}) it follows that $ \langle N_{\rm ph }\rangle_{\rm Gauss}\sim T/\Omega'$ and $\langle N_{\rm ph }\rangle\sim \sqrt{NT/ \omega}$. 
The width $\Omega'$ of the fluctuation-dominated region appears to be of the same order as in the superradiant phase, i.e., \begin{equation} \Omega'\sim\Omega_{\rm GL}\sim\sqrt{\frac{\omega T}{N}} \ . \label{fluct-zone-normal} \end{equation} It is rather narrow and is much less than the temperature due to the condition (\ref{condition-1}). \begin{figure*}[htp] \includegraphics[scale=0.48]{nph-compare-log-omega-100.pdf} \includegraphics[scale=0.48]{F-compare-log-omega-100.pdf} \\ \includegraphics[scale=0.48]{nph-compare-log-omega-1000.pdf} \includegraphics[scale=0.48]{F-compare-log-omega-1000.pdf} \caption{ (a, c) Average photon number $\langle N_{\rm ph}\rangle$, fluctuations $\sqrt{\langle\!\langle N_{\rm ph}^2\rangle\!\rangle}$ and (b, d) Fano factor $F$ as functions of the collective Rabi frequency $\Omega= g\sqrt{N }$ (in units of the resonator mode frequency $\omega$). All curves are calculated in the limit of full resonance between the qubit and cavity mode frequencies, $\bar\epsilon=\omega$. The temperature is low, $T=0.1 \, \omega$, and the qubit number is $N=100$ on panels (a, b) and $N=1000$ on panels (c, d). White, light blue and light green areas (color online) correspond to the normal (N) phase, the fluctuational region and the superradiant (SR) phase, respectively. The critical point is $\Omega_{\rm c}= \omega $ and the width of the fluctuational region is $2\Omega_{\rm GL}\approx 0.063 \ \omega$ on panels (a, b) and $2\Omega_{\rm GL}= 0.02 \ \omega$ on panels (c, d). Red and green curves stand for calculations based on the $S_{\rm eff,0}$ and $S_{\rm eff,1}$ actions, respectively. The Fano factor (b, d) in the normal phase demonstrates that $F_{\rm min}<F<1$. This means a negative correlation between photons (antibunching effect). The horizontal dotted line $F=1$ separates the regions of negative ($F<1$) and positive ($F>1$) photon correlations. The fluctuational region in (b, d) demonstrates a growth of the Fano factor with a peak at $F_{\rm c}>1$, which means positive correlations between photons. The superradiant phase shows a reentrance to the negative correlations with the decay of $F$.} \label{plots} \end{figure*} In order to illustrate the above results we present in Fig.~\ref{plots} (a, c) the data for $\langle N_{\rm ph}\rangle$ and $\sqrt{\langle\!\langle N_{\rm ph}^2\rangle\!\rangle}$ as functions of $\Omega$. We consider the full resonance limit, $\bar\epsilon=\omega$, and the low-temperature regime $T=0.1 \ \omega$. The qubit number is $N=100$ on panel (a) and $N=1000$ on panel (c); hence, the constraint (\ref{condition-1}) is satisfied. Such a qubit number can be realized in contemporary quantum metamaterials. White, thin light blue and light green sectors correspond to the normal (N) phase, the fluctuational region and the superradiant (SR) phase, respectively. The critical point in this low-temperature regime is $\Omega_{\rm c}= \omega $. The width of the Ginzburg-Levanyuk fluctuational region is $2\Omega_{\rm GL}\approx 0.063 \ \omega$. The red curves are obtained with the use of the action $S_{\rm eff,0}$ and the corresponding analytical results (\ref{n-ph}) and (\ref{c2-ph}). It is shown that in the normal phase there are exponential dependencies of the photon number and its fluctuations, as follows from the linear sections on the logarithmic scale. Tuning $\Omega$ to the critical value initiates the superradiant transition, where the photon number increases rapidly. Further increase of $\Omega$ drives the system into the superradiant state. 
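The crossover of the photon statistics across these three regions can also be traced qualitatively with the cumulants() sketch given after Eq. (\ref{c2-ph}) above (an illustrative scan with the parameters of panels (a, b)): \begin{verbatim}
# qualitative scan across the transition (T = 0.1, N = 100, omega = 1)
import numpy as np
for Om in np.linspace(0.80, 1.10, 7):
    n, c2 = cumulants(Om, T=0.1, N=100)
    print(f"Omega/omega = {Om:.2f}   F = {c2 / n:.3f}")
\end{verbatim}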
As expected, the quadratic expansion for the logarithm in $S_{\mathcal{G}}$ does not work well in this phase. A correct description requires the use of the action $S_{\rm eff,1}$ from (\ref{s_eff1}). The green curves are the results obtained by means of $S_{\rm eff,1}$, where the integration over $\Phi$ is performed numerically. Dashed parts of the red curves demonstrate the difference between the two approaches. The green curves show that $\langle N_{\rm ph}\rangle$ and its fluctuations grow sub-exponentially in the superradiant phase. It also follows from this plot that the relative fluctuation values obey $r_{\rm n}>r_{\rm c}$ in the normal phase and $r_{\rm sr}<r_{\rm c}$ in the superradiant phase. It is also instructive to analyze the Fano factor, defined as the ratio between the second and first cumulants, $$ F=\frac{\langle\!\langle N_{\rm ph}^2\rangle\!\rangle}{\langle N_{\rm ph}\rangle}. $$ This is a representative parameter carrying information about the photon statistics. The value of $F$ reflects the type of coherence between the photons: $F=1$ means that they are uncorrelated, while $F<1$ and $F>1$ correspond to negative and positive correlations, respectively. As shown in Fig.~\ref{plots} (b, d), the dependence $F(\Omega)$ demonstrates rich behavior. The parameters of the calculation are the same as in panel (a): $T=0.1 \ \omega$, $N=100$ and $\bar\epsilon=\omega$. Red and green curves correspond to calculations based on $S_{\rm eff,0}$ and $S_{\rm eff,1}$, respectively. In the decoupling limit $\Omega=0$ the value of the Fano factor is \begin{equation} F_0=\frac{1}{1-e^{-\beta \omega } } >1 \ . \label{F-0} \end{equation} In the low temperature limit $F_0\approx 1+e^{-\beta \omega }$, which means that the photons are weakly correlated. For finite $\Omega$ the system enters the negative-correlation domain, where the dependence is non-monotonic with $F_{\rm min}<F<1$. There is a minimum with $F_{\rm min}\approx 0.8$ at an intermediate strength of $\Omega$. This indicates a negative correlation between photons (antibunching effect) in the normal phase due to the interaction between photons through the qubit environment. It is remarkable that the dependence $F(\Omega)$ in the fluctuational region demonstrates a dramatic change, where $F$ grows rapidly and becomes greater than unity. There is a maximum at the critical point, given by \begin{equation} F_{\rm c}=\frac{1}{2}(\pi-2) \sqrt{\frac{NT}{\pi \omega}} \ . \end{equation} The latter means a strongly positive coherence between photons near the superradiant transition. The Fano factor decays upon entering the superradiant phase as $\Omega$ is increased further. As one can see, there is a reentrance to negative correlations with $F_{\rm sr}<1$. The finite width of the fluctuational region and the peak in the Fano factor dependence are finite-size effects. In the thermodynamic limit $N\to \infty$ the Fano factor peak shrinks to a singularity at the critical point. This tendency is seen from a comparison of Figs.~\ref{plots} (b) and (d), where $N$ is changed by an order of magnitude. \subsection{Numerical simulation} In Fig.~\ref{plots-num} we compare the results obtained in the field-theoretical formalism with exact numerical simulations. The qubit number $N=10$ means that the system is far from the thermodynamic limit.
Despite the fact that $\kappa_{\rm c}$ is not very small compared to unity, $\kappa_{\rm c}\approx 0.58$ in Fig.~\ref{plots-num} (a), good quantitative agreement between the numerical and theoretical results is observed. Surprisingly, the analytical solution is in good agreement with the numerical calculations even when $\kappa_{\rm c}=1$, as shown in Fig.~\ref{plots-num} (b). We present results for $\langle N_{\rm ph} \rangle$ and $\langle\!\langle N_{\rm ph}^2\rangle\!\rangle$ obtained in three different ways. The red dashed curves are obtained with the use of the action $S_{\rm eff,0}$ and the analytical expressions (\ref{n-ph}) and (\ref{c2-ph}). The green dotted curves are derived with the use of $S_{\rm eff,1}$; there is a difference between them in the superradiant phase. The blue solid curves represent the results of numerical calculations based on the definitions \begin{equation} \langle N_{\rm ph} \rangle=\frac{{\rm Tr}[\hat\rho \hat\psi^{\dagger}\hat\psi]}{{\rm Tr}[\hat\rho ]} \ , \end{equation} and \begin{equation} \langle\!\langle N_{\rm ph}^2 \rangle\!\rangle=\frac{{\rm Tr}[\hat\rho (\hat\psi^{\dagger}\hat\psi)^2]}{{\rm Tr}[\hat\rho ]} -\langle N_{\rm ph} \rangle^2 \ . \end{equation} Here the equilibrium density matrix is \begin{equation}\hat\rho=\exp(-\hat H/T)\ . \end{equation} It is block diagonal due to the conservation of the total excitation number in the system. This follows from the commutation of the excitation number operator, $\hat M =\hat\psi^\dagger\hat\psi+\sum_j\hat\sigma^+_j\hat\sigma^-_j$, with $\hat H$. In the calculations the maximum excitation number is $M_{\rm max}=50$. This means that $\hat\rho$ has $M_{\rm max}$ blocks, each of dimension $2^M$, $ M=1, \ ... \ , M_{\rm max}$. For the above parameters the most relevant part of the Fock space corresponds to $M$ ranging from one to around 30. We observe a good correspondence between the theoretical curves (red dashed and green dotted) and the numerical simulation (blue solid curves) for the range of Rabi frequencies $0<\Omega\lesssim \omega$, which covers the normal phase and the fluctuational region. In the superradiant phase, where $ \Omega \gtrsim \omega $, the numerical results are in good agreement with the more precise calculations based on $S_{\rm eff,1}$. \begin{figure*}[htp] \includegraphics[scale=0.45]{nph-compare-log-a.pdf} \includegraphics[scale=0.45]{nph-compare-log-b.pdf} \caption{ Comparison of results obtained with the effective action techniques and with exact numerical calculations based on the equilibrium density matrix. The data for the average photon number $\langle N_{\rm ph}\rangle$ are presented. The temperature is low: $T/\omega=0.3$ for panel (a) and $T/\omega=0.1$ for panel (b); the qubit number is $N = 10$. The range of qubit-cavity coupling covers the normal phase, the fluctuational region and the superradiant phase. The data obtained from numerical simulations are shown as blue curves. Results of the field-theoretical approaches based on $S_{\rm eff,0}$ and $S_{\rm eff,1}$ are shown as red dashed and green dotted curves, respectively. We note the surprisingly small deviation of the solid blue lines from the green dotted lines. Despite the parameters being near the edge of the applicability range of the theory, good agreement between the numerical results and the theoretical calculations is clearly observed. } \label{plots-num} \end{figure*}
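For readers wishing to reproduce such curves, the block-diagonal structure of $\hat\rho$ can be exploited directly. The following Python sketch (function names are ours) diagonalizes each excitation-number block and accumulates the thermal averages; as a simplification relative to the full $2^M$-dimensional blocks described above, it keeps only the permutation-symmetric (Dicke) sector of the qubit Hilbert space, which suffices to illustrate the procedure:
\begin{verbatim}
import numpy as np

def tc_block(M, N, omega, eps, g):
    # One excitation-number block of the Tavis-Cummings Hamiltonian,
    # restricted to the permutation-symmetric (Dicke) qubit states.
    # Basis: |n photons, m = M - n collective qubit excitations>.
    ns = [n for n in range(M + 1) if M - n <= N]
    H = np.zeros((len(ns), len(ns)))
    for i, n in enumerate(ns):
        H[i, i] = omega * n + eps * (M - n)
    for i, n in enumerate(ns[:-1]):
        m = M - n  # qubit excitations before a photon is emitted
        # <n+1, m-1| g (psi^dag J^- + h.c.) |n, m> in the symmetric sector
        H[i, i + 1] = H[i + 1, i] = g * np.sqrt((n + 1) * m * (N - m + 1))
    return H, np.array(ns)

def photon_stats(N, omega, eps, g, T, M_max=50):
    # <N_ph>, <<N_ph^2>> and the Fano factor from rho = exp(-H/T)
    Z = n1 = n2 = 0.0
    for M in range(M_max + 1):
        H, ns = tc_block(M, N, omega, eps, g)
        w, V = np.linalg.eigh(H)
        p = (V**2) @ np.exp(-w / T)  # diagonal of rho in the (n, m) basis
        Z += p.sum(); n1 += (p * ns).sum(); n2 += (p * ns**2).sum()
    mean, var = n1 / Z, n2 / Z - (n1 / Z)**2
    return mean, var, var / mean

# Example: N = 10 at resonance, T = 0.1*omega, Omega = g*sqrt(N) = 0.5*omega
print(photon_stats(10, 1.0, 1.0, 0.5 / np.sqrt(10), 0.1))
\end{verbatim}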
\section{Some generalizations}\label{sec:generalization} \subsection{High temperatures} Below we discuss results obtained at the critical point in the high temperature regime $T\gg \omega$. Note that the phase transition at $\alpha=0$ (see Eq. (\ref{alpha})) occurs at an increased collective coupling: \begin{equation} \Omega_{\rm c}=\sqrt{T\omega}. \end{equation} We use (\ref{n-ph}) and (\ref{c2-ph}) to obtain the leading-order expansions for $\langle N_{\rm ph}\rangle$ and $\langle\!\langle N_{\rm ph}^2\rangle\!\rangle$ in the large parameter $T/\omega $. In Appendix \ref{app-corr} we show that the Gaussian approximation for quasiparticle fluctuations is valid for any $N$ and that the corrections due to the cross terms $\propto \Phi\bar\psi_n\psi_n$ are always small. This is distinct from the condition $N\gg \omega/T\gg 1$ in the low-temperature limit addressed above. We obtain that the photon number at the critical point is \begin{equation} \langle N_{\rm ph}\rangle_{\rm c}=\sqrt{\frac{3 N}{\pi }} \frac{T}{\omega}. \end{equation} In contrast to the low temperature limit, where it scales as $\propto \sqrt{T}$, in the high temperature regime under consideration it grows as $\propto T$. The fluctuations of photons, \begin{equation} \langle\!\langle N_{\rm ph}^2\rangle\!\rangle_{\rm c}= \frac{3(\pi -2) N T^2 }{2 \pi \omega ^2}+\frac{T^{5/2}}{8 \sqrt{2} \omega ^{5/2}}, \label{fluct-high-t} \end{equation} in contrast to (\ref{N2c}), contain not only the contribution from $\Phi$ (first term) but also that from the non-zero modes $\psi_n$ (second term). Thus, the high temperature limit is distinct in the sense that there are two domains of $N$ where fluctuations have different contributions. The first domain of $N$ corresponds to the thermodynamic limit of very large qubit number. It is given by (\ref{fluct-high-t}) as \begin{equation} N\gg \sqrt{\frac{T}{\omega}}, \end{equation} when only the superradiant zero mode is relevant. The second one is the intermediate region, \begin{equation} \sqrt{\frac{T}{\omega}} \gtrsim N \ , \label{condition-high-t} \end{equation} when the contribution of the order parameter fluctuations can be neglected compared to that of the thermal fluctuations of quasiparticles. The relative value at the transition in this intermediate domain, \begin{equation} r_{\rm c}=\frac{\pi-2}{2} +\frac{\pi \sqrt{T}}{24 \sqrt{2}N \sqrt{\omega }} \ , \label{r-c-high-t} \end{equation} shows a deviation from the universal value $\pi/2-1$ due to the second term. Thus, $N\sim\sqrt{ T / \omega} $ defines the condition for the crossover between two types of fluctuational behavior. Namely, $N\gg \sqrt{ T / \omega} $ corresponds to the thermodynamic limit, where fluctuations of the superradiant order parameter provide the leading contribution to the fluctuations of the photon number. In the case $ \sqrt{ T / \omega}\gtrsim N$ the contribution due to thermal fluctuations of quasiparticles becomes dominant. \subsection{Inhomogeneous broadening} In the above results for the resonant limit, a spread of the coupling energies $g_j$ yields the prefactor $q^{-1}$ for the qubit number. Inhomogeneous broadening of the qubit energies modifies the expressions in a more significant way, as described below. We assume that the qubit frequencies are distributed in a certain interval, that the temperature is low enough, $T\ll \epsilon_j$, and that the couplings are homogeneous, $g_j\equiv g$.
We assume that the system is at the critical point, $\alpha=0$, and that the photon number (\ref{n-ph}) is contributed by the zero mode only, i.e., $\langle N_{\rm ph}\rangle = \frac{1}{\sqrt{\pi\gamma}}$, while the quasiparticle contributions are neglected. In the definition (\ref{gamma}) of $\gamma$, the sum over the qubit index is replaced by an integral over energies, $\sum_j\to N \int \rho( {\epsilon}) \, d {\epsilon}$, where the density of states $\rho( {\epsilon})$ is normalized to unity ($\epsilon$ is a qubit's energy). We discuss two cases, which correspond to flat distributions of finite and of very broad width. In the first case we consider the distribution with median energy $\overline \epsilon$ and width $\Delta$; hence, the density of states is \begin{equation} \rho({\epsilon})=\frac{1}{\Delta}\theta(\Delta/2-|{\epsilon}-\overline{ \epsilon}|). \end{equation} The photon number is obtained as \begin{equation} \langle N_{\rm ph} \rangle=z(\Delta/{\overline \epsilon})\frac{\sqrt{NT{\overline \epsilon}}}{\sqrt \pi \omega} , \label{n-ph-offres} \end{equation} where the dimensionless prefactor $z$ is \begin{equation} z(x)= \left(\frac{1}{x}-\frac{x}{4} \right) \ln \frac{1+x/2}{1-x/2}. \end{equation} In the homogeneous limit, $\Delta\to 0$, this prefactor is unity. Note that the expression (\ref{n-ph-offres}) provides the photon number at the critical point for the off-resonant regime, where ${\overline \epsilon} \neq \omega$.
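As a quick numerical illustration of this prefactor, one may tabulate $z(x)$ directly; a minimal Python sketch (ours), confirming that $z\to 1$ for a narrow distribution and decays as the distribution broadens:
\begin{verbatim}
import numpy as np

def z(x):
    # dimensionless prefactor for a flat distribution of width Delta;
    # the argument is x = Delta / epsilon_bar, valid for 0 < x < 2
    return (1/x - x/4) * np.log((1 + x/2) / (1 - x/2))

for x in [1e-4, 0.5, 1.0, 1.9]:
    print(x, z(x))   # ~1, 0.958, 0.824, 0.188
\end{verbatim}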
In the second case of a very broad distribution, the qubit energies belong to the interval from $ \epsilon_{\rm min}$ up to a large $\epsilon_{\rm c} \gg \epsilon_{\rm min}$, which is the spectrum cut-off. This case is considered as a thermodynamic limit where the average level spacing can be introduced, $\delta\epsilon\equiv \epsilon_{\rm c}/N$. Under the assumption \begin{equation} T\ll \{{ \epsilon_{\rm min}},\omega \} \ll \epsilon_{\rm c} \end{equation} we find that \begin{equation} \langle N_{\rm ph} \rangle= \sqrt\frac{2T}{\pi \delta\epsilon} \frac{ \epsilon_{\rm min}}{\omega}\ln \frac{\epsilon_{\rm c}}{ \epsilon_{\rm min}} . \label{n-ph-wide} \end{equation} In a physically relevant situation the lower edge of the qubit spectrum $ \epsilon_{\rm min}$ may be of the order of the resonator mode frequency; hence, their ratio is of order unity. The logarithm is also not a very large number. Interestingly, in this case we obtain that the photon number is affected mainly by the ratio between the smallest energy scales -- the temperature and the level spacing. \subsection{Off-resonant regime} In this subsection we generalize the result for the photon number to the case where $\omega$ and $\bar \epsilon=\epsilon_j$ are out of resonance. We assume no disorder in $g_j$. The value of $\langle N_{\rm ph}\rangle$ is given by the same expression as in Eq. (\ref{n-ph}), but $\alpha$, $\gamma $ and the Gaussian part are taken in a more general form due to $\bar\epsilon \neq \omega$. The functional coefficients are \begin{equation} \alpha^{(\bar\epsilon{\neq}\omega)} =\frac{\omega}{T}- \frac{\Omega^2 }{T \bar\epsilon}\tanh\frac{ \bar\epsilon}{2T} \ , \label{alpha-off-res} \end{equation} \begin{equation} \gamma^{(\bar\epsilon{\neq}\omega)}= \frac{\Omega^4 }{NT\bar\epsilon^3} \ \frac{\sinh \frac{ \bar\epsilon}{ T} - \frac{ \bar\epsilon}{ T}}{ \cosh\frac{ \bar\epsilon}{ T} +1 }. \label{gamma-off-res} \end{equation} The Gaussian part is given by the sum (\ref{Nph-1}) with $G_n$ from (\ref{Gn-0}). In the off-resonant case it reads \begin{multline} \langle N_{\rm ph}\rangle_{\rm Gauss}^{(\bar\epsilon{\neq}\omega)}= \\ =\frac{(\omega -\bar\epsilon ) \sinh \frac{E(\bar\epsilon,\Omega)}{2 T} -E(\bar\epsilon,\Omega)\sinh \frac{\omega +\bar\epsilon }{2 T} }{2 E(\bar\epsilon,\Omega) \left(\cosh \frac{E(\bar\epsilon,\Omega)}{2 T} -\cosh \frac{\omega +\bar\epsilon }{2 T} \right)}-\frac{1}{2} \label{NphGauss-off-res} \end{multline} where \begin{equation} E(\bar\epsilon,\Omega) = \sqrt{ 4 \Omega^2 \tanh\frac{\bar\epsilon}{2T} +(\bar\epsilon -\omega )^2}. \end{equation} In the resonant limit $\bar\epsilon=\omega$, addressed in Sec. \ref{sec:n_ph}, the expression (\ref{NphGauss-off-res}) reproduces (\ref{NphGauss}). In Figs. \ref{maps} (a) and (b) the photon number is plotted as a function of $\Omega$ and the qubit energy $\bar\epsilon$ (in units of $\omega$). The effective action $S_{\rm eff,1}$ is employed in this calculation. The data in (a) demonstrate the behavior at the low temperature $T=0.1 \, \omega$; (b) demonstrates the behavior at the intermediate temperature $T=\omega$. The qubit number is $N=100$ in both plots. The dark (bright) regions in the maps correspond to the normal (superradiant) phase. Red curves depict the dependence of the critical coupling $\Omega_{\rm c}(\bar\epsilon)$ from (\ref{omega-c}), where $\omega$ is kept constant. The curves in the insets show the average photon number as a function of $\Omega/\omega$ for the cuts marked by green dashed lines in the plots. Red points in the insets stand for the critical Rabi frequency for the given $T$ and the cuts of $\bar\epsilon$ in (a) and (b). These plots demonstrate the typical scales of the photon number in the normal and superradiant phases for the low and intermediate temperature regimes. The red curves corresponding to the $\Omega_{\rm c}(\bar\epsilon)$ relation reproduce the asymptotics for low temperatures in (a), where $\Omega_{\rm c}(\bar\epsilon)\propto \sqrt{\bar\epsilon}$, and for high temperatures, with $\Omega_{\rm c}(\bar\epsilon)\approx {\rm const}$, in (b). \begin{figure*}[ht] \includegraphics[scale=0.52]{map-loT.pdf} \includegraphics[scale=0.52]{map-hiT.pdf} \caption{ Average photon number $\langle N_{\rm ph}\rangle$ obtained by means of $S_{\rm eff,1}$ as a function of the Rabi frequency $\Omega$ and the qubit energies $\epsilon_j=\bar\epsilon$ in the non-resonant regime $\omega\neq \bar\epsilon$. The dark regions in the maps correspond to the normal phase; bright regions correspond to the superradiant phase. The qubit number is $N=100$; $\Omega$ and $\bar\epsilon$ are measured in units of the resonator frequency $\omega$. (a) Data calculated for the low temperature regime $T=0.1 \, \omega$. (b) Data calculated for the intermediate temperature $T= \omega$. The red solid curve corresponds to the critical $\Omega_{\rm c}$ as a function of $\bar\epsilon$ given by the relation (\ref{omega-c}). Insets in (a) and (b) show $\langle N_{\rm ph}\rangle$ as a function of $\Omega/\omega$ for the cuts marked by green dashed lines; red points mark the critical Rabi frequency for the given $T$ and $\bar\epsilon$ in the cut. } \label{maps} \end{figure*} \section{ Full counting statistics} \label{sec:fcs} \subsection{Generating action} The effective action for quantum fluctuations (\ref{s_eff0}) allows us to derive the full counting statistics (FCS) of the photon number, namely the cumulant and moment generating functions (CGF and MGF), which are functions of a real counting variable $\xi$. In our consideration the generating action is introduced in imaginary time.
The CGF and MGF are defined through the partition function $Z(\xi)$ as follows: \begin{equation} {\rm CGF}(\xi)= \ln {\rm MGF}(\xi) , \quad {\rm MGF}(\xi)=\frac{Z(\xi)}{Z(0)}, \end{equation} \begin{multline} Z(\xi)= \int D[\Psi] \exp\Big[-S_{\rm eff,0}[\Phi,\bar\psi_n,\psi_n]-\\-{\rm i}\xi\Big(\Phi+\sum\limits_{n\neq 0} \bar \psi_n\psi_n-1/2\Big)\Big]. \label{z} \end{multline} $\mathcal{T}$-ordering in the imaginary time representation of the path integrals implies that the photon number, introduced in (\ref{N-ph-def}), is defined in the generating term as \begin{equation} N_{\rm ph}=T\int\limits_0^\beta \bar\psi_\tau\psi_{\tau+{\it 0}} d\tau \ . \label{n-def-0} \end{equation} Alternatively, the generating term can also be represented as the half sum of (\ref{n-def-0}) with $+ {\it 0}$ and $- {\it 0}$, which is symmetric under $\mathcal{T}$- and anti-$\mathcal{T}$-ordering. In the Matsubara representation we obtain the generating action in the form (\ref{z}) after such a symmetrization. Due to the commutation relation of the photon operators, we include the $-\sfrac{1}{2}$ term in (\ref{z}). The photon number moments $\langle N_{\rm ph}^n \rangle \equiv \langle (\hat\psi^\dagger \hat\psi)^n \rangle$ are given by the derivatives \begin{equation} \langle N_{\rm ph}^n\rangle=({\rm i})^n\left. \frac{\partial^n}{\partial\xi^n}{ \rm MGF}(\xi)\right|_{\xi=0}, \end{equation} while the cumulants are defined as \begin{equation} \langle\!\langle N_{\rm ph}^n\rangle\!\rangle=({\rm i})^n\left. \frac{\partial^n}{\partial\xi^n}{\rm CGF}(\xi)\right|_{\xi=0}. \label{c-n} \end{equation} Path integration in (\ref{z}) reduces to an infinite product of Matsubara Green functions involving the counting variable, \begin{multline} {\rm MGF}(\xi)= e^{{\rm i}\xi /2} \ \frac{\int\limits_0^\infty e^{ -(\alpha+{\rm i}\xi)\Phi-\gamma \Phi^2} d\Phi}{\int\limits_0^\infty e^{ - \alpha \Phi-\gamma \Phi^2} d\Phi} \prod\limits_{n\neq 0} \frac{ G_{ n }(\xi) }{ G_{n }(0)} . \label{mgf-0} \end{multline} The Green function with the counting variable reads \begin{equation} G_{ n }(\xi)=\frac{1}{2\pi {\rm i}n -(\omega+{\rm i}\xi T) - \Sigma_{n}[0]}\ , \ n\neq 0 \ . \label{g-xi} \end{equation} Calculation of the integrals and of the product in (\ref{mgf-0}) yields, for the resonant case ($\epsilon_j=\bar\epsilon=\omega$), \begin{equation} {\rm MGF}(\xi)={\rm MGF}_0(\xi) {\rm MGF}_{\rm fl}(\xi) , \label{mgf} \end{equation} where the zero mode's and quasiparticles' parts are \begin{equation} {\rm MGF}_0(\xi)= \exp\Big[\frac{2{\rm i}\alpha\xi-\xi^2}{4\gamma} \Big] \frac{ { \rm erfc} \frac{\alpha+{\rm i}\xi}{2\sqrt{\gamma}}}{{ \rm erfc} \frac{\alpha}{2\sqrt{\gamma}}} \end{equation} and \begin{multline} {\rm MGF}_{\rm fl}(\xi)=\\=\Big[1+\frac{{\rm i}\xi T\omega}{\omega^2-\Omega_T^2}\Big]\frac{(\cosh\frac{\omega}{T}-\cosh\frac{\Omega_T}{T})e^{{\rm i}\xi /2}}{\cosh\!\Big[\!\frac{\omega}{T} +\frac{{\rm i}\xi}{2}\!\Big]-\cosh\sqrt{\frac{\Omega_T^2}{T^2}-\frac{\xi^2}{4}}} \ . \end{multline} With the use of this result for the MGF one can recover the above expressions (\ref{n-ph}) and (\ref{c2-ph}) for the photon number and its fluctuations. \subsection{FCS at the phase transition} In the thermodynamic limit of large enough $N$, the leading contribution to the cumulants is given by that of the zero mode, ${\rm MGF}_0(\xi)$. Thus, the CGF at the critical point is \begin{equation} {\rm CGF}_0(\xi)=\frac{ -\xi^2}{4\gamma}+\ln \Big[{ \rm erfc} \frac{ {\rm i}\xi}{2\sqrt{\gamma}} \Big]. \end{equation}
The first six cumulants, which follow from ${\rm CGF}_0(\xi)$, are: \begin{eqnarray} \langle N_{\rm ph} \rangle&=&\frac{1}{\sqrt{\pi \gamma}}, \\ \langle\!\langle N_{\rm ph}^2\rangle\!\rangle&=&\frac{\pi-2}{2\pi \gamma}, \\ \langle\!\langle N_{\rm ph}^3\rangle\!\rangle&=&\frac{4-\pi}{2(\pi \gamma)^{3/2}}, \\ \langle\!\langle N_{\rm ph}^4\rangle\!\rangle&=&\frac{2(\pi-3)}{(\pi \gamma)^{ 2}}, \\ \langle\!\langle N_{\rm ph}^5\rangle\!\rangle&=&\frac{96-40\pi+3\pi^2}{4(\pi \gamma)^{5/2}}, \\ \langle\!\langle N_{\rm ph}^6\rangle\!\rangle&=&\frac{60(\pi-2)-7\pi^2}{(\pi \gamma)^{3}}. \end{eqnarray} From a numerical calculation it follows that the higher cumulants change sign; for instance, the 5th and 6th ones are negative. The non-vanishing of the cumulants with $n>2$ is a consequence of the fact that the photon probability distribution function is half of a Gaussian, because the integration variable $\Phi$ in (\ref{mgf-0}) is restricted to positive values. The Fourier transformation of the MGF provides the probability density of measuring $N_{\rm ph}$ photons on average, \begin{equation} \mathcal{P}(N_{\rm ph})= \int\limits_{-\infty}^{\infty} {\rm MGF}(\xi) e^{{\rm i} \xi N_{\rm ph}} d\xi. \label{p} \end{equation} Note that $\mathcal{P}$ is a non-zero function of the continuous variable $N_{\rm ph}$. This is due to the fact that $N_{\rm ph}$ is not an eigenvalue of the Hamiltonian (\ref{h-rwa}). Hence, non-integer values of $N_{\rm ph}$ are understood as thermodynamical averages. As long as the $\psi_n$-fluctuations are frozen out when the system is near the critical point and $N$ is large enough, one finds from (\ref{z}) and (\ref{p}) that the probability density is identical to the exponent in $Z$ (\ref{z}), $$ \mathcal{P}_0(N_{\rm ph})=2\pi \theta(N_{\rm ph})\frac{\exp[-\alpha N_{\rm ph}-\gamma N_{\rm ph}^2]}{Z(0)}. $$ In particular, at the critical point, ${\rm MGF}_0(\xi)$ from (\ref{mgf}) gives the distribution \begin{equation} \mathcal{P}_{\rm c}(N_{\rm ph})=\begin{cases} 4 \sqrt{\pi\gamma}\exp[-\gamma N_{\rm ph}^2], & \mbox{ if } N_{\rm ph}\geq 0, \\ 0, & \mbox{ if } N_{\rm ph}<0. \end{cases} \end{equation} This is half of a Gaussian for $N_{\rm ph}>0$, while for unphysical $N_{\rm ph}<0$ it vanishes. At the critical point (we assume below that $\Omega_{\rm c}=\omega$) the distribution's maximum is located at $N_{\rm ph}=0$. In the superradiant phase, the maximum of $\mathcal{P}(N_{\rm ph})$ is shifted to a non-zero value. In other words, for higher values $\Omega\gg\omega$ one obtains from $\ln [{\rm MGF}_0(\xi)]$ that, to leading order, $\langle N_{\rm ph}\rangle=\frac{N\omega^2}{2\Omega^2}$ and $\langle\!\langle N_{\rm ph}^2\rangle\!\rangle=\frac{NT\omega^3}{2\Omega^4}$. The higher cumulants are strongly suppressed exponentially: for instance, the third one is $\langle\!\langle N_{\rm ph}^3\rangle\!\rangle\sim e^{-N\frac{\omega}{T}}$.
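The critical-point cumulants listed above can be cross-checked symbolically by differentiating ${\rm CGF}_0(\xi)$; a short sketch using \texttt{sympy} (ours), whose output coincides with the quoted expressions up to algebraic simplification:
\begin{verbatim}
import sympy as sp

xi = sp.symbols('xi', real=True)
g = sp.symbols('gamma', positive=True)
CGF0 = -xi**2 / (4*g) + sp.log(sp.erfc(sp.I*xi / (2*sp.sqrt(g))))

def cumulant(n):
    # <<N_ph^n>> = i^n d^n CGF_0 / d xi^n at xi = 0
    return sp.simplify((sp.I**n * sp.diff(CGF0, xi, n)).subs(xi, 0))

for n in range(1, 5):
    print(n, cumulant(n))
# n = 1: 1/sqrt(pi*gamma),  n = 2: (pi - 2)/(2*pi*gamma),  etc.
\end{verbatim}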
\subsection{FCS for weak interaction and normal phase} \label{seq:normalphase} In this part we discuss the MGF in the normal phase and in the weak coupling limit. It is assumed that the system is far away from the fluctuational region, i.e., $\Omega_T\ll \omega $ (see Eq. (\ref{fluct-zone-normal})). Taking the limit $\gamma\to 0$ in (\ref{mgf}) one obtains the MGF for the normal phase of the Dicke model: \begin{equation} {\rm MGF}(\xi)=\frac{(\cosh\frac{\omega}{T}-\cosh\frac{\Omega_T}{T})e^{{\rm i}\xi /2}}{\cosh\!\Big[\!\frac{\omega}{T} +\frac{{\rm i}\xi}{2}\!\Big]-\cosh\sqrt{\frac{\Omega_T^2}{T^2}-\frac{\xi^2}{4}}} . \end{equation} In the decoupled limit, where the Rabi frequency is the smallest scale, $\Omega_T\ll \{T, \omega\}$, one arrives at the MGF of a free photon mode of frequency $\omega$, \begin{equation} {\rm MGF}(\xi )=\frac{1-e^{-\beta \omega }}{1-e^{-{\rm i}\xi-\beta\omega}}. \label{MGF-decoupled} \end{equation} Note that it is a $2\pi$-periodic function of the counting variable. The discrete Fourier transformation of (\ref{MGF-decoupled}) over the single period $[0;2\pi]$ yields the standard Gibbs distribution probabilities \begin{equation} P_{n}=(1-e^{-\beta\omega})e^{-n\beta\omega}, \quad n\geq 0. \end{equation} With the infinite-integral definition (\ref{p}) one would instead obtain delta peaks in the probability distribution density located at $N_{\rm ph}=n\geq 0$, the eigenvalues of the free photon mode Hamiltonian: $$\mathcal{P}(N_{\rm ph} )=\frac{1}{2\pi}\sum_{n\geq 0}P_n \delta(N_{\rm ph}-n).$$ Note that the cumulant generating function for the free mode is \begin{equation} {\rm CGF}(\xi )={\rm i}\frac{\xi}{2}-\ln \frac{\sinh\frac{\omega +{\rm i}\xi T}{2T}}{\sinh\frac{\omega }{2T}} \ . \label{CGF-rwa-0} \end{equation} The cumulants themselves are \begin{equation} \langle\!\langle N_{\rm ph}^n\rangle\!\rangle = \begin{cases} \frac{1}{2}\coth\frac{\omega }{2T} -\frac{1}{2}, & n=1; \\ \\ \frac{(-1)^{n-1}}{2^{n}}\left.\frac{\partial^{n-1} }{\partial x^{n-1}} \coth x \right|_{x=\frac{\omega }{2T}} , & n\geq 2 . \end{cases} \end{equation} One arrives at the above-mentioned Fano factor $ F_0=(1-e^{-\beta \omega })^{-1} $ from (\ref{F-0}) and at the relative fluctuations parameter $r_0=e^{ \beta \omega }$.
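These free-mode relations are straightforward to verify numerically from the Gibbs probabilities $P_n$; a short Python check (ours):
\begin{verbatim}
import numpy as np

beta_omega = 2.0                                         # beta*omega, test value
n = np.arange(0, 200)
P = (1 - np.exp(-beta_omega)) * np.exp(-n * beta_omega)  # Gibbs weights P_n
mean = (P * n).sum()
var = (P * n**2).sum() - mean**2
print(var / mean, 1 / (1 - np.exp(-beta_omega)))         # Fano factor F_0
print(var / mean**2, np.exp(beta_omega))                 # relative fluctuations r_0
\end{verbatim}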
\section{Conclusions}\label{sec:concl} In this work we addressed fluctuations near the superradiant transition, which is driven by the interaction between single-mode photons and a multi-qubit environment. In this consideration the collective Rabi frequency is varied (it can be close to the critical value of the superradiant transition), while the temperature $T $ is kept unchanged. We did not assume the thermodynamic limit of infinite qubit number $N$ and considered it as a large but finite value. Our analysis was focused on two types of competing fluctuations -- the thermal ones and those of the superradiant order parameter. This regime is opposite to the temperature-driven transition studied in Ref.~\cite{popov1988functional}. We used the Majorana fermion representation of the qubits' Pauli operators in order to formulate a path integral approach. Starting from the Dicke Hamiltonian, we demonstrated how one can derive the effective action for the photon mode, obtained by alternative fermionization techniques in Refs.~\cite{popov1988functional,eastham2006finite}. We then calculated the average photon number and its equilibrium fluctuations in terms of the effective action formalism. As a generalization, the full counting statistics, providing higher order cumulants of the photon number, was formulated. Most of the results of this paper address the low temperature regime and the resonance between the qubit and photon mode frequencies. It was shown that the Gaussian approximation for thermal fluctuations is exact and an analytical solution can be found if $\hbar \omega \gg k_{\rm B} T \gg \hbar \omega/N$. In this limit the critical value of the collective Rabi frequency is $\Omega_{\rm c}=\omega$ and the average photon number at this point is $\langle N_{\rm ph}\rangle = \sqrt{N k_{\rm B} T/(\pi\hbar\omega)}$. The relative fluctuations parameter $r_{\rm c}\equiv\langle\!\langle N_{\rm ph}^2 \rangle\!\rangle/ \langle N_{\rm ph} \rangle^2$, where the second cumulant is $\langle\!\langle N_{\rm ph}^2 \rangle\!\rangle=\langle N_{\rm ph}^2 \rangle-\langle N_{\rm ph} \rangle^2$, is universal at the critical point: $r_{\rm c} =\pi/2-1$. The domain near $\Omega_{\rm c}$ in the superradiant phase where $r$ is not suppressed corresponds to the fluctuational Ginzburg-Levanyuk region. The width of this frequency range is proportional to $\sqrt{\omega k_{\rm B} T/(\hbar N)}$; it is much smaller than $k_{\rm B}T$ and shrinks in the thermodynamic limit. Another characteristic, the Fano factor $F\equiv\langle\!\langle N_{\rm ph}^2 \rangle\!\rangle/ \langle N_{\rm ph} \rangle$, decreases from unity in the decoupled limit $\Omega\ll\Omega_{\rm c}$ to a minimum $F_{\rm min}<1$ at $\Omega\lesssim\Omega_{\rm c}$. The latter indicates a negative correlation between photons. A further increase of $\Omega$ up to the critical value results in a significant growth of the Fano factor to a maximum $F_{\rm c}\approx \langle N_{\rm ph}\rangle \gg 1$. This means significantly positive photon-photon correlations at the superradiant transition. There is a reentrance to negative correlations in the superradiant phase, as follows from the decay of the Fano factor above the critical point. As a generalization, in the opposite limit of a wide spectral distribution of the qubit environment we find $\langle N_{\rm ph}\rangle{\sim} \sqrt{k_{\rm B}T/\delta\epsilon} \ln\frac{\epsilon_{\rm c}}{\omega}$, where $\delta\epsilon$ and $\epsilon_{\rm c}$ are the average level spacing and the upper cut-off energy of the spectrum, respectively. For high temperatures, $k_{\rm B} T\gg \hbar \omega$, neglecting the non-Gaussian fluctuations of quasiparticles is valid for any $N$ -- in contrast to the low-temperature regime. The finiteness of the qubit number can change the behavior of fluctuations at the critical point. Namely, for $\sqrt{k_{\rm B} T/(\hbar\omega)} \gtrsim N $ the quasiparticle fluctuations become greater than those of the superradiant order parameter. This intermediate region shows a non-universal enhancement of $r_{\rm c}$, which reveals the two-level nature of the qubit environment. We believe that the above results can be of interest in the context of state-of-the-art hybrid systems and quantum metamaterials operating in the GHz frequency domain. The coupling constants $g$ in superconducting systems range from MHz to several GHz, demonstrating the realization of the ultra-strong coupling regime. The ratios $ g/\omega\sim 0.071$~\cite{Bosman2017}, $g/\omega\sim 0.6$~\cite{Andersen_2017,braumuller2017analog} and $g/\omega \sim 0.72 - 1.34$~\cite{Fumiki2016} have been demonstrated. Consequently, the critical qubit number $N_{\rm c}=(\omega/g)^2$ needed for turning on the superradiant transition can be around $10^0$ to $10^2$. Another possibility for the realization of the phase transition are hybrid systems with NV centers in diamond. Our estimates are based on Ref.~\cite{Putz2014}, where the individual coupling constant is $g\sim 10$ Hz and the number of NV centers is $N\sim 10^{12}$. The collective Rabi frequency $\Omega\sim 20$ MHz is two orders of magnitude less than the critical value $\Omega_{\rm c}\sim 2$ GHz and, according to our consideration, the system is in the normal phase. For the above value of $g$, the number $N$ should be increased by four orders of magnitude, up to the critical $N_{\rm c}\sim 10^{16}$, in order to reach the superradiant phase.
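These order-of-magnitude estimates can be reproduced with a few lines of arithmetic; a back-of-envelope Python check (ours):
\begin{verbatim}
import numpy as np

# critical qubit number N_c = (omega/g)^2 for the quoted coupling ratios
for g_over_omega in [0.071, 0.6, 1.34]:
    print(g_over_omega, (1 / g_over_omega)**2)      # ~198, ~2.8, ~0.56

# NV-center estimate: g ~ 10 Hz, N ~ 1e12
g, N = 10.0, 1e12
print(g * np.sqrt(N) / 1e6, "MHz")                  # Omega ~ 10 MHz
print((2e9 / g)**2)                                 # N_c ~ 4e16 for Omega_c ~ 2 GHz
\end{verbatim}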
\section{Acknowledgments}\label{sec:ackn} The research was funded by the Russian Science Foundation under Grant No. 16-12-00095. The authors thank Andrey A. Elistratov for fruitful discussions.
\section{Introduction} A tangent of a polygon is a line touching the polygon such that all of the polygon lies on the same side of the line. An outer common tangent of two polygons is a tangent of both polygons such that the polygons lie on the same side of the tangent. Two disjoint polygons have exactly two outer common tangents unless their convex hulls are nested. If they are properly nested, there is no outer common tangent. In this paper, we study the problem of computing the outer common tangents of two disjoint simple polygons, each given as a read-only array of its corners in cyclic order. We give an algorithm computing the outer common tangents in linear time using only a constant number of variables each storing a boolean value or an index of a corner in the array. We are therefore working in the \emph{constant workspace model} of computation. The constant workspace model is a restricted version of the RAM model in which the input is read-only, the output is write-only, and only $O(\log n)$ additional bits of \emph{workspace} (with both read and write access) are available, where $n$ is the size of the input. Clearly, $\Omega(\log n)$ bits in the workspace are necessary to solve any interesting computational problem, because that many bits are required to store an index of or a pointer to an entry in the input. Since blocks of $\Theta(\log n)$ bits are considered to form \emph{words} in the memory, algorithms in the constant workspace model use $O(1)$ words of memory, which explains the name of the model. The practical relevance of studying problems in the constant workspace model is increasing, as there are many current and emerging memory technologies where writing can be much more expensive than reading in terms of time and energy \cite{Carson:EECS-2015-163}. The constant workspace model was first studied explicitly for geometric problems by Asano et~al.\ \cite{asano2}. Recently, there has been growing interest in algorithms for geometric problems using constant or restricted workspace, see for instance \cite{abrahamsen2013, asano1, barba2014space, barba2, darwish2014optimal, harpeled2015, korman2015time}. The problem of computing common tangents of two polygons has received most attention in the case that the polygons are convex. For instance, computing the outer common tangents of disjoint convex polygons is used as a subroutine in the classical divide-and-conquer algorithm for the convex hull of a set of $n$ points in the plane due to Preparata and Hong \cite{preparata1977}. They give a naive linear-time algorithm for outer common tangents, as it suffices for an $O(n\log n)$-time convex hull algorithm. The problem is also considered in various dynamic convex hull algorithms \cite{brodal2002, hershberger1992, overmars1981}. Overmars and van Leeuwen \cite{overmars1981} give an $O(\log n)$-time algorithm for computing an outer common tangent of two disjoint convex polygons when a separating line is known, where each polygon has at most $n$ corners. Kirkpatrick and Snoeyink \cite{kirkpatrick19952} give an $O(\log n)$-time algorithm for the same problem but without using a separating line. Guibas et~al.\ \cite{guibas1991} give a lower bound of $\Omega(\log^2 n)$ on the time required to compute an outer common tangent of two intersecting convex polygons, even if they are known to intersect in at most two points. They also describe an algorithm achieving that bound. 
Toussaint \cite{toussaint1983} considers the problem of computing separating common tangents of convex polygons and notes that the problem occurs in problems related to visibility, collision avoidance, range fitting, etc. He gives a linear-time algorithm. Guibas et~al.\ \cite{guibas1991} give an $O(\log n)$-time algorithm for the same problem. All the above-mentioned algorithms with sublinear running times make essential use of the convexity of the polygons. If the polygons are not convex, a linear-time algorithm can be used to compute the convex hulls before computing the tangents \cite{melkman1987}. However, if the polygons are given in read-only memory, $\Omega(n)$ extra bits are required to store the convex hulls, so this approach does not work in the constant workspace model. Abrahamsen \cite{abrahamsen2015} gives a linear-time constant-workspace algorithm to compute the outer common tangents of two simple polygons whose convex hulls are disjoint. In this paper, we show that the same is possible as long as the polygons (but not necessarily their convex hulls) are disjoint. The algorithm is only slightly different from the one in \cite{abrahamsen2015}, but its proof of correctness requires much more effort. In particular, the proof relies on an intricate continuous analysis of the algorithm. Previously, it was not even clear whether to expect the existence of a linear-time constant-workspace algorithm that does not require the convex hulls to be disjoint, because it happens quite often that a computational problem exhibits different behavior for disjoint polygons and for polygons that are not disjoint. For instance, as mentioned above, the outer common tangents of two disjoint convex polygons can be computed in time $O(\log n)$, while doing the same for two convex polygons that intersect in two points requires time $\Omega(\log^2 n)$. A separating common tangent of two polygons is a tangent of both polygons such that the polygons lie on the opposite sides of the tangent. Two disjoint polygons have exactly two separating common tangents provided that their convex hulls are disjoint. If the convex hulls intersect properly, there is no separating common tangent. Abrahamsen \cite{abrahamsen2015} describes a linear-time constant-workspace algorithm that computes the separating common tangents of two simple polygons. In particular, it detects whether the convex hulls of two simple polygons are disjoint. Our current algorithm can decide whether the convex hulls of two simple polygons are nested, which happens when it is unable to find an outer common tangent. To the best of our knowledge, this was not known to be possible in linear time and constant workspace prior to this work. Our algorithm and the algorithm from \cite{abrahamsen2015} together enable us to determine, for two disjoint simple polygons in general position, the full relation between their convex hulls (whether they are nested, overlapping, or disjoint) in linear time and constant workspace. It remains open whether an outer common tangent of two polygons that are not disjoint can be found in linear time using constant workspace. \section{Terminology and Notation} For any two points $a$ and $b$ in the plane, the closed line segment with endpoints $a$ and $b$ is denoted by $ab$. When $a\neq b$, the straight line containing $a$ and $b$ that is infinite in both directions is denoted by $\mathcal L(a,b)$, and the ray starting at $a$ and going through $b$ is denoted by $\mathcal R(a,b)$.
For three points $a$, $b$, and $c$, consider the line $\mathcal L(a,b)$ as oriented from $a$ towards $b$, and define $\mathcal T(a,b,c)$ to be $1$ if $c$ lies to the left of $\mathcal L(a,b)$, $0$ if $a$, $b$, $c$ are collinear, and $-1$ if $c$ lies to the right of $\mathcal L(a,b)$. Let $\LHP(a,b)$ denote the closed half-plane lying to the left of $\mathcal L(a,b)$ and $\RHP(a,b)$ denote the closed half-plane lying to the right of $\mathcal L(a,b)$. A \emph{simple polygon}, or just a \emph{polygon}, with \emph{corners} $x_0,\ldots,x_{n-1}$ is a closed polygonal curve in the plane composed of $n$ \emph{edges} $x_0x_1,\ldots,x_{n-2}x_{n-1},x_{n-1}x_0$ such that the segments have no common points other than the common endpoints of pairs of consecutive edges. The region of the plane bounded by a polygon $P$ (including $P$ itself) is a \emph{polygonal region}. Assume for the rest of this paper that $P_0$ and $P_1$ are two disjoint simple polygons with $n_0$ and $n_1$ corners, respectively. (We allow one of $P_0$ and $P_1$ to be contained in the ``interior region'' of the other -- in that case our algorithm will report that the convex hulls are nested and no outer common tangent exists.) Assume that $P_k$ is defined by a read-only array of its corners $\pp k0,\pp k1,\ldots,\pp k{n_k-1}$ for $k\in\{0,1\}$. Assume further, without loss of generality, that the corners of $P_0$ are given in counterclockwise order and the corners of $P_1$ are given in clockwise order. (The orientation of a polygon can be easily tested in linear time using constant workspace, and the algorithm can choose to traverse the polygon forwards or backwards, accordingly.) Finally, assume that the corners are in general position in the sense that $P_0$ and $P_1$ have no corners in common and the combined set of corners $\{\pp 00,\ldots,\pp 0{n_0-1},\pp 10,\ldots,\pp 1{n_1-1}\}$ contains no triple of collinear points. Indices of the corners of $P_k$ are considered modulo $n_k$, so that $\pp ki$ and $\pp kj$ denote the same corner when $i\equiv j\pmod{n_k}$. For $a,b\in P_k$, the \emph{chain} $P_k[a,b]$ is the portion of $P_k$ from $a$ to $b$ in the order assigned to $P_k$ (counterclockwise for $P_0$, clockwise for $P_1$). If $i$ and $j$ are indices of corners on $P_k$, we write $P_k[i,j]$ to denote $P_k[\pp ki,\pp kj]$. A \emph{tangent} of $P_k$ is a line $\ell$ such that $\ell$ and $P_k$ are not disjoint and $P_k$ is contained in one of the closed half-planes determined by $\ell$. The line $\ell$ is a \emph{common tangent} of $P_0$ and $P_1$ if it is a tangent of both $P_0$ and $P_1$. A common tangent is an \emph{outer common tangent} if $P_0$ and $P_1$ are on the same side of the tangent, otherwise the common tangent is \emph{separating}. \begin{figure}% \centering \begin{minipage}[b][2.5in][b]{2in}% \centering \input{allTangents.tex} \caption{The convex hulls are disjoint -- separating and outer common tangents exist.}% \label{allTangents} \end{minipage}% \qquad \begin{minipage}[b][2.5in][b]{1.5in}% \centering \input{spirals.tex} \caption{The convex hulls overlap -- only outer common tangents exist.}% \label{spirals} \end{minipage}% \qquad \begin{minipage}[b][2.5in][b]{1.3in}% \centering \input{noComTan.tex} \caption{The convex hulls are nested -- no common tangents exist.}% \label{noComTan} \end{minipage}% \end{figure} For a simple polygon $P$, let $\mathcal H(P)$ denote the convex hull of $P$. The following lemma asserts well-known properties of common tangents of polygons. See Figures \ref{allTangents}--\ref{noComTan}. 
\begin{lemma}\label{folklore} A line is a tangent of a polygon\/ $P$ if and only if it is a tangent of\/ $\mathcal H(P)$. Under our general position assumptions, the following holds. If one of\/ $\mathcal H(P_0)$ and\/ $\mathcal H(P_1)$ is completely contained in the other, there are no outer common tangents of\/ $P_0$ and\/ $P_1$. Otherwise, there are two or more, and there are exactly two if\/ $P_0$ and\/ $P_1$ are disjoint. If\/ $\mathcal H(P_0)$ and\/ $\mathcal H(P_1)$ are not disjoint, there are no separating common tangents of\/ $P_0$ and\/ $P_1$. Otherwise, there are exactly two. \end{lemma} \section{Algorithm} Let the outer common tangents of $P_0$ and $P_1$ be defined by pairs of corners $(\ell_0,\ell_1)$ and $(r_0,r_1)$ so that $\ell_0,r_0\in P_0$, $\ell_1,r_1\in P_1$, and $P_0,P_1\subset\LHP(\ell_0,\ell_1)\cap\RHP(r_0,r_1)$. Algorithm~\ref{alg1} returns a pair of indices $(s_0,s_1)$ such that $(r_0,r_1)=(\pp 0{s_0},\pp 1{s_1})$ or, if the convex hulls of $P_0$ and $P_1$ are nested so that the tangents do not exist, the algorithm reports that by returning $\ttt{nested}$. Finding $(\ell_0,\ell_1)$ requires running Algorithm~\ref{alg1} with the roles of $P_0$ and $P_1$ interchanged and with the orders of the corners of $P_0$ and $P_1$ reversed -- each array reference $\pp ki$ is translated to $\pp{1-k}{-i}$ for $k\in\{0,1\}$, and the returned result is $(s_1,s_0)$ such that $(\ell_0,\ell_1)=(\pp 0{s_0},\pp 1{s_1})$. \begin{algorithm}[t] \LinesNumbered \DontPrintSemicolon \SetArgSty{} \SetKwIF{If}{ElseIf}{Else}{if}{}{else if}{else}{end if} \SetKwFor{While}{while}{}{end while} $s_0\gets 0$;\quad $v_0\gets 0$;\quad $b_0\gets \ttt{false}$;\quad $s_1\gets 0$;\quad $v_1\gets 0$;\quad $b_1\gets \ttt{false}$;\quad $u\gets 0$\;\nllabel{init} \While{$s_0<2n_0$ and $s_1<2n_1$ and ($v_0<s_0+n_0$ or $v_1<s_1+n_1$)} {\nllabel{while} $v_u\gets v_u+1$\; \If {$\mathcal T(\pp 0{s_0},\pp 1{s_1},\pp u{v_u})=1$} {\nllabel{testSide} \If {$\pp {1-u}{s_{1-u}}\in\Delta(\pp u{s_u},\pp u{v_u-1},\pp u{v_u})$} {\nllabel{testTriangle} $b_u\gets \ttt{true}$\;\nllabel{setB} } \If {not $b_u$} { $s_u\gets v_u$;\quad $v_{1-u}\gets s_{1-u}$;\quad $b_{1-u}\gets \ttt{false}$\;\nllabel{update} } } $u\gets 1-u$\; } \If {$s_0\geq 2n_0$ or $s_1\geq 2n_1$ or $b_0$ or $b_1$} {\nllabel{testReturn} \Return {$\ttt{nested}$}\;\nllabel{returnNested} } \Return {$(s_0,s_1)$}\;\nllabel{returnRes} \caption{$\ttt{OuterCommonTangent}(P_0,P_1)$} \label{alg1} \end{algorithm} The algorithm maintains a pair of indices $(s_0,s_1)$ which determines the \emph{tangent candidate} $\mathcal L(\pp 0{s_0},\pp 1{s_1})$. Starting from $(s_0,s_1)=(0,0)$ and advancing the indices $s_0$, $s_1$ appropriately, the algorithm attempts to reach a situation that $(\pp 0{s_0},\pp 1{s_1})=(r_0,r_1)$, that is, $P_0,P_1\subset\RHP(\pp 0{s_0},\pp 1{s_1})$. At the start and after each update to $(s_0,s_1)$, the algorithm traverses $P_0$ and $P_1$ in parallel with indices $(v_0,v_1)$, starting from $(v_0,v_1)=(s_0,s_1)$ and advancing $v_0$ and $v_1$ alternately. The variable $u\in\{0,1\}$ determines the polygon $P_u$ in which we advance the traversal in a given iteration. If the test in line \ref{testSide} happens to be positive, then the corner $\pp u{v_u}$ lies on the ``wrong side'' of the tangent candidate, witnessing $P_u\not\subset\RHP(\pp 0{s_0},\pp 1{s_1})$. 
In that case, the algorithm updates the tangent candidate by setting $s_u\gets v_u$ and reverts $v_{1-u}$ back to $s_{1-u}$ in line \ref{update}, unless a special boolean variable $b_u$ is set, which we will comment on shortly. The reason for reverting $v_{1-u}$ back to $s_{1-u}$ in line \ref{update} is that a corner of $P_{1-u}$ which was on the correct side of the tangent candidate before the update to $s_u$ can be on the wrong side of the tangent candidate after the update to $s_u$, and then it needs to be traversed again in order to be detected. The algorithm returns $(s_0,s_1)$ in line \ref{returnRes} when it has traversed both polygons entirely with indices $v_0$ and $v_1$ after the last updates to $s_0$ and $s_1$ without detecting any corner on the wrong side of the tangent candidate. That can happen only when $P_0,P_1\subset\RHP(\pp 0{s_0},\pp 1{s_1})$. See Figure~\ref{runningEx} for an example of how the algorithm proceeds. \begin{figure} \centering \input{runningEx.tex} \caption{An example of how Algorithm~\ref{alg1} finds the outer common tangent $\mathcal L(c,h)$ of $P_0$ and $P_1$. The start points are $(\pp 00,\pp 10)=(a,e)$. The gray dashed line segments are the segments $\pp 0{s_0}\pp 1{s_1}$ on the various tangent candidates. In the $11$th iteration, an update makes $(\pp 0{s_0},\pp 1{s_1})=(b,f)$, so the tangent candidate becomes the dotted line $\mathcal L(b,f)$. In the $19$th iteration, $u=0$ and $\pp 0{v_0}=d$, so $b_0$ is set to $\ttt{true}$. In the $28$th iteration, $u=1$ and $\pp 1{v_1}=g$, and therefore $b_0$ is cleared. In the $31$st iteration, an update makes $(\pp 0{s_0},\pp 1{s_1})=(c,h)$ and the outer common tangent has been found.} \label{runningEx} \end{figure}
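For concreteness, Algorithm~\ref{alg1} admits a direct transcription into Python. The sketch below (helper names are ours) assumes the input conventions stated above: $P_0$ and $P_1$ are disjoint, given as lists of coordinate pairs in counterclockwise and clockwise order, respectively, with corners in general position. The helper \texttt{turn} evaluates the orientation predicate $\mathcal T$ as the sign of a cross product.
\begin{verbatim}
def turn(a, b, c):
    # T(a, b, c): +1 if c lies to the left of the oriented line through
    # a and b, -1 if to the right, 0 if the three points are collinear
    x = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (x > 0) - (x < 0)

def in_triangle(p, a, b, c):
    # p lies in the filled triangle Delta(a, b, c); under the general
    # position assumption, p is strictly inside or strictly outside
    s = turn(a, b, c)
    return turn(a, b, p) == s and turn(b, c, p) == s and turn(c, a, p) == s

def outer_common_tangent(P0, P1):
    # Returns (s0, s1) with P0, P1 contained in RHP(P0[s0], P1[s1]),
    # or None if the convex hulls are nested ('nested' in Algorithm 1)
    P, n = [P0, P1], [len(P0), len(P1)]
    s, v, b, u = [0, 0], [0, 0], [False, False], 0
    while s[0] < 2 * n[0] and s[1] < 2 * n[1] and \
            (v[0] < s[0] + n[0] or v[1] < s[1] + n[1]):
        v[u] += 1
        if turn(P[0][s[0] % n[0]], P[1][s[1] % n[1]], P[u][v[u] % n[u]]) == 1:
            if in_triangle(P[1 - u][s[1 - u] % n[1 - u]], P[u][s[u] % n[u]],
                           P[u][(v[u] - 1) % n[u]], P[u][v[u] % n[u]]):
                b[u] = True
            if not b[u]:
                s[u] = v[u]
                v[1 - u] = s[1 - u]
                b[1 - u] = False
        u = 1 - u
    if s[0] >= 2 * n[0] or s[1] >= 2 * n[1] or b[0] or b[1]:
        return None
    return s[0] % n[0], s[1] % n[1]
\end{verbatim}
Finding $(\ell_0,\ell_1)$ then amounts to calling the same routine with the roles of the polygons interchanged and the orders of their corners reversed, as described above.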
In the test in line \ref{testTriangle}, $\Delta(a,b,c)$ denotes the filled triangle with corners $a$, $b$, $c$. If that test is positive, then $\pp {1-u}{s_{1-u}}$ belongs to the convex hull of $P_u$, so $\pp {1-u}{s_{1-u}}\neq r_{1-u}$. In that case, the boolean variable $b_u$ is set, and then it prevents any updates to $s_u$ in line \ref{update} until it is cleared after a later update to $s_{1-u}$ in line \ref{update}. It will be shown in the proof of Lemma~\ref{b0b1False} that such an update to $s_{1-u}$ must occur if the convex hulls of $P_0$ and $P_1$ are not nested. The main effort in proving correctness of Algorithm~\ref{alg1} lies in the following lemma, which is proved in Section~\ref{secProof}. \begin{lemma}\label{mainLemma} If the outer common tangents of\/ $P_0$ and\/ $P_1$ exist, then the loop in line \ref{while} of Algorithm~\ref{alg1} ends with\/ $s_0<2n_0$ and\/ $s_1<2n_1$. \end{lemma} The above implies that the algorithm ends up returning $(s_0,s_1)$ in line \ref{returnRes} provided that $b_0=b_1=\ttt{false}$ when the loop in line \ref{while} ends (this will be proved in Lemma~\ref{b0b1False}). To explain the role of the special variables $b_0$ and $b_1$, suppose temporarily that the conditions $s_0<2n_0$ and $s_1<2n_1$ are omitted from the test in line \ref{while}. If we were making the updates in line \ref{update} regardless of the current values of $b_0$ and $b_1$, the algorithm could never stop making updates to $s_0$ and $s_1$ even if the outer common tangents exist (see \cite{abrahamsen2015} for an example of such behavior). In particular, Lemma~\ref{mainLemma} would no longer be true. On the other hand, if the convex hulls of $P_0$ and $P_1$ are nested, then one of the following happens: \begin{itemize} \item the algorithm never stops making updates to $s_0$ and $s_1$, \item one of $b_0$, $b_1$, say $b_k$, is $\ttt{true}$ and the algorithm has traversed $P_{1-k}$ entirely with the index $v_{1-k}$ after the last update to $s_{1-k}$ without detecting any corner on the wrong side of the tangent candidate. \end{itemize} In both cases, taking the conditions $s_0<2n_0$ and $s_1<2n_1$ in line \ref{while} back into account, the algorithm reports that the convex hulls of $P_0$ and $P_1$ are nested in line \ref{returnNested}. \begin{lemma}\label{b0b1False} If the outer common tangents of\/ $P_0$ and\/ $P_1$ exist, then the loop in line \ref{while} of Algorithm~\ref{alg1} ends with\/ $b_0=b_1=\ttt{false}$. \end{lemma} \begin{proof} We prove a slightly stronger statement, namely, that at most one of $b_0$ and $b_1$ can be $\ttt{true}$ at a time, and if one of $b_0$ and $b_1$ is $\ttt{true}$, then it will be cleared subsequently. Hence, the algorithm cannot terminate with $b_0=\ttt{true}$ or $b_1=\ttt{true}$. Consider an iteration $i$ of the loop in line \ref{while} which leads to changing the value of $b_0$ from $\ttt{false}$ to $\ttt{true}$ in line \ref{setB}. By induction, we can assume that $b_1=\ttt{false}$. Since the test in line \ref{testTriangle} is positive, the edge $P_0[v_0-1,v_0]$ intersects $\mathcal L(\pp 0{s_0},\pp 1{s_1})$ at a point $x$ such that $\pp 1{s_1}$ lies on the segment $\pp 0{s_0}x$. Moreover, $P_0[\pp 0{s_0},x]\subset\RHP(\pp 0{s_0},\pp 1{s_1})$, since otherwise $b_0$ would have been set earlier. Let $y$ be the first corner of $P_1$ after $\pp 1{s_1}$ such that $y\notin\RHP(\pp 0{s_0},\pp 1{s_1})$. Such a corner exists, since otherwise $P_1$ would be contained in the convex hull of $P_0$. It follows that the test in line \ref{testSide} will be positive in the first iteration $j$ after $i$ in which $u=1$ and $\pp 1{v_1}=y$. The edge $P_1[v_1-1,v_1]$ intersects $\mathcal L(\pp 0{s_0},\pp 1{s_1})$ at a point on the segment $\pp 0{s_0}x$, and hence the test in line \ref{testTriangle} is negative in iteration $j$. Therefore, $b_0$ is cleared and we again have $b_0=b_1=\ttt{false}$. The same argument shows that $b_1$ will be cleared after being set. \end{proof} \begin{theorem}\label{mainThm} Algorithm~\ref{alg1} is correct, runs in linear time, and uses constant workspace. Specifically, if the outer common tangents exist, then Algorithm~\ref{alg1} returns a pair of indices\/ $(s_0,s_1)$ such that\/ $(r_0,r_1)=(\pp 0{s_0},\pp 1{s_1})$, that is, $P_0,P_1\subset\RHP(\pp 0{s_0},\pp 1{s_1})$. Otherwise, the algorithm returns\/ $\ttt{nested}$. \end{theorem} \begin{proof} First, suppose the algorithm returns $(s_0,s_1)$ in line \ref{returnRes}. Consider the final values of $s_0$, $s_1$, $b_0$ and $b_1$. Due to the test in line \ref{testReturn}, we have $s_0<2n_0$, $s_1<2n_1$, and $b_0=b_1=\ttt{false}$, so the loop in line \ref{while} has ended because $v_0\geq s_0+n_0$ and $v_1\geq s_1+n_1$. After the last update to $(s_0,s_1)$, the test in line \ref{testSide} has been performed for every $v_0\in\{s_0+1,\ldots,s_0+n_0\}$ and every $v_1\in\{s_1+1,\ldots,s_1+n_1\}$ and was negative -- otherwise a further update would have been performed in line \ref{update}, as $b_0=b_1=\ttt{false}$. This shows that $P_0,P_1\subset\RHP(\pp 0{s_0},\pp 1{s_1})$. Now, suppose that the outer common tangents exist.
By Lemma~\ref{mainLemma} and Lemma~\ref{b0b1False}, the loop in line \ref{while} ends with $s_0< 2n_0$, $s_1< 2n_1$, and $b_0=b_1=\ttt{false}$. Hence $(s_0,s_1)$ is returned in line \ref{returnRes}. In view of the discussion above, this proves correctness of the algorithm. It is clear that Algorithm~\ref{alg1} uses constant workspace. For the running time, note that if an update to $(s_0,s_1)$ happens in iteration $i$, the sum $s_0+s_1$ is increased by at least $\frac{i-j}2$, where $j$ is the number of the previous iteration in which an update to $(s_0,s_1)$ happened, or $j=0$ if there has been no update before. By induction, we see that there have been at most $2(s_0+s_1)$ iterations until an update to $(s_0,s_1)$. Suppose first that $s_0<2n_0$ and $s_1<2n_1$ when the loop in line \ref{while} terminates. There have been at most $4(n_0+n_1)$ iterations until the final update to $(s_0,s_1)$. Thereafter, at most $2\max\{n_0,n_1\}\leq 2(n_0+n_1)$ iterations follow until $v_0\geq s_0+n_0$ and $v_1\geq s_1+n_1$, when the loop in line \ref{while} terminates. Hence, there are at most $6(n_0+n_1)$ iterations in total. Now, suppose that $s_0\geq 2n_0$ or $s_1\geq 2n_1$ when the loop terminates. By the same argument, the second to last update to $(s_0,s_1)$ happens after at most $4(n_0+n_1)$ iterations, after which at most $2(n_0+n_1)$ iterations follow until the last update to $(s_0,s_1)$. The loop is terminated immediately after the last update. Hence, we get the same bound of $6(n_0+n_1)$ iterations. Clearly, each iteration takes constant time, so the total running time of the algorithm is linear. \end{proof} \section{Proof of Lemma~\ref{mainLemma}}\label{secProof} For our analysis, it will be convenient to imagine the execution of Algorithm~\ref{alg1} in continuous time. By considering various discrete events happening during the continuous execution of the algorithm, we are able to prove the invariant stated in Lemma~\ref{mainLemma}. \subsection{Additional Terminology and Notation} For $U\subseteq\mathbb R^2$, let $\mathcal F(U)$ denote the set of compact subsets of $U$. By an \emph{interval}, we mean a bounded interval of real numbers. We allow an interval to be closed or open at each endpoint independently. We shall consider functions defined on an interval $I$ with the following sets (or their subsets) as codomains: $\mathbb R$ with the standard metric, $\mathbb R^2$ with the Euclidean metric, $\mathcal F(\mathbb R^2)$ with the Hausdorff metric, a set $\mathcal S$ of functions with the discrete metric, and the power set $2^{\mathcal S}$ of such a set, again with the discrete metric. The only purpose of these metrics is to have a suitable notion of convergence. We think of the domain $I$ as \emph{time}. If $f$ is a function with domain $I$ and $I'$ is a subinterval of $I$, then $f\restriction I'$ denotes the restriction of $f$ to $I'$. For a function $f\colon I\to X$, where $X$ is (a subset of) one of the codomains above, a point in time $t\in I$ is a \emph{discontinuity} of $f$ if $f$ is not continuous at $t$. We write \begin{itemize} \item $f(\mathord\nearrow\, t^\star)$ to denote the limit of $f(t)$ as $t\to t^\star$ from below, where $t^\star\in I\setminus\{\inf I\}$, \item $f(\mathord\searrow\, t^\star)$ to denote the limit of $f(t)$ as $t\to t^\star$ from above, where $t^\star\in I\setminus\{\sup I\}$.
\end{itemize} If the limits $f(\mathord\nearrow\, t^\star)$ exist for all $t^\star\in I\setminus\{\inf I\}$ and the limits $f(\mathord\searrow\, t^\star)$ exist for all $t^\star\in I\setminus\{\sup I\}$, then we say that $f$ has \emph{one-sided limits}. Each of the functions $f$ that we consider has one-sided limits and finitely many discontinuities. Note that $f$ has a discontinuity at a point in time $t\in I$ if and only if $f(\mathord\nearrow\, t)\neq f(t)$ or $f(\mathord\searrow\, t)\neq f(t)$. A function $f\colon I\to\mathcal F(U)$, where $U\subseteq\mathbb R^2$, is \emph{monotonically decreasing} if $f(t)\supseteq f(t')$ for any $t,t'\in I$ such that $t<t'$. \begin{lemma}\label{monotone} Let\/ $I$ be an interval and\/ $f\colon I\to\mathcal F(U)$ be a function with one-sided limits and finitely many discontinuities, where\/ $U\subseteq\mathbb R^2$. Suppose\/ $f\restriction I'$ is monotonically decreasing for every subinterval\/ $I'\subseteq I$ such that\/ $f\restriction I'$ is continuous on\/ $I'$. Furthermore, suppose that \begin{itemize} \item $f(\mathord\nearrow\, t)\supseteq f(t)$ for any\/ $t\in I\setminus\{\inf I\}$ such that\/ $f(\mathord\nearrow\, t)\neq f(t)$, \item $f(t)\supseteq f(\mathord\searrow\, t)$ for any\/ $t\in I\setminus\{\sup I\}$ such that\/ $f(t)\neq f(\mathord\searrow\, t)$. \end{itemize} Then\/ $f$ is monotonically decreasing in the entire domain\/ $I$. \end{lemma} \begin{proof} Let $t_1<\cdots<t_n$ be the discontinuities of $f$. Let $t,t'\in I$ and $t<t'$. If there is no $i$ with $t\leq t_i\leq t'$, then $f\restriction[t,t']$ is continuous, so it follows from the assumption that $f(t)\supseteq f(t')$. Otherwise, let $i$ be minimum and $j$ be maximum such that $t\leq t_i\leq t_j\leq t'$. If $t<t_i$, then the assumptions yield $f(t)\supseteq f(\mathord\nearrow\, t_i)\supseteq f(t_i)$. Similarly, the assumptions yield $f(t_k)\supseteq f(\mathord\searrow\, t_k)\supseteq f(\mathord\nearrow\, t_{k+1})\supseteq f(t_{k+1})$ for $k\in\{i,\ldots,j-1\}$, and $f(t_j)\supseteq f(\mathord\searrow\, t_j)\supseteq f(t')$ if $t_j<t'$. Thus $f(t)\supseteq f(t')$. \end{proof} \subsection{Continuous Interpretation of the Algorithm}\label{secContinuous} Let $m$ denote the number of iterations of the loop in line \ref{while} performed by Algorithm~\ref{alg1}. For $i\in\{0,\ldots,m\}$ and $k\in\{0,1\}$, let $v_k(i)$ and $s_k(i)$ denote the values of $v_k$ and $s_k$, respectively, after $i$ iterations of the loop. In particular, $v_k(0)=s_k(0)=0$. For $x\in\mathbb R\setminus\mathbb Z$, let $\pp kx$ denote the interpolated point $(\lceil x\rceil-x)\pp k{\lfloor x\rfloor}+(x-\lfloor x\rfloor)\pp k{\lceil x\rceil}$ on the edge $P_k[\lfloor x\rfloor,\lceil x\rceil]$. We extend the functions $s_0$ and $s_1$ to the real interval $[0,m]$ as follows. We imagine that the $i$th iteration of the loop in line \ref{while} starts at time $i-1$ and ends at time $i$, and during that iteration $v_u$ grows continuously from $v_u(i-1)=v_u(i)-1$ to $v_u(i)$. Thus we define $v_u(t)=v_u(i)-i+t$ for $t\in(i-1,i)$. Suppose that the update in line \ref{update} is to be performed in the $i$th iteration. If $s_u(i-1)=v_u(i-1)$, then all of the edge $P_u[v_u(i-1),v_u(i)]$ is in $\LHP(\pp 0{s_0(i-1)},\pp 1{s_1(i-1)})$. We therefore imagine that the update happens at time $i-1$ and then $s_u$ grows continuously together with $v_u$ up to $v_u(i)$; thus we define $s_u(t)=v_u(t)$ and $v_{1-u}(t)=s_{1-u}(i-1)$ for $t\in(i-1,i)$.
If $s_u(i-1)<v_u(i-1)$, then the edge $P_u[v_u(i-1),v_u(i)]$ intersects the tangent candidate at a point $\pp u{v_u(t^\star)}$, where $t^\star\in(i-1,i)$. We therefore imagine that the update in line \ref{update} happens at time $t^\star$ and then $s_u$ grows continuously together with $v_u$ up to $v_u(i)$; thus we define \begin{itemize} \item $s_u(t)=s_u(i-1)$ and $v_{1-u}(t)=v_{1-u}(i-1)$ for $t\in(i-1,t^\star]$, \item $s_u(t)=v_u(t)$ and $v_{1-u}(t)=s_{1-u}(i-1)$ for $t\in(t^\star,i)$, \end{itemize} and we say that $s_u$ \emph{jumps} from $s_u(t^\star)$ to $v_u(t^\star)=s_u(\mathord\searrow\, t^\star)$ at time $t^\star$. Finally, in either case, we define $s_{1-u}(t)=s_{1-u}(i-1)$ for $t\in(i-1,i)$. The functions $s_0,s_1\colon[0,m]\to\mathbb R$ thus defined are nondecreasing, have one-sided limits and finitely many discontinuities, and are left-continuous, that is, $s_0(\mathord\nearrow\, t)=s_0(t)$ and $s_1(\mathord\nearrow\, t)=s_1(t)$ for every $t\in(0,m]$. We have also defined functions $v_0,v_1\colon[0,m]\to\mathbb R$, but we are not going to use them any more. \begin{observation}\label{rotateObs} At any point in time during the execution of the continuous version of Algorithm~\ref{alg1}, at most one of\/ $s_0$, $s_1$ is changing. The tangent candidate\/ $\mathcal L(\pp 0{s_0},\pp 1{s_1})$ either is not moving, or is turning continuously counterclockwise around\/ $\pp 0{s_0}$ (when\/ $s_1$ is changing), or is turning continuously clockwise around\/ $\pp 1{s_1}$ (when\/ $s_0$ is changing). \end{observation} The following is trivial if $s_k(t)=s_k(\mathord\searrow\, t)$ and otherwise is a direct consequence of the test in line \ref{testTriangle} and of the fact that the update in line \ref{update} is only performed when $b_u=\ttt{false}$. \begin{observation}\label{chainObs} If\/ $t\in[0,m)$ and\/ $k\in\{0,1\}$, then\/ $\pp k{s_k(\mathord\searrow\, t)}\in\mathcal R(\pp {1-k}{s_{1-k}(t)},\pp k{s_k(t)})$ and\/ $P_k[s_k(t),s_k(\mathord\searrow\, t)]\subset\RHP(\pp 0{s_0(t)},\pp 1{s_1(t)})$. \end{observation} \subsection{Auxiliary Structure on the Polygons}\label{secAuxiliary} In this subsection, we introduce some auxiliary concepts used in the proof of Lemma~\ref{mainLemma}. They are defined in terms of the polygons $P_0$, $P_1$ only and are independent of the algorithm. Assume for this entire subsection that the convex hulls of $P_0$ and $P_1$ are not nested. Thus there are two outer common tangents -- let them be given by points $\ell_0,r_0\in P_0$ and $\ell_1,r_1\in P_1$ such that $P_0,P_1\subset\LHP(\ell_0,\ell_1)\cap\RHP(r_0,r_1)$. Let $L=\ell_0\ell_1$ and $R=r_0r_1$. Let $E$ be the polygonal region bounded by the chains $P_0[\ell_0,r_0]$, $P_1[\ell_1,r_1]$ and by the segments $L$, $R$. Since $P_0$ is oriented counterclockwise and $P_1$ clockwise, the interiors of $P_0$ and $P_1$ lie outside $E$. \begin{lemma}\label{doorPoints} Every segment\/ $xy$ such that\/ $xy\cap P_0=\{x\}$ and\/ $xy\cap P_1=\{y\}$ is contained in\/ $E$. \end{lemma} \begin{proof} The set $E\setminus(P_0[\ell_0,r_0]\cup P_1[\ell_1,r_1])$ separates $P_0$ and $P_1$ in $\LHP(\ell_0,\ell_1)\cap\RHP(r_0,r_1)$, so it contains a point $z$ in common with the segment $xy$. If $z\in L$ or $z\in R$, then $xy=\ell_0\ell_1$ or $xy=r_0r_1$, respectively, so $xy$ lies in $E$. So suppose $z$ is in the interior of $E$. The segment $zx$ cannot cross the boundary of $E$ at any point other than $x$, and $zy$ at any point other than $y$. This shows that $xy$ lies in $E$. 
\end{proof} \begin{figure} \centering \input{sigmaAndDoors.tex} \caption{The doors are the five dashed segments on $S=q_0q_1$: $D_4$, $D_5$, $D_3$, $D_1$, $D_2$ in the order from $q_0$ to $q_1$. The weights of the doors are $2$, $1$, $1$, $-1$, $0$, respectively. $D_3=y_0y_1$ is the primary door. The boundary of the primary region $E'=E_3\cup E_4\cup E_5$ is drawn with thick lines.} \label{sigmaAndDoors} \end{figure} See Figure~\ref{sigmaAndDoors}. Let $q_0\in P_0$ and $q_1\in P_1$ be fixed points such that at least one of $q_0$, $q_1$ is a corner of the respective polygon $P_0$ or $P_1$. Let $S=q_0q_1$. We consider the segment $S$ as oriented from $q_0$ to $q_1$, so that we can speak of the \emph{left side} of $S$, $\LHP(q_0,q_1)$, and the \emph{right side} of $S$, $\RHP(q_0,q_1)$. A \emph{door} is a subsegment $xy$ of $S$ such that $xy\cap P_k=\{x\}$ and $xy\cap P_{1-k}=\{y\}$ for some $k\in\{0,1\}$. By Lemma~\ref{doorPoints}, every door is contained in $E$. A \emph{fence} is a subsegment $xy$ of $S$ such that $xy\subset E$, $xy\cap P_k=\{x,y\}$, and $xy\cap P_{1-k}=\emptyset$ for some $k\in\{0,1\}$. Exceptionally, when $S$ contains an edge $xy$ of $P_k$, we call the whole edge $xy$ a fence. Since at least one of $q_0$, $q_1$ is a corner, the latter is possible only when $x=q_k$ or $y=q_k$. Let $\mathcal D$ be the set of all doors defined by the fixed points $q_0$ and $q_1$. Figure~\ref{sigmaAndDoors} also illustrates the following lemma. \begin{lemma}\label{order} The doors in\/ $\mathcal D$ can be ordered as\/ $D_1,\ldots,D_d$ so that if\/ $D_i\cap P_0=\{x_i\}$ and\/ $D_i\cap P_1=\{y_i\}$ for\/ $i\in\{1,\ldots,d\}$, then \begin{itemize} \item the order of points along\/ $P_0[\ell_0,r_0]$ is\/ $\ell_0,x_1,\ldots,x_d,r_0$ (with possible coincidences), \item the order of points along\/ $P_1[\ell_1,r_1]$ is\/ $\ell_1,y_1,\ldots,y_d,r_1$ (with possible coincidences). \end{itemize} The doors partition\/ $E$ into polygonal regions\/ $E_0,\ldots,E_d$ such that \begin{itemize} \item $E_0$ is bounded by\/ $L$, $P_0[\ell_0,x_1]$, $D_1$ and\/ $P_1[\ell_1,y_1]$ (it is degenerate when\/ $D_1=L$), \item $E_i$ is bounded by\/ $D_i$, $P_0[x_i,x_{i+1}]$, $D_{i+1}$ and\/ $P_1[y_i,y_{i+1}]$, for\/ $i\in\{1,\ldots,d-1\}$, \item $E_d$ is bounded by\/ $D_d$, $P_0[x_d,r_0]$, $R$ and\/ $P_1[y_d,r_1]$ (it is degenerate when\/ $D_d=R$). \end{itemize} \end{lemma} \begin{proof} Suppose there are doors $xy,x'y'\in\mathcal D$ such that $x$ is strictly before $x'$ on $P_0[\ell_0,r_0]$ while $y'$ is strictly before $y$ on $P_1[\ell_1,r_1]$. It follows that the clockwise order of the four points along the boundary of $E$ is $x,x',y,y'$ and no two of these points coincide. By Lemma~\ref{doorPoints}, both $xy$ and $x'y'$ lie in $E$, so they must cross at a point different from their endpoints, which is a contradiction. This shows that the order of endpoints of the doors along $P_0[\ell_0,r_0]$ agrees with that along $P_1[\ell_1,r_1]$, which proves the first statement. The second statement is a straightforward corollary to the first. \end{proof} From now on, we use $D_1,\ldots,D_d$ to denote the doors in their order according to Lemma~\ref{order}, and we use $E_0,\ldots,E_d$ to denote the regions defined in Lemma~\ref{order}. Recall that we consider $S$ as a segment oriented from $q_0$ to $q_1$. Every door inherits that orientation, so that we can speak of the left side and the right side of the door. 
Taking into account that the regions $E_{i-1}$ and $E_i$ lie on opposite sides of $D_i$, we classify each door $D_i$ as \begin{itemize} \item a \emph{right-door} if $E_{i-1}$ lies to the right and $E_i$ lies to the left of $D_i$ (in particular, if $D_i=L$), \item a \emph{left-door} if $E_{i-1}$ lies to the left and $E_i$ lies to the right of $D_i$ (in particular, if $D_i=R$). \end{itemize} \begin{lemma}\label{noBadChain} Consider a chain\/ $P_k[a,b]$, where\/ $k\in\{0,1\}$. If\/ $P_k[a,b]\cap S=\{a,b\}$ and\/ $P_k[a,b]\subset\RHP(q_0,q_1)$, then all doors contained in the segment\/ $ab$ occur in pairs of a left-door followed by a right-door, consecutive in the order on\/ $\mathcal D$. \end{lemma} \begin{proof} Consider the polygonal region $F$ bounded by the chain $P_k[a,b]$ and by the segment $ab$. It follows that $F\subset\RHP(q_0,q_1)$. Each of the regions $E_0,\ldots,E_d$ lies either inside or outside $F$, where $E_0$ and $E_d$ lie outside $F$. Each region $E_i$ lying inside $F$ connects the door $D_i$, which is therefore a left-door, and the door $D_{i+1}$, which is therefore a right-door. \end{proof} So far, we have been considering $q_0$ and $q_1$ as fixed points. Now, we allow them to change in time. Specifically, let $I$ be a real interval that can be open or closed at each endpoint independently, and consider $q_0$ and $q_1$ as continuous functions $q_0\colon I\to P_0$ and $q_1\colon I\to P_1$. This way $S$ becomes a continuous function $S\colon I\to\mathcal F(\mathbb R^2)$. Furthermore, suppose at least one of $q_0(t)$, $q_1(t)$ is a corner of the respective polygon for every $t\in I$, so that $S(t)$ can contain at most one other corner (by the general position assumption). Let $X(t)$ denote the set of intersection points of $S(t)$ with $P_0\cup P_1$. In the exceptional case that $S(t)$ contains an edge of $P_0$ or $P_1$, we only include the endpoints of the edge in $X(t)$. The points in $X(t)$ are changing continuously except that an intersection point appears or disappears at a point in time $t\in I$ when $S(t)$ sweeps over a corner both of whose incident edges lie on the same side of $S(t)$. Note that since the corners of $P_0$ and $P_1$ are assumed to be in general position and one of $q_0$ and $q_1$ is a corner, at most one point can appear in or disappear from $X(t)$ at any point in time. The doors are changing continuously except when one of the following \emph{door events} happens as a point appears in or disappears from $X(t)$: \begin{enumerate} \item a fence splits into two doors, \item two doors merge into a fence, \item a door splits into a smaller door and a fence, \item a door and a fence merge into a larger door. \end{enumerate} Specifically, every door $D$ can be represented as a continuous function $D\colon I_D\to\mathcal F(\mathbb R^2)$, where $I_D$ is a subinterval of $I$ (open or closed at each endpoint independently) such that \begin{enumerate} \item if $t=\inf I_D\in I_D$, then an endpoint of $D(t)$ is in $X(t)$ but not in $X(\mathord\nearrow\, t)$, \item if $t=\sup I_D\in I_D$, then an endpoint of $D(t)$ is in $X(t)$ but not in $X(\mathord\searrow\, t)$, \item if $t=\sup I_D\notin I_D$, then an interior point of $D(\mathord\nearrow\, t)$ is in $X(t)$ but not in $X(\mathord\nearrow\, t)$, \item if $t=\inf I_D\notin I_D$, then an interior point of $D(\mathord\searrow\, t)$ is in $X(t)$ but not in $X(\mathord\searrow\, t)$. 
\end{enumerate} At any point in time $t\in I$, the set of doors $\mathcal D(t)$ consists of the doors $D$ such that $t\in I_D$ ordered according to Lemma~\ref{order}. The following observation, a straightforward consequence of Lemma~\ref{order}, summarizes how $\mathcal D(t)$ and the order on $\mathcal D(t)$ are changing in time. \begin{observation}\label{doorObs} The set\/ $\mathcal D(t)$ and the order on\/ $\mathcal D(t)$ are constant in time intervals where no door event happens. A door event at time\/ $t$ makes the following change to\/ $\mathcal D(t)$: \begin{enumerate} \item if a fence splits into two doors\/ $D$ and\/ $D'$, then\/ $D$ and\/ $D'$ are added to\/ $\mathcal D(\mathord\nearrow\, t)$ as consecutive doors to form\/ $\mathcal D(t)$, \item if two doors\/ $D$ and\/ $D'$ merge into a fence, then\/ $D$ and\/ $D'$ are consecutive in\/ $\mathcal D(t)$ and they are removed from $\mathcal D(t)$ to form\/ $\mathcal D(\mathord\searrow\, t)$, \item if a door\/ $D$ splits into a smaller door\/ $D'$ and a fence, then\/ $D$ is replaced by\/ $D'$ in\/ $\mathcal D(\mathord\nearrow\, t)$ to form\/ $\mathcal D(t)$, \item if a door\/ $D$ and a fence merge into a larger door\/ $D'$, then\/ $D$ is replaced by\/ $D'$ in\/ $\mathcal D(t)$ to form\/ $\mathcal D(\mathord\searrow\, t)$. \end{enumerate} In case of door events 1 and 2, the two doors\/ $D$ and\/ $D'$ are, in their order in\/ $\mathcal D(t)$, \begin{itemize} \item a right-door followed by a left-door if the edges incident to\/ $w$ lie to the right of\/ $S(t)$, \item a left-door followed by a right-door if the edges incident to\/ $w$ lie to the left of\/ $S(t)$, \end{itemize} where\/ $w$ denotes the corner that triggers the event (i.e., the corner that appears in or disappears from\/ $X(t)$ at time\/ $t$). In case of door events 3 and 4, the door\/ $D'$ keeps the left/right-door status of\/ $D$. The left/right-door status of every door\/ $D$ remains constant over the entire time interval\/ $I_D$. \end{observation} Now, consider $q_0$ and $q_1$ again as fixed points. Recall that $D_1,\ldots,D_d$ denote the doors in their order according to Lemma~\ref{order}. We define the \emph{weight} $W(D_i)$ of every door $D_i$ by induction, as follows: $$ W(D_1)=\begin{cases} 1&\text{if $D_1$ is a right-door,}\\ -1&\text{if $D_1$ is a left-door,} \end{cases}\quad W(D_i)=\begin{cases} W(D_{i-1})+1&\text{if $D_i$ is a right-door,}\\ W(D_{i-1})-1&\text{if $D_i$ is a left-door,} \end{cases} $$ for $i\in\{2,\ldots,d\}$. See Figure~\ref{sigmaAndDoors}. The following is a direct consequence of Observation~\ref{doorObs}. \begin{observation}\label{weightObs} When\/ $q_0\colon I\to P_0$, $q_1\colon I\to P_1$ are continuous functions, every door\/ $D\colon I_D\to\mathcal F(\mathbb R^2)$ maintains constant weight over the entire time interval\/ $I_D$. Furthermore, the function\/ $W^\star\colon I\to\mathbb Z$ defined so that\/ $W^\star(t)$ is the weight of the last door in the order on\/ $\mathcal D(t)$ is constant over the entire time interval\/ $I$. \end{observation} \begin{lemma}\label{invWIM} For any fixed points\/ $q_0$, $q_1$, there is at least one door with weight\/ $1$. \end{lemma} \begin{proof} The statement is obvious if $q_0=\ell_0$ and $q_1=\ell_1$, because in that case there is just one door $L$, which is a right-door by definition, so it has weight $1$. 
To prove the lemma in general, let $I=[0,1]$ and (abusing notation) consider arbitrary continuous functions $q_0\colon I\to P_0$ and $q_1\colon I\to P_1$ such that $q_0(0)=\ell_0$, $q_1(0)=\ell_1$, and $q_0(1)$, $q_1(1)$ are the points $q_0$, $q_1$ fixed in the statement of the lemma. By Observation~\ref{weightObs}, the function $W^\star\colon I\to\mathbb Z$ is constant over $I$, so $W^\star(1)=W^\star(0)=1$ as observed above. This shows that the last door in the order on $\mathcal D(1)$ has weight $1$. \end{proof} For any fixed points $q_0$, $q_1$, let the \emph{primary door} $D'$ be the first door with weight $1$ in the order on $\mathcal D$. Such a door always exists due to Lemma~\ref{invWIM}. \begin{observation}\label{primaryObs} The primary door\/ $D'$ is a right-door and is not immediately preceded by a left-door in the order on\/ $\mathcal D$. \end{observation} Let $y_0$ and $y_1$ denote the endpoints of $D'$ so that $y_0\in P_0$ and $y_1\in P_1$. Let $Y_0=P_0[y_0,r_0]$ and $Y_1=P_1[y_1,r_1]$. Finally, let the \emph{primary region} $E'$ be defined as the polygonal region determined by $D'$, $Y_0$, $R$ and $Y_1$. See Figure~\ref{sigmaAndDoors}. \begin{observation}\label{regionObs} If\/ $D'=D_i$, then\/ $E'$ is the union of\/ $E_i,\ldots,E_d$. In particular, $E'$ contains the doors\/ $D_{i+1},\ldots,D_d$. By Observation~\ref{primaryObs}, the region\/ $E'$ meets\/ $D'$ from the left. \end{observation} \subsection{Back to the Algorithm} We recall the functions $s_0,s_1\colon[0,m]\to\mathbb R$ describing the execution of Algorithm~\ref{alg1} as explained in Section \ref{secContinuous}, and we define functions $q_0\colon[0,m]\to P_0$ and $q_1\colon[0,m]\to P_1$ as follows: $$ q_0(t)=\pp 0{s_0(t)},\quad q_1(t)=\pp 1{s_1(t)}\quad\text{for }t\in[0,m]. $$ They have the property that at least one of $q_0(t)$, $q_1(t)$ is a corner at any point in time $t\in[0,m]$. Some other objects that have been defined in Section \ref{secAuxiliary} based on fixed points $q_0$, $q_1$ now become functions of time $t\in[0,m]$: the segment $S$, the primary door $D'$, the points $y_0$, $y_1$, the chains $Y_0$, $Y_1$, and the primary region $E'$. The functions $q_0$ and $q_1$ have finitely many discontinuities -- the points in time $t\in[0,m)$ when the respective $s_k$ jumps from $s_k(t)$ to $s_k(\mathord\searrow\, t)$. It is also clear that they have one-sided limits, since the functions $s_0$ and $s_1$ are bounded and piecewise monotone. It follows that the functions $D'\colon[0,m]\to\mathcal F(\mathbb R^2)$, $y_0\colon[0,m]\to P_0$, $y_1\colon[0,m]\to P_1$, $Y_0\colon[0,m]\to\mathcal F(P_0)$, $Y_1\colon[0,m]\to\mathcal F(P_1)$, and $E'\colon[0,m]\to\mathcal F(\mathbb R^2)$ also have one-sided limits and finitely many discontinuities, which arise from discontinuities of $q_0$, $q_1$ and from door events in between. The following lemma is the heart of the proof of correctness of the algorithm. Informally speaking, it asserts that the primary region $E'$ can only shrink in time, since the primary door $D'$ always sweeps continuously into or jumps into $E'$. \begin{lemma}\label{doorsWalk} The functions\/ $Y_0$ and\/ $Y_1$ are monotonically decreasing. \end{lemma} \begin{proof} First, we let $I$ be an arbitrary subinterval of $[0,m]$ in which $q_0$ and $q_1$ are continuous, and we prove the lemma for functions restricted to $I$: $y_0\restriction I$, $y_1\restriction I$, $Y_0\restriction I$ and $Y_1\restriction I$. 
Following the convention from Section \ref{secAuxiliary}, we consider doors as continuous functions $D\colon I_D\to\mathcal F(\mathbb R^2)$ with $I_D\subseteq I$ and, for $t\in I$, we let $\mathcal D(t)$ denote the set of doors $D$ such that $t\in I_D$. Accordingly, we redefine $D'(t)$ to denote the function $D\colon I_D\to\mathcal F(\mathbb R^2)$ that is chosen as the primary door at time $t\in I$. By Observation~\ref{weightObs}, every door $D\colon I_D\to\mathcal F(\mathbb R^2)$ maintains constant weight over the entire time interval $I_D$. By Observation~\ref{doorObs}, the only possible changes to $\mathcal D$ and to the order on $\mathcal D$ over time interval $I$ are that doors are being added to or removed from $\mathcal D$. Therefore, any change to the choice of the primary door can only occur at a point in time $t\in I$ when a door event happens; moreover, the primary door $D'(t)$ must participate in that event, that is, if $D'(t)=D$, then $t=\inf I_D$ or $t=\sup I_D$. Consider an interval $I'\subseteq I$ over which the choice of the primary door remains constant, that is, there is a door $D\colon I_D\to\mathcal F(\mathbb R^2)$ such that $I'\subseteq I_D$ and $D'(t)=D$ for every $t\in I'$. Since $D$ is a continuous function, so are the functions $y_0\restriction I'$, $y_1\restriction I'$, $Y_0\restriction I'$ and $Y_1\restriction I'$. Furthermore, it follows from Observation~\ref{rotateObs} that the segment $S$ is constant or is sweeping continuously to the left at any point in time $t\in I$. By Observation~\ref{regionObs}, $D$ can only be moving towards the interior of $E'$ in time interval $I'$. This shows that $E'\restriction I'$ and hence $Y_0\restriction I'$ and $Y_1\restriction I'$ are monotonically decreasing functions. In view of Lemma~\ref{monotone}, to complete the proof that $Y_0\restriction I$ and $Y_1\restriction I$ are monotonically decreasing, it remains to prove that \begin{itemize} \item $Y_0(\mathord\nearrow\, t)\supseteq Y_0(t)$ and $Y_1(\mathord\nearrow\, t)\supseteq Y_1(t)$ whenever $D'(\mathord\nearrow\, t)\neq D'(t)$, for $t\in I\setminus\{\inf I\}$, \item $Y_0(\mathord\searrow\, t)\subseteq Y_0(t)$ and $Y_1(\mathord\searrow\, t)\subseteq Y_1(t)$ whenever $D'(\mathord\searrow\, t)\neq D'(t)$, for $t\in I\setminus\{\sup I\}$. \end{itemize} We consider the kinds of door events as identified in Section \ref{secAuxiliary}, looking for events happening at time $t\in I$ that result in a primary door being added to or removed from $\mathcal D$. \begin{enumerate} \item A fence splits into two doors. Since $S(t)$ can only be sweeping to the left, both polygon edges incident to the corner triggering that event lie to the left of $S(t)$. Therefore, by Observation~\ref{doorObs}, the two doors are a left-door followed by a right-door in the order on $\mathcal D(t)$. Consequently, by Observation~\ref{primaryObs}, neither of the two doors can be primary. \item Two doors merge into a fence. If $D'(t)$ is one of the two doors, then the choice of the primary door changes to some door $D\in\mathcal D(\mathord\searrow\, t)\subset\mathcal D(t)$ that is after $D'(t)$ in the order on $\mathcal D(t)$. By Lemma~\ref{order}, the endpoints $y_0(\mathord\searrow\, t)$ and $y_1(\mathord\searrow\, t)$ of $D(t)$ lie on $Y_0(t)$ and $Y_1(t)$, respectively, so $Y_0(\mathord\searrow\, t)\subseteq Y_0(t)$ and $Y_1(\mathord\searrow\, t)\subseteq Y_1(t)$ as required. \item A door splits into a smaller door and a fence. 
It follows from Observation~\ref{doorObs} that the door added to $\mathcal D(t)$ maintains the weight of the door removed from $\mathcal D(\mathord\nearrow\, t)$. Therefore, assuming $D'(t)\neq D'(\mathord\nearrow\, t)$, $D'(t)$ is the door added to $\mathcal D(t)$ and $D'(\mathord\nearrow\, t)$ is the one removed from $\mathcal D(\mathord\nearrow\, t)$. Let $w\in P_k$ denote the corner that triggers the event, where $k\in\{0,1\}$. It follows that $y_{1-k}(t)=y_{1-k}(\mathord\nearrow\, t)$, so $Y_{1-k}(t)=Y_{1-k}(\mathord\nearrow\, t)$. Since $w=y_k(t)$, we need to prove that $w\in Y_k(\mathord\nearrow\, t)$. Let $D=D'(\mathord\nearrow\, t)$ and let $t_0$ be a value in $I_D\cap I$ such that $t_0<t$ and no door event happens in time interval $[t_0,t)$. For every $t' \in [t_0, t)$, let $\varphi(t')$ be the point on $D(t')$ closest to $w$. Since $D$ moves continuously, $\varphi$ is a continuous curve. Since $\varphi(t') \in E'(t')$ and, as shown before, $E'(t') \subset E'(t_0)$ for every $t' \in [t_0,t)$, the curve $\varphi$ must be contained in $E'(t_0)$. Since $w\in D(\mathord\nearrow\, t)$, we have $\varphi(\mathord\nearrow\, t)=w$. Furthermore, $E'(t_0)$ is a closed set, and hence $w\in E'(t_0)$. The interior of $E'(t_0)$ is disjoint from $P_0$ and $P_1$, so $w$ must be a corner on the chain $Y_k(t_0)=P_k[y_k(t_0),r_k]$. Since $w=y_k(t)$, it follows that $Y_k(t)\subseteq Y_k(t_0)$. By letting $t_0$ approach $t$ from below, we get $Y_k(t)\subseteq Y_k(\mathord\nearrow\, t)$. \item A door and a fence merge into a larger door. Again, it follows from Observation~\ref{doorObs} that the door added to $\mathcal D(\mathord\searrow\, t)$ maintains the weight of the door removed from $\mathcal D(t)$. Therefore, assuming $D'(t)\neq D'(\mathord\searrow\, t)$, $D'(t)$ is the door removed from $\mathcal D(t)$ and $D'(\mathord\searrow\, t)$ is the one added to $\mathcal D(\mathord\searrow\, t)$. Let $w\in P_k$ denote the corner that triggers the event, where $k\in\{0,1\}$. It follows that $y_{1-k}(t)=y_{1-k}(\mathord\searrow\, t)$, so $Y_{1-k}(t)=Y_{1-k}(\mathord\searrow\, t)$. We make an argument similar to the one in the above case to show that $Y_k(\mathord\searrow\, t)\subseteq Y_k(t)$, but using reversed time. Let $D=D'(\mathord\searrow\, t)$ and let $t_0$ be a value in $I_D\cap I$ such that $t_0>t$ and no door event happens in time interval $(t,t_0]$. For every $t' \in (t,t_0]$, let $\varphi(t')$ be the point on $D(t')$ closest to $w$. Since $D$ moves continuously, $\varphi$ is a continuous curve. For every $t'\in [0,m]$, let $F(t')$ be the polygonal region bounded by $P_0[\ell_0,y_0(t')]$, the primary door at time $t'$, $P_1[\ell_1,y_1(t')]$, and the segment $L=\ell_0\ell_1$. Thus $F(t')$ is a sort of complementary region to $E'(t')$ in the region $E$. Since $E'$ is monotonically decreasing on $(t,t_0]$ as shown before, $F$ is monotonically increasing on $(t,t_0]$. Therefore, since $\varphi(t') \in F(t')$, $\varphi$ must be contained in $F(t_0)$. Since $w\in D(\mathord\searrow\, t)$, we have $\varphi(\mathord\searrow\, t)=w$. Furthermore, $F(t_0)$ is a closed set, and hence $w\in F(t_0)$. The interior of $F(t_0)$ is disjoint from $P_0$ and $P_1$, so $w$ must be a corner on the chain $P_k[\ell_k,y_k(t_0)]$. Since $w=y_k(t)$, it follows that $P_k[\ell_k,y_k(t)]\subseteq P_k[\ell_k,y_k(t_0)]$. By letting $t_0$ approach $t$ from above, we get $P_k[\ell_k,y_k(t)]\subseteq P_k[\ell_k,y_k(\mathord\searrow\, t)]$. 
Hence, $y_k(\mathord\searrow\, t)$ is on the chain $Y_k(t)=P_k[y_k(t),r_k]$ and therefore $Y_k(\mathord\searrow\, t)\subseteq Y_k(t)$. \end{enumerate} Now, we return to the general case of functions $y_0$, $y_1$, $Y_0$ and $Y_1$ defined on the entire interval $[0,m]$. Consider a point in time $t\in[0,m)$ that is a discontinuity of $q_k$, where $k\in\{0,1\}$. That is, $s_k$ jumps from $s_k(t)$ to $s_k(\mathord\searrow\, t)$ at time $t$. We shall see that the jump of $s_k$ has no effect on the choice of the primary door. By Observation~\ref{chainObs}, the point $\pp k{s_k(\mathord\searrow\, t)}=q_k(\mathord\searrow\, t)$ lies on the ray $\mathcal R(q_{1-k}(t),q_k(t))$ and the chain $P_k[q_k(t),q_k(\mathord\searrow\, t)]$ belongs to $\RHP(q_0(t),q_1(t))$. Let $\mathcal D(t)$ and $\mathcal D(\mathord\searrow\, t)$ denote the sets of doors as defined for the segments $S(t)$ and $S(\mathord\searrow\, t)$, respectively. We shall prove that the primary door with respect to $\mathcal D(t)$ (i.e., defined for $S(t)$) is the same as with respect to $\mathcal D(\mathord\searrow\, t)$ (i.e., defined for $S(\mathord\searrow\, t)$). Suppose $q_k(\mathord\searrow\, t)$ is on the segment $S(t)$. It follows that $S(\mathord\searrow\, t)\subset S(t)$, $\mathcal D(\mathord\searrow\, t)\subseteq\mathcal D(t)$, and $\mathcal D(t)\setminus\mathcal D(\mathord\searrow\, t)$ is the set of doors on $S(t)\setminus S(\mathord\searrow\, t)$. Since $P_k[q_k(t),q_k(\mathord\searrow\, t)]\subset\RHP(q_0(t),q_1(t))$, it follows from Lemma~\ref{noBadChain} that the doors in $\mathcal D(t)\setminus\mathcal D(\mathord\searrow\, t)$ occur in pairs of a left-door followed by a right-door, consecutive in the order on $\mathcal D(t)$. Therefore, the weights of every door $D\in\mathcal D(\mathord\searrow\, t)$ with respect to the sets of doors $\mathcal D(\mathord\searrow\, t)$ and $\mathcal D(t)$ are equal. By Observation~\ref{primaryObs}, none of the doors in $\mathcal D(t)\setminus\mathcal D(\mathord\searrow\, t)$ can be primary with respect to $\mathcal D(t)$, so the primary door is the same with respect to $\mathcal D(t)$ as with respect to $\mathcal D(\mathord\searrow\, t)$. Now, suppose $q_k(\mathord\searrow\, t)$ is not on the segment $S(t)$. It follows that $S(t)\subset S(\mathord\searrow\, t)$, $\mathcal D(t)\subseteq\mathcal D(\mathord\searrow\, t)$, and $\mathcal D(\mathord\searrow\, t)\setminus\mathcal D(t)$ is the set of doors on $S(\mathord\searrow\, t)\setminus S(t)$. An argument analogous to that for $q_k(\mathord\searrow\, t)\in S(t)$ above shows that the primary door is the same with respect to $\mathcal D(\mathord\searrow\, t)$ as with respect to $\mathcal D(t)$. To conclude, let $t_0=0$, $t_1,\ldots,t_{n-1}$ be the discontinuities of $q_0$ or $q_1$ ordered so that $t_1<\cdots<t_{n-1}$, and $t_n=m$, and consider the closed intervals $I_i=[t_{i-1},t_i]$ for $i\in\{1,\ldots,n\}$. Fix an index $i$ and consider the restrictions $q_0\restriction I_i$ and $q_1\restriction I_i$. At most one of them, say $q_k\restriction I_i$, is not continuous, and in that case its only discontinuity is $t_{i-1}$. As we have proved above, if we redefine $q_k(t_{i-1})$ by letting $q_k(t_{i-1})=q_k(\mathord\searrow\, t_{i-1})$, the primary door $D'(t_{i-1})$ does not change, but then $q_k\restriction I_i$ becomes continuous. 
Therefore, what we have proved for restrictions of $Y_0$ and $Y_1$ to subintervals $I\subseteq[0,m]$ such that $q_0\restriction I$ and $q_1\restriction I$ are continuous implies that $Y_0\restriction I_i$ and $Y_1\restriction I_i$ are monotonically decreasing, for every $i\in\{1,\ldots,n\}$. The assumptions of Lemma~\ref{monotone} are satisfied for $Y_0$ and $Y_1$, so $Y_0$ and $Y_1$ are monotonically decreasing in the entire domain $[0,m]$. \end{proof} We are now ready to prove Lemma~\ref{mainLemma}. Using the continuous interpretation of the algorithm, it can be rephrased as follows. \begin{lemma} For any\/ $t\in[0,m]$, we have\/ $0\leq s_0(t)<2n_0$ and\/ $0\leq s_1(t)<2n_1$. \end{lemma} \begin{proof} We only present the proof of the bound on $s_0(t)$. That for $s_1(t)$ is analogous. Let $c_0(0)$ be the unique real in the interval $[0,n_0)$ such that $y_0(0)=\pp 0{c_0(0)}$. Let $\hat c_0$ be the unique real in the interval $[c_0(0),c_0(0)+n_0)$ such that $r_0=\pp 0{\hat c_0}$. By Lemma~\ref{doorsWalk}, for $t\in(0,m]$, there is a unique real $c_0(t)\in [c_0(0),\hat c_0]$ such that $y_0(t)=\pp 0{c_0(t)}$, and this defines a nondecreasing function $c_0\colon[0,m]\to\mathbb R$ with one-sided limits and finitely many discontinuities. Obviously, $0\leq s_0(t)$ and $c_0(t)\leq\hat c_0<c_0(0)+n_0<2n_0$. It remains to prove $s_0(t)\leq c_0(t)$ for $t\in[0,m]$. We first prove that for every $t\in[0,m)$ with $s_0(t)\leq c_0(t)$, there is $\epsilon>0$ such that $s_0(t')\leq c_0(t')$ for all $t'\in [t,t+\epsilon)$. Let $t\in[0,m)$ be such that $s_0(t)\leq c_0(t)$. First, suppose $s_0$ is continuous and either constant or strictly increasing on some interval $[t,t+\epsilon)$ with $\epsilon>0$. If $s_0(t)<c_0(t)$, then the statement is clear, so suppose $s_0(t)=c_0(t)$. If $s_0$ is constant on $[t,t+\epsilon)$, then the statement is clear, as $c_0$ is nondecreasing. If $s_0$ is strictly increasing on $[t,t+\epsilon)$, we either have $c_0(t')=s_0(t')$ for $t'\in[t,t+\epsilon')$, for some $\epsilon'\in(0,\epsilon]$, or $c_0$ jumps at time $t$ to a higher value, that is, $c_0(t)<c_0(\mathord\searrow\, t)$. In both cases, the statement holds. Now, suppose $s_0$ jumps at time $t$, that is, $s_0(t)<s_0(\mathord\searrow\, t)$. By choosing $\epsilon>0$ small enough, we can assume that $s_0$ is continuous on the interval $(t,t+\epsilon)$ and that the points $\{\pp 0{s_0(t')}\colon t'\in(t,t+\epsilon)\}$ are a part of one edge $e$ of $P_0$, which also contains the point $\pp 0{s_0(\mathord\searrow\, t)}$. By Observation~\ref{chainObs}, we have $P_0[s_0(t),s_0(\mathord\searrow\, t)]\subset\RHP(\pp 0{s_0(t)},\pp 1{s_1})$. This and the facts that $\pp 0{c_0(t')}\in S(t')$ and $S(t')\cap\RHP(\pp 0{s_0(t)},\pp 1{s_1})=\{\pp 1{s_1}\}$ imply $c_0(t')>s_0(\mathord\searrow\, t)$, for every $t'\in(t,t+\epsilon)$. We conclude that for every $t'\in(t,t+\epsilon)$, either $c_0(t')=s_0(t')$ or $\pp 0{c_0(t')}$ is on an edge of $P_0$ other than $e$, in which case $c_0(t')>s_0(t')$. We now return to proving that $s_0(t)\leq c_0(t)$ for every $t\in[0,m]$. Suppose the contrary, and let $t^\star=\inf\{t\in[0,m]\colon s_0(t)>c_0(t)\}$. In view of the discussion above, we must have $s_0(t^\star)>c_0(t^\star)$. Then $t^\star>0$, because $c_0(0)\geq 0=s_0(0)$. By the definition of $s_0$, we have $s_0(\mathord\nearrow\, t^\star)=s_0(t^\star)> c_0(t^\star)\geq c_0(\mathord\nearrow\, t^\star)$. This contradicts the definition of $t^\star$. \end{proof}
\section{Introduction}\label{Sect1} The theory of gradient flows in metric spaces was initiated by De Giorgi and collaborators \cite{DeGiorgiMarinoTosques80}, \cite{DeGiorgi93} (see also the more recent \cite{AmbrosioGigliSavare08}): a basic feature of the approach is to provide a very general existence theory (at this level uniqueness is typically lost) requiring neither curvature assumptions on the space nor semiconvexity of the functional. In this setting gradient flow trajectories $(x_t)$ of ${\sf E}$ (or curves of maximal slope) are defined by imposing the maximal rate of dissipation \[ \frac{\d}{\d t}{\sf E}(x_t)=-|\dot x_t|^2=-|\partial^-{\sf E}|^2(x_t),\qquad a.e.\ t, \] where here $|\dot x_t|$ is the metric speed of the curve (see Theorem \ref{thm:ms}) and $|\partial^-{\sf E}|$ is the slope of ${\sf E}$ (see \eqref{eq:defsl}). It was later understood (\cite{AmbrosioGigliSavare08}, \cite{AmbrosioGigliSavare11-2}, \cite{Gigli12}, \cite{OP17}, \cite{MS20}) that if ${\sf E}$ is $\lambda$-convex and the metric space has some form of Hilbert-like structure at small scales, then an equivalent formulation can be given via the so-called Evolution Variational Inequality \begin{equation} \frac{\d}{\d t}\frac{{\sf d}^2(x_t,y)}2+{\sf E}(x_t)+\frac\lambda2{\sf d}^2(x_t,y)\leq {\sf E}(y)\qquad a.e.\ t \tag{EVI} \end{equation} for any choice of point $y$ in the space. See Theorem \ref{thm:GFdef} for the precise definitions and \cite{MS20} for a thorough study of the EVI condition. The geometry of the metric space and the convexity properties of the functional under consideration greatly affect the kind of results one can obtain for gradient flows. For the purpose of this manuscript, the works \cite{Mayer98}, \cite{Jost98} are particularly relevant: there it is shown that the classical Crandall-Liggett generation theorem can be generalized to the metric setting of \Cat0 spaces to produce a satisfactory theory of gradient flows for semiconvex and lower semicontinuous functionals. If the metric space one is working on admits some nicely-behaved tangent spaces/cones, one might hope to give a meaning to the classical defining formula \[ x_t'\in-\partial^-{\sf E}(x_t)\qquad a.e.\ t \] or to its more precise variant \begin{equation} \label{eq:gfi} x_t'^+=\text{ the element of minimal norm in }-\partial^-{\sf E}(x_t)\qquad\forall t>0. \end{equation} This has been done in \cite{Lyt05}, where previous approaches in \cite{PP95} have been generalized. Here, notably, the basic assumptions on the metric space are of first order in nature (and refer precisely to the structure of tangent cones) and the energy functional is assumed to be semiconvex and locally Lipschitz. While the convexity assumption is very natural when studying gradient flows (all in all, even in the Hilbert setting many fundamental results rely on such a hypothesis), asking for Lipschitz continuity is a bit less so: it certainly covers many concrete examples, for instance functionals built upon distance functions on spaces satisfying some one-sided curvature bound, but from the analytic perspective it may not be satisfactory. Already the Dirichlet energy as a functional on $L^2$ is not Lipschitz, and the same holds for the Korevaar-Schoen energy we aim to study here. 
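To see the failure of Lipschitz continuity in the most basic instance (the following elementary computation is included only for illustration), consider the Dirichlet energy ${\sf E}(u):=\frac12\int_0^{2\pi}|u'|^2\,\d x$ on $L^2(0,2\pi)$ and the functions $u_n(x):=n^{-1/2}\sin(nx)$: then \[ \|u_n\|^2_{L^2}=\frac{\pi}{n}\to0\qquad\text{while}\qquad{\sf E}(u_n)=\frac12\int_0^{2\pi}n\cos^2(nx)\,\d x=\frac{n\pi}2\to+\infty, \] so that ${\sf E}$ is not even locally bounded, let alone Lipschitz, in any $L^2$-neighbourhood of the origin.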
\bigskip Our motivation to study this topic comes from the desire to provide a notion of Laplacian for \Cat0-valued Sobolev maps, where `Sobolev' is intended in the sense of Korevaar-Schoen \cite{KS93} (see also the more recent review of their theory done in \cite{GT20}). Denoting by ${\sf E}^{{\sf KS}}$ the underlying notion of energy and imitating one of the various equivalent definitions for the Laplacian in the classical smooth and linear setting, one is led to define the Laplacian of $u$ as the element of minimal norm in $-\partial^-{\sf E}^{\sf KS}(u)$. This approach of course carries at least two tasks: to define what $-\partial^-{\sf E}$ is and to show that it is not empty for a generic convex and lower semicontinuous functional ${\sf E}$. Providing a reasonable definition for $-\partial^-{\sf E}$ is not that hard (see Definition \ref{def:md}), but it is less obvious to show that this object is non-empty (in particular, minimizing ${\sf E}(\cdot)+\frac{{\sf d}^2(\cdot,x)}{2\tau}$ is of no help here, see the discussion in Remark \ref{re:mm}). It is here that the theory of gradient flows comes into play: \bigskip \begin{quote} our main result is that, for semiconvex and lower semicontinuous functions on a $\Cat\kappa$ space, the analogue of \eqref{eq:gfi} holds, see Theorem \ref{thm:rightD}. \end{quote} \bigskip As a byproduct, we deduce that the domain of $-\partial^-{\sf E}$ is dense in that of ${\sf E}$. A result similar to ours has been obtained in \cite{CKK19} under some additional geometric assumptions on the base space, which, roughly speaking, ensure that every tangent vector admits an opposite one. As said, we then apply this result to study the Laplacian of $\Cat0$-valued Sobolev maps. Let us remark that in this case the relevant metric space $L^2(\Omega,{\rm Y}_{\bar y})$ is that of $L^2$ maps from some open subset $\Omega$ of a metric measure space ${\rm X}$ to a pointed \Cat0 space $({\rm Y},\bar y)$ and the energy functional is the Korevaar-Schoen energy ${\sf E}^{\sf KS}$: it is well known that $L^2(\Omega,{\rm Y}_{\bar y})$ is a \Cat0 space and that ${\sf E}^{\sf KS}$ is convex and lower semicontinuous, but certainly not Lipschitz, whence the need to generalize Lytchak's results to cover also this case. Once we have a notion for $-\partial^-{\sf E}^{\sf KS}$ we enrich the paper with: \begin{itemize} \item[i)] the actual definition of the Laplacian $\Delta u$ of a $\Cat0$-valued map $u$ (Definition \ref{def:laplacian}), which pays particular attention to the link between the tangent cones in $L^2(\Omega,{\rm Y}_{\bar y})$, where $-\partial^-{\sf E}^{\sf KS}$ lives, and the tangent cones in ${\rm Y}$, where we think `variations' of $u$ should live, see in particular Propositions \ref{prop:l2gen} and \ref{prop:link}, \item[ii)] a basic, weak, integration by parts formula, see Proposition \ref{prop:varen}, which is sufficient to show that our approach is compatible with the classical one valid in the smooth category, \item[iii)] a presentation of a simple and concrete example (Example \ref{ex:s1}) showing why $\Delta u$ seems to be very much linked to the geometry of ${\rm Y}$, but less so to Sobolev calculus on it. 
\end{itemize} \bigskip Finally, we point out that this note is part of a larger program aiming at stating and proving the Eells-Sampson-Bochner inequality \cite{ES64} for Sobolev maps from (open subsets of a) $\mathrm{RCD}$ space ${\rm X}$ to a $\Cat0$ space ${\rm Y}$ (see \cite{DMGSP18,GPS18,GT20} for partial results in this direction): knowing what the Laplacian of a \Cat0-valued map is, is a crucial step for this program. \bigskip{\bf Acknowledgement.} We thank A.\ Lytchak and M.\ Ba\v{c}\'{a}k for comments on a preliminary version of this manuscript. \section{Calculus on ${\sf CAT}(\kappa)$-spaces}\label{Sect2} \subsection{$\Cat\kappa$-spaces} Let us briefly recall some useful tools in metric spaces $({\rm Y},{\sf d}_{{\rm Y}})$. \begin{definition}[Locally AC curve] Let $({\rm Y},{\sf d}_{{\rm Y}})$ be a metric space and let $I \subset \mathbb{R}$ be an interval. A curve $I \ni t \mapsto \gamma_t \in {\rm Y}$ is \emph{absolutely continuous} if there exists a function $g \colon I\to \mathbb{R}^+$ in $L^1(I)$ s.t. \begin{equation} {\sf d}_{{\rm Y}}(\gamma_t,\gamma_s)\le \int_s^tg(r)\, \d r \qquad\forall s\le t \text{ in } I. \label{eq:AC} \end{equation} Moreover, $\gamma$ is said to be locally absolutely continuous if every point admits a neighbourhood where it is absolutely continuous. \end{definition} Next, we state the existence of the metric counterpart of the `modulus of velocity' of a curve. \begin{theorem}[Metric speed]\label{thm:ms} Let $({\rm Y},{\sf d}_{{\rm Y}})$ be a metric space and let $I \subset \mathbb{R}$ be an interval. Then, for every AC curve $ I\ni t \mapsto \gamma_t \in {\rm Y}$, there exists the limit $$ \lim_{h \downarrow 0} \frac{{\sf d}_{{\rm Y}}(\gamma_{t+h},\gamma_t)}{h} \qquad \text{a.e. } t \in I,$$ which we denote by $\vert \dot{\gamma}_t\vert $ and call \emph{metric speed}. Moreover, it is the least, in the a.e. sense, function in $L^1(I)$ that can be taken in \eqref{eq:AC}. \end{theorem} See, for the proof, \cite[Theorem 1.1.2]{AmbrosioGigliSavare08}. A curve $[0,1] \ni t\mapsto \gamma_t \in {\rm Y}$ is a minimizing constant speed geodesic (or simply a geodesic) if ${\sf d}_{{\rm Y}}(\gamma_t,\gamma_s) =\vert t-s\vert {\sf d}_{{\rm Y}}(\gamma_0,\gamma_1)$, for every $t,s\in [0,1]$. We say that ${\rm Y}$ is a geodesic metric space provided for any couple of points there exists a constant speed geodesic joining them. Whenever the geodesic connecting $y$ to $z$ is unique, we shall denote it by ${\sf G}_y^z$. For $\kappa\in\mathbb{R}$, we call $\mathbb{M}_\kappa$ the \emph{model space} of curvature $\kappa$, i.e. the simply connected, complete 2-dimensional manifold with constant curvature $\kappa$, and ${\sf d}_\kappa$ the distance induced by the metric tensor. This restricts $(\mathbb{M}_\kappa,{\sf d}_\kappa)$ to only three possibilities: the hyperbolic space $\mathbb H^2_\kappa$ of constant sectional curvature $\kappa$, if $\kappa<0$, the plane $\mathbb{R}^2$ with the usual euclidean metric, if $\kappa=0$, and the sphere $\mathbb{S}^2_\kappa$ of constant sectional curvature $\kappa$, if $\kappa>0$. Also, set $D_\kappa:=\mathrm{diam}(\mathbb{M}_\kappa)$, i.e. \begin{align*} D_\kappa=\left\{ \begin{array}{ll} \infty&\quad\text{ if }\kappa\le 0,\\ \frac{\pi}{\sqrt\kappa}&\quad\text{ if }\kappa>0. \end{array} \right. \end{align*} We refer to \cite[Chapter I.2]{BH99} for a detailed study of the model spaces $\mathbb{M}_\kappa$. 
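For concreteness, let us also record the explicit description of the spherical model (a standard fact, see again \cite[Chapter I.2]{BH99}): for $\kappa>0$ one can realize $\mathbb{S}^2_\kappa$ as the sphere of radius $\kappa^{-1/2}$ in $\mathbb{R}^3$, in which case \[ {\sf d}_\kappa(x,y)=\frac{1}{\sqrt\kappa}\arccos\big(\kappa\,\la x,y{\big\rangle}_{\mathbb{R}^3}\big)\qquad\forall x,y\in\mathbb{S}^2_\kappa, \] so that ${\sf d}_\kappa$ attains its maximal value $D_\kappa=\pi/\sqrt\kappa$ precisely at pairs of antipodal points.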
\bigskip In order to speak of a $\kappa$-upper bound on the sectional curvature in a geodesic metric space $({\rm Y},{\sf d}_{\rm Y})$, we shall enforce a metric comparison property on geodesic triangles of ${\rm Y}$, the intuition being that they are `thinner' than in $\mathbb{M}_\kappa$. To formulate it, we start by recalling that if $a,b,c\in{\rm Y}$ is a triple of points satisfying ${\sf d}_{\rm Y}(a,b)+{\sf d}_{\rm Y}(b,c)+{\sf d}_{\rm Y}(c,a)<2D_\kappa$, then there are points, unique up to isometries of the ambient space and called \emph{comparison points}, $\bar a,\bar b,\bar c\in\mathbb{M}_\kappa$ such that \[ {\sf d}_\kappa(\bar a,\bar b)={\sf d}_{\rm Y}(a,b),\qquad\qquad{\sf d}_\kappa(\bar b,\bar c)={\sf d}_{\rm Y}(b,c),\qquad\qquad{\sf d}_\kappa(\bar c,\bar a)={\sf d}_{\rm Y}(c,a). \] In the case where ${\rm Y}$ is geodesic (and this will always be assumed), we refer to $\triangle(a,b,c)$ as the geodesic triangle in ${\rm Y}$ consisting of three points $a,b,c$, the \emph{vertices}, and a choice of three corresponding geodesics, the \emph{edges}, linking pairwise the points. By $\triangle^{\kappa}(\bar{a},\bar{b},\bar{c})$ we denote the geodesic triangle in $\mathbb{M}_\kappa$ built in this way, which from now on we call the comparison triangle. A point $d\in{\rm Y}$ is said to be intermediate between $b,c\in{\rm Y}$ provided ${\sf d}_{\rm Y}(b,d)+{\sf d}_{\rm Y}(d,c)={\sf d}_{\rm Y}(b,c)$ (this means that $d$ lies on a geodesic joining $b$ and $c$). The \emph{comparison point of $d$} is the (unique, once we fix the comparison triangle) point $\bar d\in\mathbb{M}_\kappa$ such that \[ {\sf d}_\kappa(\bar d,\bar b)={\sf d}_{\rm Y}(d,b),\qquad\qquad{\sf d}_\kappa(\bar d,\bar c)={\sf d}_{\rm Y}(d,c). \] \begin{definition}[$\Cat\kappa$-spaces]\label{cat} A metric space $({\rm Y},{\sf d}_{\rm Y})$ is called a $\Cat\kappa $-space if it is complete, geodesic and satisfies the following triangle comparison principle: for any $a,b,c\in {\rm Y}$ satisfying ${\sf d}_{\rm Y}(a,b)+{\sf d}_{\rm Y}(b,c)+{\sf d}_{\rm Y}(c,a)<2D_\kappa$ and any intermediate point $d$ between $b,c$, denoting by $\triangle^\kappa(\bar a,\bar b,\bar c)$ the comparison triangle and by $\bar d\in\mathbb{M}_\kappa$ the corresponding comparison point (as said, $\bar a,\bar b,\bar c,\bar d$ are unique up to isometries of $\mathbb{M}_\kappa$), it holds \begin{equation} \label{eq:defcat} {\sf d}_{\rm Y}(a,d)\le {\sf d}_\kappa(\bar a,\bar d). \end{equation} A metric space $({\rm Y},{\sf d}_{\rm Y})$ is said to be locally $\Cat\kappa$ if it is complete, geodesic and every point in ${\rm Y}$ has a neighbourhood which is a $\Cat\kappa$-space with the inherited metric. \end{definition} Notice that balls of radius $<D_\kappa/2$ in the model space $\mathbb{M}_\kappa$ are convex, i.e. geodesics whose endpoints lie in them are entirely contained in them. Hence the comparison property \eqref{eq:defcat} grants that the same is true on $\Cat\kappa$-spaces (see \cite[Proposition II.1.4.(3)]{BH99} for the rigorous proof of this fact). It is then easy to see that, for the same reasons, $({\rm Y},{\sf d}_{\rm Y})$ is locally $\Cat\kappa$ provided every point has a neighbourhood $U$ where the comparison inequality \eqref{eq:defcat} holds for every triple of points $a,b,c\in U$, where the geodesics connecting the points (and thus the intermediate points) are allowed to exit the neighbourhood $U$. 
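Let us also recall, for orientation only, some standard examples (all covered in \cite{BH99}): Hilbert spaces and complete metric trees are \Cat0-spaces; every complete, simply connected Riemannian manifold with non-positive sectional curvature is a \Cat0-space (Cartan-Hadamard theorem); more generally, a smooth Riemannian manifold has sectional curvature $\le\kappa$ if and only if it is locally $\Cat\kappa$.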
Let us fix the following notation: if $({\rm Y},{\sf d}_{\rm Y})$ is a local $\Cat\kappa$-space, for every $y\in{\rm Y}$ we set \[ {\sf r}_y:=\sup\big\{r\leq D_\kappa/2\ :\ \bar B_r(y)\ \text{is a $\Cat\kappa$-space}\big\}. \] Notice that in particular $B_{{\sf r}_y}(y)$ is a $\Cat\kappa$-space. The definition trivially grants that ${\sf r}_y\geq{\sf r}_z-{\sf d}_{\rm Y}(y,z)$ and thus in particular $y\mapsto {\sf r}_y$ is continuous. We also point out the following important fact, which will be exploited in the sequel: \begin{quote} \label{eq:geocontdep} On $\Cat\kappa$-spaces, geodesics with endpoints at distance $<D_\kappa$ are unique (up to reparametrization) and vary continuously with respect to the endpoints. \end{quote} For a quantitative version of this fact, see \cite[Lemma 2.2]{DMGSP18}. Finally, it will be important to examine the case of global \Cat0-spaces, as they naturally arise as tangent structures of $\Cat\kappa$-spaces (see Theorem \ref{thm:tancat} below) and also because we are going to examine \Cat0-valued maps in Section \ref{Sect4}. Since $\mathbb{M}_0$ is the euclidean plane $\mathbb{R}^2$ equipped with the euclidean norm, for ${\rm Y}$ \Cat0 and $a,b,c \in {\rm Y}$ as in Definition \ref{cat}, the defining inequality \eqref{eq:defcat} reads \[ {\sf d}_{\rm Y}(\gamma_t,a) \le \Vert (1-t)\bar{b}+t\bar{c}-\bar{a}\Vert, \] for every $t\in [0,1]$, where $\gamma_t$ is the constant speed geodesic connecting $b$ to $c$ and $\bar{a},\bar{b},\bar{c} \in \mathbb{R}^2$ are comparison points. By squaring and expanding the right hand side via the elementary identity $\Vert (1-t)\bar b+t\bar c-\bar a\Vert^2=(1-t)\Vert \bar b-\bar a\Vert^2+t\Vert \bar c-\bar a\Vert^2-t(1-t)\Vert \bar b-\bar c\Vert^2$, we easily obtain the condition \begin{equation} {\sf d}_{\rm Y}^2(\gamma_t, a) \le (1-t){\sf d}_{\rm Y}^2(\gamma_0, a)+ t{\sf d}_{\rm Y}^2(\gamma_1,a) -t(1-t){\sf d}_{\rm Y}^2(\gamma_0,\gamma_1), \label{eq:cat0def} \end{equation} for every $t \in [0,1]$. Inequality \eqref{eq:cat0def} (which can be equivalently used to define \Cat0-spaces) is to be understood as a synthetic deficit of the curvature of ${\rm Y}$, with respect to the euclidean plane $\mathbb{R}^2$ (where it holds with equality). In other words, it quantifies how `thin' the triangle $\triangle(a,b,c)$ is compared to $\triangle^0(\bar{a},\bar{b},\bar{c})$ in the euclidean plane. The advantage of \eqref{eq:cat0def} is that it is more practical to work with in convex analysis and optimization. \subsection{Tangent cone} We recall here the notion of tangent cone on a $\Cat\kappa$-space, referring to the above-mentioned bibliography for a much more complete discussion. We define the tangent cone of a $\Cat\kappa$-space by means of geodesics. Let us start with some considerations valid in a general geodesic space ${\rm Y}$: as we shall see, the construction is valid in this generality, but it will benefit from the $\Cat\kappa$ condition, which makes a suitable calculus possible. Let $ y \in {\rm Y}$, and denote by ${\sf Geo}_y{\rm Y}$ the space of (constant speed) geodesics emanating from $y$ and defined on some right neighbourhood of $0$. We endow this space with the pseudo-distance ${\sf d}_y $ defined as follows: \begin{equation} {\sf d}_y (\gamma,\eta ) := \limsup_{t\downarrow 0}\frac{{\sf d}_{{\rm Y}}(\gamma_t, \eta_t)}{t} \qquad \forall \gamma, \eta \in {\sf Geo}_y{\rm Y}. 
\label{eq:dy} \end{equation} It is easy to see that ${\sf d}_y$ naturally induces an equivalence relation on ${\sf Geo}_y{\rm Y}$, by simply imposing $\gamma \sim \eta$ if ${\sf d}_y(\gamma,\eta)=0.$ By construction, ${\sf d}_y$ passes to the quotient ${\sf Geo}_y{\rm Y}/\sim$ and, with (a common) abuse of notation, we still denote by ${\sf d}_y$ the distance on the quotient space. The equivalence class of the geodesic $\gamma$ under this relation will be denoted $\gamma'_0$. In particular this applies to the geodesics ${\sf G}_y^z$ defined on $[0,1]$, whose corresponding element in ${\sf Geo}_y{\rm Y}/\sim$ will be denoted by $({\sf G}_y^z)'_0$. \begin{definition}[Tangent cone] Let ${\rm Y}$ be a geodesic space and $y \in {\rm Y}$. The \emph{tangent cone} $({\rm T}_y{\rm Y},{\sf d}_y)$ is the completion of $({\sf Geo}_y{\rm Y}/\sim,{\sf d}_y)$. Moreover, we denote by $0_y \in {\rm T}_y{\rm Y}$ the equivalence class of the steady geodesic at $y$. \end{definition} A direct consequence of the local $\Cat\kappa$ condition is that, for every $y \in {\rm Y}, \gamma,\eta \in {\sf Geo}_y{\rm Y}$, the limsup in \eqref{eq:dy} is actually a limit. It will be also useful to notice that \begin{equation} \text{if ${\rm Y}$ is \Cat0, } t\mapsto \frac{{\sf d}_{\rm Y}(\gamma_t,\eta_t)}{t} \text{ is non-decreasing}\qquad \forall \gamma,\eta \in {\sf Geo}_y{\rm Y}, \label{eq:mondis} \end{equation} a property which is directly implied by \eqref{eq:cat0def}. A well known (see e.g.\ \cite[Theorem II-3.19]{BH99}) and useful fact is that tangent cones of local $\Cat\kappa$-spaces are $\Cat0$-spaces: \begin{theorem}\label{thm:tancat} Let ${\rm Y}$ be locally $ \Cat\kappa$. Then, for every $y \in {\rm Y}$, the tangent cone $({\rm T}_y{\rm Y},{\sf d}_y)$ is a \Cat0-space. \end{theorem} We now build a calculus on the tangent cone that resembles the one of Hilbert spaces. \begin{itemize} \item[\textopenbullet] \emph{Multiplication by a positive scalar}. Let $\lambda\geq 0$. Then the map sending $t\mapsto \gamma_t$ to $t\mapsto\gamma_{\lambda t}$ is easily seen to pass to the quotient in ${\sf Geo}_y{\rm Y}/\sim$ and to be $\lambda$-Lipschitz. Hence it can be extended by continuity to a map defined on ${\rm T}_y{\rm Y}$, called multiplication by $\lambda$. \item[\textopenbullet] \emph{Norm}. $|v|_y:={\sf d}_y(v,0_y)$. \item[\textopenbullet] \emph{Scalar product}. $\la v,w{\big\rangle}_y:= \tfrac12\big[|v|_y^2+|w|_y^2-{\sf d}_y^2(v,w)\big]$. \item[\textopenbullet] \emph{Sum}. $v\oplus w:=2 m$, where $m$ is the midpoint of $v,w$ (well-defined because ${\rm T}_y{\rm Y}$ is a $\Cat0$-space). \end{itemize} We report from \cite[Theorem 2.9]{DMGSP18} the following fact: \begin{equation} \label{eq:densecone} \text{for ${\ensuremath{\mathcal D}}$ dense in $B_{{\sf r}_y}(y)$ we have that $\{\alpha({\sf G}_y^w)'_0\colon \alpha \in \mathbb{Q}^+, \ w \in {\ensuremath{\mathcal D}}\}$ is dense in ${\rm T}_y{\rm Y}$}. \end{equation} Moreover, we recall the following proposition: \begin{proposition}[Basic calculus on the tangent cone]\label{prop:hilbertine} Let ${\rm Y}$ be locally $ \Cat\kappa$ and $y \in {\rm Y}$. Then, the four operations defined above are continuous in their variables. The `sum' and the `scalar product' are also symmetric. 
Moreover: \begin{subequations} \begin{align} \label{eq:norm} {\sf d}_y(\lambda v,\lambda w)&=\lambda {\sf d}_y(v,w),\\ \label{eq:prhom} \la\lambda v,w{\big\rangle}_y&= \la v,\lambda w{\big\rangle}_y=\lambda \la v,w{\big\rangle}_y,\\ \label{eq:CS} |\la v,w{\big\rangle}_y|&\le |v|_y|w|_y,\\ \label{eq:CSeq} \la v,w{\big\rangle}_y&= |v|_y|w|_y\quad\text{ if and only if }\quad |w|_yv=|v|_yw,\\ \label{eq:PI} {\sf d}_y^2(v,w)+|v\oplus w|_y^2&\le 2(|v|_y^2+|w|_y^2),\\ \label{eq:concav} \la v_1\oplus v_2,w{\big\rangle}_y&\geq \la v_1,w{\big\rangle}_y+\la v_2,w{\big\rangle}_y \end{align} \end{subequations} for any $v,v_1,v_2,w\in {\rm T}_y{\rm Y}$ and $\lambda\geq 0$. \end{proposition} \begin{proof} The continuity of `norm', `scalar product' and `multiplication by a scalar' are obvious by definition, the one of `sum' then follows from the continuity of the midpoint of a geodesic as a function of the extremal points. Points \eqref{eq:norm}, \eqref{eq:prhom}, \eqref{eq:CS}, \eqref{eq:CSeq}, \eqref{eq:PI} are well known and recalled, e.g., in \cite[Proposition 2.11]{DMGSP18}. The concavity property \eqref{eq:concav} is also well known. A way to prove it is to notice that from \eqref{eq:prhom} and letting $m$ be the midpoint of $v_1,v_2$ we get that \[ \la v_1\oplus v_2,w{\big\rangle}_y=2\eps^{-1}\la \eps m,w{\big\rangle}_y=\eps^{-1}\big(\eps^2|m|^2_y+|w|_y^2-{\sf d}_y^2(\eps m,w)\big)\qquad\forall \eps>0. \] From the fact that ${\rm T}_y{\rm Y}$ is \Cat0 and the fact that $\eps m$ is the midpoint of $\eps v_1,\eps v_2$ (consequence of \eqref{eq:norm}) we get that ${\sf d}_y^2(\eps m,w)\leq \frac12{\sf d}_y^2(\eps v_1,w)+\frac12 {\sf d}_y^2(\eps v_2,w)$ and plugging this in the above we get \[ \begin{split} \la v_1\oplus v_2,w{\big\rangle}_y&\geq\eps^{-1}\big(\tfrac12\big(|w|_y^2-{\sf d}_y^2(\eps v_1,w)\big)+\tfrac12\big(|w|_y^2-{\sf d}_y^2(\eps v_2,w)\big)\big)\\ &=\la v_1,w{\big\rangle}_y+\la v_2,w{\big\rangle}_y-\tfrac\eps2(|v_1|^2_y+|v_2|^2_y)\qquad\forall \eps>0 \end{split} \] and the conclusion follows letting $\eps\downarrow0$. \end{proof} It will also be useful to know that \begin{equation} \label{eq:sumexpl} \alpha({\sf G}_y^z)'_0\oplus\beta({\sf G}_y^w)'_0=\lim_{t\downarrow0}\frac2t({\sf G}_y^{m_t})'_0, \end{equation} for $z,w \in B_{{\sf r}_y}(y)\setminus\{y\}$, where $m_t$ is the midpoint of $({\sf G}_y^z)_{\alpha t}$ and $({\sf G}_y^w)_{\beta t}$, see for instance \cite[II-Theorem 3.19]{BH99} for the simple proof. We conclude recalling that on $\Cat\kappa$-spaces not only a notion of metric derivative is in place for absolutely continuous curves, but it is possible to speak about right (or left) derivatives in the following sense, as proved in \cite{Lyt04}: \begin{proposition}[Right derivatives]\label{prop:rlder} Let ${\rm Y}$ be locally $ \Cat\kappa$ and $(y_t)$ an absolutely continuous curve. Then, for a.e.\ $t$, the tangent vectors $\frac1h({\sf G}_{y_t}^{y_{t+h}})'_0\in {\rm T}_{y_t}{\rm Y}$ have a limit $y'^+_t$ in ${\rm T}_{y_t}{\rm Y}$ as $h\downarrow 0$. \end{proposition} For us, such a concept will be useful in particular in connection with the well known first-order variation of the squared distance: \begin{proposition}\label{cor:derd2} Let ${\rm Y}$ be a $\Cat\kappa$-space, $(y_t)$ an absolutely continuous curve and $z\in{\rm Y}$. Then: \[ \frac{\d}{\d t}\tfrac12{\sf d}_{\rm Y}^2(y_t,z)=-\la y_t'^+,({\sf G}_{y_t}^z)'_0{\big\rangle}_{y_t}\qquad a.e.\ t. \] \end{proposition} To prove the above proposition 
(see e.g.\ \cite[Propositions 2.17 and 2.20]{DMGSP18}), one needs to introduce the notion of angle between geodesics and to study its monotonicity properties, its behaviour along absolutely continuous curves and, finally, its connection with the inner product we introduced. Nevertheless, even if we omit the proof, in the sequel we shall use the following fact (see \cite[Lemma 2.19]{DMGSP18}): let ${\rm Y}$ be $\Cat\kappa$, $(y_t)$ be an absolutely continuous curve and $z \in {\rm Y}$. Then, for every time $t$ such that $|\dot{y}_t|$ exists and is positive, we have \begin{equation} \begin{split} -\la \tfrac1h({\sf G}_{y_t}^{y_{t+h}})'_0,({\sf G}_{y_t}^z)'_0{\big\rangle}_{y_t} &\le -{\sf d}_{\rm Y}(y_t,z)\frac{{\sf d}_{\rm Y}(y_{t+h},y_t)}h\cos(\angle_{y_t}^\kappa(y_{t+h},z)),\quad\forall h>0\text{ s.t. }y_{t+h}\in B_{{\sf r}_{y_t}}(y_t)\\ \liminf_{h\downarrow 0}-\cos(\angle_{y_t}^\kappa(y_{t+h},z))&=\liminf_{h\downarrow 0}\frac{{\sf d}_{\rm Y}(y_{t+h},z)-{\sf d}_{\rm Y}(y_t,z)}{h|\dot{y}_t|}, \end{split} \label{eq:d2variation} \end{equation} where $\angle_{y_t}^\kappa(y_{t+h},z)$ is the angle at $\bar{y}$ in $\mathbb{M}_\kappa$ of the comparison triangle $\triangle^\kappa(\bar{y},\bar{y}_h,\bar{z})$. The first of these is an obvious consequence of the definition of $\angle_{y}^\kappa(z_1,z_2)$ together with the fact that $\kappa\mapsto \angle_{y}^\kappa(z_1,z_2)$, and thus $\kappa\mapsto -\cos(\angle_{y}^\kappa(z_1,z_2))$, is increasing, while the second one follows from the Taylor expansion of $\cos(\angle_{y}^\kappa(z_1,z_2))$ for ${\sf d}_{\rm Y}(y,z_1)$ small (notice that the explicit formula for $\cos(\angle_{y}^\kappa(z_1,z_2))$ in terms of ${\sf d}_{\rm Y}(y,z_1),{\sf d}_{\rm Y}(y,z_2),{\sf d}_{\rm Y}(z_1,z_2)$ can be obtained by the cosine rule). \subsection{Weak convergence} In this section, we recall the concept of weak convergence in a \Cat 0-space, highlighting the similarities with weak convergence in a Hilbert setting. Still, it is important to underline that although a well-behaved notion of `weakly converging sequence' exists, it is stressed in \cite{Bac18} that the existence of a well-behaved weak topology inducing such convergence is an open challenge. For the purposes of this manuscript, we just recall an operative definition of weak convergence and its properties. \bigskip Let us first clarify the notion of \emph{semiconvexity} on a geodesic metric space. \begin{definition}[Semiconvex function] Let ${\rm Y}$ be a geodesic space and ${\sf E} \colon {\rm Y} \rightarrow \mathbb{R} \cup \{+\infty\} $. We say that ${\sf E}$ is \emph{$\lambda$-convex}, $\lambda \in \mathbb{R}$, if for any geodesic $\gamma$ it holds $$ {\sf E}(\gamma_t) \le (1-t){\sf E}(\gamma_0) +t{\sf E}(\gamma_1) -\frac{\lambda}{2}t(1-t){\sf d}_{{\rm Y}}^2(\gamma_0,\gamma_1)\qquad\forall t\in[0,1].$$ If $\lambda=0$, then we simply speak of convex functions. We shall denote by $D({\sf E})\subset{\rm Y}$ the set of $y$'s such that $ {\sf E}(y)<\infty$. \end{definition} Notice that, if ${\rm Y}$ is \Cat0 and ${\sf E}\colon{\rm Y}\to\mathbb{R}^+\cup\{+\infty\}$ is 2-convex and lower semicontinuous, then it admits a unique minimizer. 
To see this, we argue as for Proposition \ref{prop:proj} and prove that any minimizing sequence $(y_n)\subset{\rm Y}$ is Cauchy: let $I:=\inf {\sf E}\geq 0$, let $y_{n,m}$ be the midpoint of $y_n,y_m$ and notice that \[ I\leq {\sf E}(y_{n,m})\leq \frac12\big({\sf E}(y_n)+{\sf E}(y_m)\big)-\frac14 {\sf d}^2_{\rm Y}(y_n,y_m)\qquad\forall n,m\in\mathbb{N}, \] so that rearranging and passing to the limit we get \[ \frac14\varlimsup_{n,m\to\infty}{\sf d}^2_{\rm Y}(y_n,y_m)\leq \varlimsup_{n,m\to\infty}\frac12\big({\sf E}(y_n)+{\sf E}(y_m)\big)-I=0, \] giving the claim. The first example of $2$-convex functional we have encountered is the squared distance from a point in a \Cat0-space, as inequality \eqref{eq:cat0def} suggests. Hence, for $(y_n)\subset{\rm Y}$ a bounded sequence, we can consider the mapping \[ {\rm Y} \ni y \mapsto \omega(y ; (y_n)) := \limsup_{n} {\sf d}_{{\rm Y}}^2(y,y_n) \] and notice that, as a limsup of a sequence of $2$-convex and locally equi-Lipschitz functions, it is still $2$-convex and locally Lipschitz. By the above remark, it has a unique minimizer. \begin{definition}[Asymptotic center and weak convergence]\label{def:weakconv} Let ${\rm Y}$ be a \Cat0-space and $(y_n)$ be a bounded sequence. We call the minimizer of $\omega(\cdot\,; (y_n))$ the \emph{asymptotic center} of $(y_n)$. We say that a sequence $(y_n) \subset {\rm Y}$ \emph{weakly converges} to $y$, and write $y_n \rightharpoonup y$, if $y$ is the asymptotic center of every subsequence $(y_{n_k})$ of $(y_n)$. \end{definition} In analogy with the Hilbert setting, we shall sometimes say that $(y_n)$ converges strongly to $y$ if ${\sf d}_{\rm Y}(y_n,y)\to 0$. The main properties of weak convergence are collected in the following statement: \begin{proposition}\label{prop:weakprop} Let ${\rm Y}$ be a \Cat0-space. Then, the following holds: \begin{itemize} \item[i)] If $(y_n)$ converges to $y$ strongly, then it converges weakly. \item[ii)] $y_n\to y$ if and only if $y_n\rightharpoonup y$ and for some $z\in{\rm Y}$ we have ${\sf d}_{\rm Y}(y_n,z)\to {\sf d}_{\rm Y}(y,z)$. \item[iii)] Any bounded sequence admits a weakly converging subsequence. \item[iv)] If $C\subset {\rm Y}$ is convex and closed, then it is sequentially weakly closed. \item[v)] If ${\sf E}\colon{\rm Y}\to\mathbb{R}\cup\{+\infty\}$ is a convex and lower semicontinuous function, then it is sequentially weakly lower semicontinuous. \end{itemize} Moreover, on the tangent cone $ {\rm T}_y{\rm Y}$ at $y\in{\rm Y}$ (which is also a \Cat0-space by Theorem \ref{thm:tancat}) we also have: \begin{itemize} \item[vi)] Let $(v_n),(w_n)\subset {\rm T}_y{\rm Y}$ be such that $v_n\to v$ and $w_n\rightharpoonup w$ for some $v,w\in {\rm T}_y{\rm Y}$. Then $\varlimsup_{n\to\infty}\la v_n,w_n{\big\rangle}_y \leq\la v,w{\big\rangle}_y$. \end{itemize} \end{proposition} \begin{proof} $(i)$ is obvious, as a strong limit is trivially the asymptotic center of every subsequence. For $(ii),(iii),(iv)$ see {\cite[Proposition 3.1.6]{Bac14}}, {\cite[Proposition 3.1.2]{Bac14}} and {\cite[Proposition 3.2.1]{Bac14}} respectively. $(v)$ follows trivially from $(iv)$ by considering the strongly closed and convex sublevels of ${\sf E}$.
Finally, for $(vi)$ we let $C:=\sup_n|w_n|_y<\infty$ and notice that for every $\eps>0$ and every $n$ it holds \[ \begin{split} 2\eps\la v_n,w_n{\big\rangle}_y=\la v_n,2\eps w_n {\big\rangle}_y&=\tfrac12\big(|v_n|_y^2+|2\eps w_n|^2_y-{\sf d}_y^2(v_n,2\eps w_n)\big)\\ &\leq \tfrac12\big(|v_n|_y^2+4\eps^2C^2-{\sf d}_y^2(v,2\eps w_n)+2{\sf d}_y(v,v_n)\big(|v|_y+2\eps C\big)\big), \end{split} \] where in the last step we used the bounds ${\sf d}_y^2(v_n,2\eps w_n)\geq {\sf d}_y^2(v,2\eps w_n)-2{\sf d}_y(v,v_n){\sf d}_y(v,2\eps w_n)$ and ${\sf d}_y(v,2\eps w_n)\le |v|_y+2\eps C$. Since $2\eps w_n\rightharpoonup 2\eps w$ (by \eqref{eq:norm}), sending $n\to \infty$ and using the sequential weak lower semicontinuity of ${\sf d}_y^2(v,\cdot)$ (consequence of $(v)$) we obtain that \[ 2\eps\varlimsup_{n\to\infty}\la v_n,w_n{\big\rangle}_y\leq \tfrac12\big(|v|_y^2+4\eps^2C^2-{\sf d}_y^2(v,2\eps w)\big)\le \tfrac12\big(|v|_y^2+|2\eps w|^2_y-{\sf d}_y^2(v,2\eps w)\big)+2\eps^2C^2=2\eps \la v,w{\big\rangle}_y+2\eps^2C^2 \] and the claim follows dividing by $2\eps>0$ and letting $\eps\downarrow0$. \end{proof} \subsection{Geometric tangent bundle} \label{Sect2.geo} In this section we briefly recall some concepts from \cite{DMGSP18} about the construction of the Geometric Tangent Bundle ${\rm T}_G{\rm Y}$ of a given \emph{separable} locally $\Cat\kappa$-space ${\rm Y}$. From now on, ${\ensuremath{\mathcal B}}({\rm Y})$ is the Borel $\sigma$-algebra on ${\rm Y}$. As a set, the space ${\rm T}_G{\rm Y}$ is defined as \[ {\rm T}_G{\rm Y}:=\big\{(y,v)\colon y\in{\rm Y},\ v\in{\rm T}_y{\rm Y}\big\}. \] Such a set is equipped with a $\sigma$-algebra ${\ensuremath{\mathcal B}}({\rm T}_G{\rm Y})$, called the Borel $\sigma$-algebra (with a slight abuse of terminology, because there is no topology inducing it), defined as the smallest $\sigma$-algebra such that the following maps are measurable: \begin{itemize} \item[i)] the canonical projection $\pi_{\rm Y}\colon{\rm T}_G{\rm Y}\to {\rm Y}$; \item[ii)] the maps $\pi_{{\rm Y}}^{-1}(B_{{\sf r}_{\bar{y}}}(\bar{y}))\ni(y,v)\mapsto\la v,({\sf G}^z_y)'_0{\big\rangle}_y\in\mathbb{R}$ for every $\bar{y}\in {\rm Y}, z \in B_{{\sf r}_{\bar{y}}}(\bar{y})$. \end{itemize} It turns out that ${\ensuremath{\mathcal B}}({\rm T}_G{\rm Y})$ is countably generated and that, rather than asking $(ii)$ for every $z\in{\rm Y}$, one can require it only for a dense set of points (notice that in the axiomatization chosen in \cite{DMGSP18} one speaks about the differential of the distance function rather than of the scalar product with vectors of the form $({\sf G}^z_y)'_0$, but the two approaches are actually trivially equivalent thanks to the explicit expression of the differential of the distance in terms of such scalar product which is hidden in Proposition \ref{cor:derd2}). We also recall that \begin{equation} \label{eq:normbor} \text{the map ${\rm T}_G{\rm Y}\ni (y,v)\mapsto |v|_y\in\mathbb{R}$ is Borel.} \end{equation} A \emph{section} of ${\rm T}_G{\rm Y}$ is a map ${\sf s}\colon {\rm Y}\to{\rm T}_G{\rm Y}$ such that ${\sf s}_y\in {\rm T}_y{\rm Y}$ for every $y\in{\rm Y}$. A section is said to be Borel if it is measurable w.r.t.\ ${\ensuremath{\mathcal B}}({\rm Y})$ and ${\ensuremath{\mathcal B}}({\rm T}_G{\rm Y})$. Among the various sections, \emph{simple} ones play a special role, similar to the one played by finite-ranged functions in the theory of Bochner integration: ${\sf s}$ is a simple section provided there are $(y_n)\subset{\rm Y}$, $(\alpha_n)\subset\mathbb{R}^+$ and a Borel partition $(E_n)$ of ${\rm Y}$ such that $y_n \in B_{{\sf r}_y}(y)$ for every $y \in E_n$ and ${\sf s}\restr{E_n}=\alpha_n({\sf G}_\cdot^{y_n})'_0$.
If this is the case we write ${\sf s}=\sum_n{\raise.3ex\hbox{$\chi$}}_{E_n}\alpha_n({\sf G}_\cdot^{y_n})'_0$, although the `sum' here is purely formal. The following basic result, obtained in \cite{DMGSP18}, will be useful; we report the proof for completeness: \begin{proposition}\label{prop:sb} Let ${\rm Y}$ be separable and locally $\Cat\kappa$. Then, simple sections of ${\rm T}_G{\rm Y}$ as defined above are Borel. \end{proposition} \begin{proof} It is sufficient to prove that for any given $\bar{y}\in{\rm Y}$, $z \in B_{{\sf r}_{\bar{y}}}(\bar{y})$ and $\alpha\in\mathbb{R}^+$ the assignment $B_{{\sf r}_{\bar{y}}}(\bar{y})\ni y\mapsto {\sf s}_y:= \alpha({\sf G}^z_y)'_0$ is Borel. To this aim, by the very definition of ${\ensuremath{\mathcal B}}({\rm T}_G{\rm Y})$, it is sufficient to check that $\pi_{\rm Y}\circ {\sf s}\colon{\rm Y}\to {\rm Y}$ is Borel (which it is, this map being the identity on ${\rm Y}$) and that, for any $w\in B_{{\sf r}_{\bar{y}}}(\bar{y})$, the map $B_{{\sf r}_{\bar{y}}}(\bar{y})\ni y\mapsto\la {\sf s}_y,({\sf G}^w_y)'_0{\big\rangle}_y$ is Borel. Thus fix $w$ and notice that, thanks to \eqref{eq:normbor} and to the definition of the scalar product on ${\rm T}_y{\rm Y}$, to conclude it is sufficient to check that $y\mapsto{\sf d}_y({\sf s}_y,({\sf G}^w_y)'_0)$ is Borel. We have \[ {\sf d}_y({\sf s}_y,({\sf G}^w_y)'_0)={\sf d}_y(\alpha({\sf G}^z_y)'_0,({\sf G}^w_y)'_0)=\lim_{t\downarrow0} \frac{{\sf d}_{\rm Y}\big(({\sf G}^z_y)_{\alpha t},({\sf G}^w_y)_t\big)}t. \] From the continuous dependence of geodesics on their endpoints we deduce that $y\mapsto {\sf d}_{\rm Y}\big(({\sf G}^z_y)_{\alpha t},({\sf G}^w_y)_t\big)$ is a continuous function for every $t\in (0,1\wedge\alpha^{-1})$. The conclusion then follows from the fact that a pointwise limit of continuous functions is Borel. \end{proof} It has been proved in \cite{DMGSP18} that simple sections are dense among Borel ones (see also Lemma \ref{le:denssimp} below in the case ${\rm X}={\rm Y}$ and $u={\rm Identity}$). Moreover, the operations on a single tangent space ${\rm T}_y{\rm Y}$ induce in a natural way operations on the space of Borel sections of ${\rm T}_G{\rm Y}$: these are Borel regular, as recalled in the next statement (see \cite[Proposition 3.6]{DMGSP18} for the proof). \begin{proposition}\label{prop:bormap} Let ${\rm Y}$ be separable and locally $\Cat\kappa$, ${\sf s},{\sf t}$ Borel sections of ${\rm T}_G{\rm Y}$ and $f\colon{\rm Y}\to\mathbb{R}^+$ Borel. Then, the maps from ${\rm Y}$ to $\mathbb{R}$ sending $y$ to $|{\sf s}_y|_y,{\sf d}_y({\sf s}_y,{\sf t}_y),\la {\sf s}_y,{\sf t}_y{\big\rangle}_y$ are Borel, and the sections $y\mapsto f(y) {\sf s}_y,{\sf s}_y\oplus {\sf t}_y$ are Borel as well. \end{proposition} \section{Gradient flows on {\sf CAT}$(\kappa)$-spaces}\label{Sect3} \subsection{Metric approach} We recall here the basic definitions and properties of gradient flows on locally $\Cat\kappa$-spaces. We begin with the definition of the (descending) slope $|\partial^-{\sf E}|$ of a functional ${\sf E}$: for $y\in D({\sf E})$ we put \begin{equation} \label{eq:defsl} \vert \partial^-{\sf E}\vert(y) := \limsup_{z \rightarrow y} \frac{({\sf E}(y)-{\sf E}(z))^+}{{\sf d}_{{\rm Y}}(y,z)}, \end{equation} and we denote the set of points where the slope is finite by $D(|\partial^-{\sf E}|)\subset D({\sf E})$.
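To fix ideas, the following elementary computation may be kept in mind (a sketch under simplifying assumptions made for illustration only: ${\rm Y}=H$ a Hilbert space and ${\sf E}\colon H\to\mathbb{R}$ convex and Fr\'echet differentiable at $y$): in this setting the slope is just the norm of the gradient, i.e.\ $|\partial^-{\sf E}|(y)=|\nabla{\sf E}(y)|$. Indeed, convexity gives \[ {\sf E}(y)-{\sf E}(z)\le \la \nabla{\sf E}(y),y-z{\big\rangle}\le |\nabla{\sf E}(y)|\,|y-z|\qquad\forall z\in H, \] whence $|\partial^-{\sf E}|(y)\le|\nabla{\sf E}(y)|$, while if $\nabla{\sf E}(y)\neq 0$ the choice $z_t:=y-t\nabla{\sf E}(y)$ gives \[ \frac{{\sf E}(y)-{\sf E}(z_t)}{|y-z_t|}=\frac{t|\nabla{\sf E}(y)|^2+o(t)}{t|\nabla{\sf E}(y)|}\to|\nabla{\sf E}(y)|\qquad\text{as }t\downarrow0, \] providing the converse inequality (the case $\nabla{\sf E}(y)=0$ being trivial).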
It is easy to prove that for $\lambda$-convex functionals the slope admits the following `global' formulation (see \cite[Theorem 2.4.9]{AmbrosioGigliSavare08} for the proof): \begin{lemma} Let ${\rm Y}$ be a geodesic space and ${\sf E} \colon {\rm Y} \rightarrow \mathbb{R} \cup \{+\infty\}$ be $\lambda$-convex, $\lambda\in\mathbb{R}$, and lower semicontinuous. Then, for every $y \in D({\sf E})$, $$ \vert \partial^- {\sf E} \vert(y) = \sup _{z \neq y} \left(\frac{{\sf E}(y)-{\sf E}(z)}{{\sf d}_{{\rm Y}}(y,z)} + \frac{\lambda}{2}{\sf d}_{{\rm Y}}(y,z)\right)^+. $$ Moreover, $y \mapsto |\partial^-{\sf E}| (y)$ is a lower semicontinuous function. \label{lem:slopelsc} \end{lemma} We now come to various equivalent definitions of gradient flows on locally $\Cat\kappa$-spaces. The equivalence between the first two notions below is due to the convexity assumption, while the equivalence of these with the EVI is due to the geometric properties of $\Cat\kappa$-spaces, and in particular their Hilbert-like structure at small scales. \begin{theorem}[Gradient flows on locally $\Cat\kappa$-spaces: equivalent definitions]\label{thm:GFdef} Let ${\rm Y}$ be a locally $\Cat\kappa$-space, ${\sf E}\colon{\rm Y}\to\mathbb{R}\cup\{+\infty\}$ a $\lambda$-convex and lower semicontinuous functional, $\lambda \in \mathbb{R}$, $y\in{\rm Y}$ and $(0,\infty)\ni t\mapsto y_t\in{\rm Y}$ a locally absolutely continuous curve such that $y_t\to y$ as $t\downarrow0$. Then, the following are equivalent: \begin{itemize} \item[$(i)$] \textsc{Energy Dissipation Inequality}: we have \[ -\partial_t{\sf E}(y_t)\geq \frac12|\dot y_t|^2+\frac12|\partial^-{\sf E}|^2(y_t), \] where the derivative on the left-hand side is intended in the sense of distributions. \item[$(ii)$] \textsc{Sharp dissipation rate}: $t\mapsto {\sf E}(y_t)$ is locally absolutely continuous and \begin{equation} \lim_{h \downarrow 0}\frac{{\sf E}(y_t)-{\sf E}(y_{t+h})}{h} =\vert\dot{y}^+_t\vert^2= \vert \partial^-{\sf E}\vert^2(y_t) \qquad \text{for every } t>0, \label{eq:metconv} \end{equation} where $\vert\dot{y}^+_t\vert:=\lim_{h\downarrow0} \frac{{\sf d}_{\rm Y}(y_{t+h},y_t)}{h}$ is the right metric speed, which in this case exists for every $t>0$. \item[$(iii)$] \textsc{Evolution Variational Inequality}: for every $z\in{\rm Y}$ we have \begin{equation} \label{eq:evi} \frac{\d}{\d t}\frac{{\sf d}^2_{\rm Y}(y_t,z)}2+{\sf E}(y_t)+\frac\lambda2{\sf d}_{\rm Y}^2(y_t,z)\leq {\sf E}(z)\qquad a.e.\ t>0. \end{equation} \end{itemize} \end{theorem} \begin{proof} The fact that $(ii)$ implies $(i)$ is obvious. The converse implication has been proved in \cite{AmbrosioGigliSavare08} as a consequence of the so-called \emph{strong upper gradient property} of the slope. The implication $(iii)\rightarrow(ii)$ is proved in \cite{MS20} (the argument in \cite{MS20} has also been reported in \cite{G11}). The fact that on locally $\Cat\kappa$-spaces $(ii)$ implies $(iii)$ has also been proved in \cite{MS20} (see in particular Theorems 4.2 and 3.14 there).
More precisely, in \cite{MS20} only the `global' case of $\Cat\kappa$-spaces has been considered, but the arguments there can be quickly adapted to cover our case by noticing that: \begin{itemize} \item[-] arguing as for the proof of \eqref{eq:subder} below, we see that \eqref{eq:evi} holds at some $t$ if and only if it holds at $t$ for $z$ varying only in a neighbourhood of $y_t$, \item[-] property \eqref{eq:metconv} is local by nature, \item[-] if $B\subset{\rm Y}$ is closed, convex and $\Cat\kappa$, then a curve $I\ni t\mapsto y_t\in B$ satisfies $(ii)$ (resp.\ $(iii)$) in $B$ if and only if it satisfies $(ii)$ (resp.\ $(iii)$) in ${\rm Y}$. \end{itemize} \end{proof} A curve satisfying any of the equivalent conditions in this last theorem will be called a \emph{gradient flow trajectory}. Moreover, we define the \emph{gradient flow map} ${\sf GF}^{\sf E} \colon (0,\infty) \times {\rm Y} \rightarrow {\rm Y}$ via ${\sf GF}_t^{\sf E}(y):= y_t$ for every $t \in (0,\infty), y \in {\rm Y}$, where, evidently, $y_t$ is the gradient flow trajectory for ${\sf E}$ starting at $y$, evaluated at time $t$. Some of the main properties of such trajectories are collected in the following statement: \begin{theorem}[Gradient flows on locally $\Cat\kappa$-spaces: some basic properties]\label{thm:GF} Let ${\rm Y}$ be a locally $\Cat\kappa$-space, ${\sf E}\colon{\rm Y}\to\mathbb{R}\cup\{+\infty\}$ a $\lambda$-convex and lower semicontinuous functional. Then, the following holds: \begin{itemize} \item[\textopenbullet] \textsc{Existence} \begin{center}For every $y\in \overline{D({\sf E})}$ there exists a gradient flow trajectory for ${\sf E}$ starting from $y$.\end{center} \item[\textopenbullet] \textsc{Uniqueness and $\lambda$-contraction} \begin{center}For any two gradient flow trajectories $(y_t),(z_t)$ starting from $y,z$ respectively we have \begin{equation} \label{eq:contr} {\sf d}_{\rm Y}(y_t,z_t)\leq e^{-\lambda (t-s)}{\sf d}_{\rm Y}(y_s,z_s)\qquad\forall t\ge s >0. \end{equation} \end{center} \item[\textopenbullet] \textsc{Monotonicity properties}\ For $(y_t)$ a gradient flow trajectory for ${\sf E}$ starting from $y$ we have that \begin{center}\label{eq:s5} $t\mapsto y_t$ is locally Lipschitz in $(0,+\infty)$ with values in $D(|\partial^-{\sf E}|)\subset D({\sf E})$, \begin{align} &t \mapsto {\sf E}(y_t) \text{ is nonincreasing in $ [0,+\infty)$}, \nonumber\\ &t \mapsto e^{\lambda t}|\partial^-{\sf E}|({y_t}) \text{ is nonincreasing in $ [0,+\infty)$}. \label{eq:regul} \end{align} \end{center} \end{itemize} \end{theorem} \begin{proof} In the \Cat0 case, the existence of a limit of the so-called minimizing movements scheme in this setting has been proved in \cite{Mayer98} and \cite{Jost98}. The fact that the limit curve obtained in this way satisfies the EVI condition has been proved in \cite{AmbrosioGigliSavare08}. The contractivity property, also at the level of the discrete scheme, has been proved in \cite{Mayer98} and \cite{Jost98} (at least in the case $\lambda=0$; the general case can be found e.g.\ in \cite{AmbrosioGigliSavare08} as a consequence of the EVI condition). Then, uniqueness is directly implied by \eqref{eq:contr} and the last claims are a consequence of \eqref{eq:metconv} and the contraction property. The $\Cat\kappa$ case has been treated in \cite{OP17}, at least under some compactness assumptions on the sublevels of the functional. Such a compactness assumption has been removed in \cite{MS20}.
Finally, the case of locally $\Cat\kappa$-spaces can be dealt with as in the proof of Theorem \ref{thm:GFdef} above. \end{proof} We conclude the section with an \emph{a priori} estimate, a variant of the ones investigated in \cite{MS20}, concerning contraction properties along gradient flow trajectories at different times. The proof is inspired by the one of \cite[Lemma 2.1.4]{PE13} in the context of CBB-spaces. \begin{lemma}[A priori estimates] \label{lem:apriori} Let ${\rm Y}$ be locally $\Cat\kappa$ and ${\sf E} \colon {\rm Y}\rightarrow [0,\infty]$ be a $\lambda$-convex and lower semicontinuous functional, $\lambda \in \mathbb{R}$. Let $y,z \in {\rm Y}$ and consider the gradient flow trajectories $(y_t),(z_t)$ associated with ${\sf E}$. Then, for any $t \ge s > 0$, it holds \begin{equation} \label{eq:apriorigf} \begin{split} {\sf d}^2_{{\rm Y}}(y_{t},z_{s}) \le e^{-2\lambda s}\Big( {\sf d}^2_{\rm Y}(y,z) +& 2(t-s)({\sf E}(z)-{\sf E}(y)) \\ & + 2|\partial^- {\sf E}|^2(y)\int_0^{t-s}\theta_\lambda(r)\,\d r -\lambda\int_0^{t-s} {\sf d}_{\rm Y}^2(y_r,z)\, \d r\Big), \end{split} \end{equation} where $\theta_\lambda(t):= \int_0^t e^{-2\lambda r}\, \d r$. \end{lemma} \begin{proof} We start by fixing $t>0$. First, we notice that, in light of $(ii)$ of Theorem \ref{thm:GFdef} and the basic properties in Theorem \ref{thm:GF}, we have, for every $r >0$ (and not just a.e.\ $r$), \[ -{\sf E}(y_r) + {\sf E}(y) = \int_0^r e^{-2\lambda q}e^{+2\lambda q}|\partial^-{\sf E}|^2(y_q)\, \d q \le|\partial^- {\sf E}|^2(y)\theta_\lambda(r). \] Thus, we can integrate the EVI condition \eqref{eq:evi} from $0$ to $t$ to get \[ \begin{split} \frac12({\sf d}_{\rm Y}^2(y_t,z) - {\sf d}_{\rm Y}^2(y,z) ) &\le \int_0^t {\sf E}(z)-{\sf E}(y_r) -\frac{\lambda}{2}{\sf d}_{\rm Y}^2(y_r,z)\, \d r \\ &\le t({\sf E}(z)-{\sf E}(y))+ |\partial^- {\sf E}|^2(y)\int_0^t \theta_\lambda(r)\, \d r -\frac{\lambda}{2}\int_0^t {\sf d}_{\rm Y}^2(y_r,z)\, \d r. \end{split} \] Finally, for general $t \ge s >0$, we can reduce to the above case by appealing to property \eqref{eq:contr}. \end{proof} \subsection{The object $-\partial^-{\sf E}(y)$} In this section we introduce the key object of this manuscript, $-\partial^-{\sf E}(y)$, associated to a semiconvex and lower semicontinuous functional ${\sf E}$ on a locally $\Cat\kappa$-space. As the notation suggests, and as will be clear from Definition \ref{def:md}, for functionals on Hilbert spaces this corresponds to $\{-v:v\in\partial^-{\sf E}(y)\}$. We start by recalling the following well known fact: \begin{proposition}[Metric projection]\label{prop:proj} Let ${\rm Y}$ be a $\Cat0$-space and $C \subset {\rm Y}$ be a closed convex subset. Then, for every $y\in{\rm Y}$, there is a unique $ {\rm Pr}_{C}(y) \in C$, called the \emph{metric projection} of $y$ onto $C$, such that ${\sf d}_{{\rm Y}}(y,{\rm Pr}_{C}(y)) = \inf_{C}{\sf d}_{{\rm Y}}(y,\cdot)$. \end{proposition} \begin{proof} Since the function to be minimized is continuous and $C$ is closed, it is sufficient to prove that any minimizing sequence $(c_n)$ for $I:= \inf_{c \in C} {\sf d}_{{\rm Y}}^2(c,y)$ (which is equivalent to being minimizing for $\inf_C{\sf d}_{\rm Y}(y,\cdot)$) is Cauchy. Fix such a sequence and, for every $n,m \in \mathbb{N}$, let $c_{n,m}$ be the midpoint of $c_n$ and $c_m$. Observe that since $C$ is convex, $c_{n,m}$ belongs to $C$ and thus is a competitor for the minimization problem.
Condition \eqref{eq:cat0def} therefore implies $$ I \le {\sf d}_{{\rm Y}}^2(c_{n,m},y) \le \frac{1}{2}{\sf d}_{{\rm Y}}^2(c_n,y) + \frac{1}{2}{\sf d}_{{\rm Y}}^2(c_m,y) - \frac{1}{4}{\sf d}_{{\rm Y}}^2(c_n,c_m),$$ for every $n,m \in \mathbb{N}$. Rearranging terms and taking the limsup as $n,m$ go to infinity we observe $$\limsup_{n,m \rightarrow +\infty} \frac{1}{4}{\sf d}_{{\rm Y}}^2(c_n,c_m) \le \limsup_{n,m \rightarrow +\infty} \Big(\frac{1}{2}{\sf d}_{{\rm Y}}^2(c_n,y) + \frac{1}{2}{\sf d}_{{\rm Y}}^2(c_m,y)\Big) -I =0, $$ i.e.\ $(c_n)$ is Cauchy, as desired. \end{proof} We remark that the metric projection can also be shown to be $1$-Lipschitz and to satisfy a `Pythagoras inequality' (see \cite[Theorem 2.1.12]{Bac14}), but we will not make use of this fact. Finally, we are ready to give an effective definition of the (opposite of the) subdifferential of ${\sf E}$ as a subset of the tangent cone. \begin{definition}[Minus-subdifferential]\label{def:md} Let ${\rm Y}$ be locally $\Cat\kappa$, ${\sf E} \colon {\rm Y} \rightarrow \mathbb{R} \cup \{+\infty\}$ be a $\lambda$-convex and lower semicontinuous functional, $\lambda \in \mathbb{R}$, and $y \in D({\sf E})$. We define the \emph{minus-subdifferential} of ${\sf E}$ at $y$, denoted by $-\partial^-{\sf E}(y)$, as the collection of $ v \in {\rm T}_y{\rm Y}$ satisfying the subdifferential inequality \[ {\sf E}(y) - \la v,\gamma'_0{\big\rangle}_y+\frac{\lambda}{2}{\sf d}_{\rm Y}^2(y,z) \le {\sf E}(z) \] for every $z \in {\rm Y}$ and some geodesic $\gamma$ from $y$ to $z$. Moreover, by $D(-\partial^-{\sf E})$ we denote the collection of $y \in {\rm Y}$ for which $-\partial^-{\sf E}(y)\neq \emptyset$. \label{def:subdiff} \end{definition} Notice that $v\in-\partial^-{\sf E}(y)$ if and only if \begin{equation} \label{eq:subder} - \la v,\gamma'_0{\big\rangle}_y\leq \lim_{t\downarrow0}\frac{{\sf E}(\gamma_t)-{\sf E}(y)}t\qquad\forall z\in{\rm Y},\ \text{for some geodesic } \gamma \text{ from }y \text{ to }z, \end{equation} so that in particular the definition of $-\partial^-{\sf E}(y)$ does not depend on $\lambda$. Indeed, the `if' is obvious by $\lambda$-convexity, while for the `only if' we apply the defining inequality with $z_t:=\gamma_t$ in place of $z$ and, for $t$ small enough, rearrange to get \[ -\la v,({\sf G}_y^{z_t})'_0{\big\rangle}_y+\frac{\lambda}{2}{\sf d}_{\rm Y}^2(y,z_t)\leq {\sf E}(z_t)-{\sf E}(y), \] so that the conclusion follows noticing that ${\sf d}_{\rm Y}^2(y,z_t)=t^2{\sf d}_{\rm Y}^2(y,z)$ and $({\sf G}_y^{z_t})'_0=t\gamma'_0$ (because for $t\ll1$ the geodesic from $y$ to $z_t$ is unique), then dividing by $t$ and letting $t\downarrow0$. The same arguments also show that both in Definition \ref{def:md} and in \eqref{eq:subder} we can take $\gamma$ to be \emph{any} geodesic from $y$ to $z$. It is also worth pointing out that \begin{equation} \label{eq:sub0} \begin{split} \text{for ${\sf E}$ convex and lower semicontinuous we have that:}\\ \text{$x$ is a minimum point for ${\sf E}$ if and only if $0\in-\partial^-{\sf E}(x)$.} \end{split} \end{equation} The proof of this fact is obvious. \begin{remark}\label{re:mm}{\rm It would certainly be possible to define the analogous notion of subdifferential $\partial^-{\sf E}$ by replacing $ - \la v,\gamma'_0{\big\rangle}_y$ with $\la v,\gamma'_0{\big\rangle}_y$ in the defining formula; however, since the tangent cone is only a cone and not a linear space, there is no obvious relation between the two definitions.
For our purposes, $-\partial^-{\sf E}$ is the correct object to work with because, as discussed in the introduction, we aim at showing the existence of the Laplacian of a \Cat0-valued Sobolev map by looking at the gradient flow of the Korevaar-Schoen energy ${\sf E}^{\sf KS}$. Thus we notice, on the one hand, that, by definition and imitating what happens in the smooth category, the Laplacian of $u$ has to be introduced as (the element of minimal norm in) $-\partial^-{\sf E}^{\sf KS}(u)$, and on the other hand that in the gradient flow equation \eqref{eq:gfi} it is $-\partial^-{\sf E}$ that appears. In this direction, it is interesting to point out that the classical procedure of minimizing \[ y\quad\mapsto\quad{\sf E}(y)+\frac{{\sf d}^2_{\rm Y}(y,\bar y)}{2\tau}, \] which is the cornerstone of most existence results about gradient flows in the metric setting (see e.g.\ \cite{AmbrosioGigliSavare08}), produces a (unique, if $\tau>0$ is small enough) point $y_\tau$ for which we have $\frac1\tau({\sf G}_{y_\tau}^{\bar y})'_0\in\partial^-{\sf E}(y_\tau)$. In particular, it gives no information about whether $-\partial^-{\sf E}(y_\tau)$ is non-empty. In our approach this latter fact, and the related one that the slope at $y$ coincides with the norm of the element of least norm in $-\partial^-{\sf E}(y)$, will be a consequence of the fact that gradient flow trajectories satisfy an analogue of \eqref{eq:gfi}, see Theorem \ref{thm:rightD}. \fr } \end{remark} It will be important to know that in $-\partial^-{\sf E}(y)$ there is always an element of minimal norm: \begin{proposition} Let ${\rm Y}$ be a locally $\Cat \kappa$-space, ${\sf E}\colon {\rm Y} \rightarrow \mathbb{R} \cup \{+\infty\}$ be a $\lambda$-convex and lower semicontinuous functional, $\lambda \in \mathbb{R}$, and $y \in {\rm Y}$. Then, $- \partial^-{\sf E}(y)$ is a closed and convex subset of ${\rm T}_y{\rm Y}$. In particular, if this set is not empty, the optimization problem $$\inf_{v \in -\partial^-{\sf E}(y)} \vert v \vert_y$$ admits a unique minimizer. \end{proposition} \begin{proof} Recalling that ${\rm T}_y{\rm Y}$ is \Cat0, by Proposition \ref{prop:proj} the existence of a unique minimizer of the norm in $-\partial^-{\sf E}(y)$, i.e.\ of a unique metric projection of $0_y$ onto $-\partial^-{\sf E}(y)$, will follow once we show that $-\partial^-{\sf E}(y)$ is closed and convex. The fact that it is closed follows from the definition and the consideration, already stated in Proposition \ref{prop:hilbertine}, that the scalar product $\la\cdot,\cdot{\big\rangle}_y$ is continuous on ${\rm T}_y{\rm Y}$. The convexity follows from the inequality \[ -\la({\sf G}_{v_1}^{v_2})_t,w{\big\rangle}_y\leq -(1-t)\la v_1,w{\big\rangle}_y-t\la v_2,w{\big\rangle}_y\qquad\forall v_1,v_2,w\in{\rm T}_y{\rm Y},\ t\in[0,1], \] which is a direct consequence of \eqref{eq:prhom} and \eqref{eq:concav}. \end{proof} \subsection{Subdifferential formulation} Here we prove the main results of this note, namely Theorem \ref{thm:rightD} and Corollary \ref{cor:eqfor} below. We shall use the following preliminary result (notice that the fact that equality holds in \eqref{eq:sbn} will be obtained in \eqref{eq:s3}): \begin{proposition}\label{prop:slopebnd} Let ${\rm Y}$ be locally $\Cat\kappa$ and ${\sf E} \colon {\rm Y} \rightarrow \mathbb{R} \cup \{+\infty\}$ be a $\lambda$-convex and lower semicontinuous functional, $\lambda\in\mathbb{R}$.
Then, for every $y\in D(-\partial^-{\sf E})$, we have \begin{equation} \label{eq:sbn} |\partial^-{\sf E}|(y)\leq\inf_{v\in -\partial^-{\sf E}(y)} |v|_y. \end{equation} In particular, $D(-\partial^-{\sf E})\subset D(|\partial^-{\sf E}|) $. \end{proposition} \begin{proof} Let $v \in -\partial^-{\sf E} (y)$ and notice that $$ {\sf E}(y)-{\sf E}(z) +\frac{\lambda}{2}{\sf d}_{{\rm Y}}^2(y,z) \le \vert \la v,({\sf G}_y^z)'_0{\big\rangle}_y \vert \stackrel{\eqref{eq:CS}}\le \vert v\vert_y {\sf d}_{{\rm Y}}(y,z)\qquad\forall z\in{\rm Y}, $$ which in turn implies $$ \left(\frac{{\sf E}(y)-{\sf E}(z)}{{\sf d}_{{\rm Y}}(y,z)} +\frac{\lambda}{2}{\sf d}_{{\rm Y}}(y,z)\right)^+\le \vert v \vert_y \qquad \forall z \in {\rm Y},\ z\neq y.$$ Taking the supremum over $z \neq y$ and recalling Lemma \ref{lem:slopelsc} we conclude. \end{proof} We now come to the main result of this manuscript, namely the existence of the right derivative of the flow at every time. \begin{theorem}[Right derivatives of the flow]\label{thm:rightD} Let ${\rm Y}$ be locally $\Cat\kappa$ and ${\sf E} \colon {\rm Y} \rightarrow \mathbb{R} \cup \{+\infty\}$ be a $\lambda$-convex and lower semicontinuous functional, $\lambda\in\mathbb{R}$. Let $y \in \overline{D({\sf E})}$, and $(y_t)$ be the gradient flow trajectory starting from $y$ (recall Theorem \ref{thm:GF}). Then, for \emph{every} $t>0$, the right `difference quotient' $\frac{1}{h}({\sf G}_{y_t}^{y_{t+h}})'_0$ strongly converges to the element of minimal norm in $-\partial^-{\sf E}({y_t})\subset {\rm T}_{y_t}{\rm Y}$ (i.e.\ to ${\rm Pr}_{-\partial^-{\sf E}({y_t})}(0_{y_t})$) as $h$ goes to $0^+$. The same holds for $t=0$ if (and only if) we have $y \in D(|\partial^-{\sf E}|)$. Moreover, $D(-\partial^-{\sf E})=D(|\partial^-{\sf E}|)$ and the identity \begin{equation} \label{eq:s3} |\partial^-{\sf E}|(y)=\min_{v\in -\partial^-{\sf E}(y)}|v|_y\qquad\forall y\in{\rm Y} \end{equation} holds, where as customary the minimum over the empty set is declared to be $+\infty$. In particular, $D(-\partial^-{\sf E})$ is dense in $D({\sf E})$. \end{theorem} \begin{proof} By the semigroup property ensured by the uniqueness of gradient flow trajectories, and taking into account that $y_t\in D(|\partial^-{\sf E}|)$ for every $t>0$ (recall \eqref{eq:s5}), it suffices to show the claim for $t=0$ under the assumption $y \in D(|\partial^-{\sf E}|)$. Suppose $y$ is not a minimum point for ${\sf E}$, otherwise there is nothing to prove. In particular, $(ii)$ of Theorem \ref{thm:GFdef} ensures that $|\dot{y}_0|$ exists and is positive. Also, notice that the continuity at time $t=0$ of the gradient flow trajectory ensures that for $\epsilon>0$ sufficiently small we have $y_h\in B_{{\sf r}_y}(y)$ for every $h \in (0,\epsilon)$. In particular, for such $h$ the tangent vector $v_h:=\tfrac1h({\sf G}_{y}^{y_{h}})'_0\in {\rm T}_y{\rm Y}$ is well defined and the statement makes sense. Fix such an $\epsilon>0$. \noindent\underline{\textsc{Step 1}} For every $h\in(0,\epsilon)$ we have \begin{equation} \label{eq:s2} |v_h |_y=\frac{{\sf d}_{{\rm Y}}(y_h,y)}{h} \le \fint_0^h \vert \dot{y}_t\vert \, \d t \stackrel{\eqref{eq:metconv}}= \fint_0^h |\partial^-{\sf E}|(y_t) \, \d t \stackrel{\eqref{eq:regul}}\le |\partial^-{\sf E}| (y)\fint_0^h e^{-\lambda t} \, \d t.
\end{equation} Hence $\sup_{h\in(0,\epsilon)}|v_h|_y<\infty$, and therefore point $(iii)$ of Proposition \ref{prop:weakprop} gives that for every sequence $h_n\downarrow0$ there is a subsequence, not relabelled, such that $v_{h_n} \rightharpoonup v$ for some $v \in {\rm T}_y{\rm Y}$. Fix such a sequence and such a weak limit $v$. To conclude it is sufficient to prove that the convergence is strong and that $v$ is the element of minimal norm in $-\partial^-{\sf E}(y)$, as this in particular grants that the limit is independent of the particular subsequence chosen. \noindent\underline{\textsc{Step 2}} We claim that $v \in -\partial^-{\sf E}( {y})$. To see this, integrate \eqref{eq:evi} from $0$ to $h$ and divide by $h$ to obtain \[ \frac{{\sf d}_{{\rm Y}}^2(y_h,z)-{\sf d}_{{\rm Y}}^2(y,z)}{2h}+ \fint_0^h {\sf E}(y_t)+\frac{\lambda}{2}{\sf d}_{{\rm Y}}^2(y_t,z) \, \d t \leq {\sf E}(z)\qquad\forall z\in{\rm Y}, \ h\in(0,\epsilon). \] Letting $h=h_n\downarrow0$ and recalling that ${\sf E}$ is lower semicontinuous we deduce that \begin{equation} \label{eq:s1} \varliminf_{n\to\infty}\frac{{\sf d}_{{\rm Y}}^2(y_{h_n},z)-{\sf d}_{{\rm Y}}^2(y,z)}{2h_n}+{\sf E}(y)+\frac{\lambda}{2}{\sf d}_{{\rm Y}}^2(y,z)\leq {\sf E}(z)\qquad\forall z\in{\rm Y}. \end{equation} Next, fix $z \in {\rm Y}$, let $\gamma \in {\sf Geo}_y{\rm Y}$ with $\gamma_1=z$, denote $z_s:= \gamma_s$ and notice that, for $s$ sufficiently small, $z_s,y_{h_n} \in B_{{\sf r}_y}(y)$. Now \eqref{eq:d2variation} yields \[ \begin{split} \liminf_{n\rightarrow\infty }-\la v_{h_n} ,({\sf G}_y^{z_s})'_0{\big\rangle}_y &\le {\sf d}_{\rm Y}(y,z_s)\liminf_{n\rightarrow\infty}\frac{{\sf d}_{\rm Y}(y_{h_n},y)}{h_n}\frac{{\sf d}_{\rm Y}(y_{h_n},z_s)-{\sf d}_{\rm Y}(y,z_s)}{|\dot{y}_0|h_n}\\ &=\liminf_{n\rightarrow\infty}\frac{{\sf d}^2_{\rm Y}(y_{h_n},z_s)-{\sf d}^2_{\rm Y}(y,z_s)}{2h_n}, \end{split} \] having used the fact that $\liminf_na_nb_n = a\liminf_nb_n$ if $\lim_na_n=a>0$ and $(a_n),(b_n)\subset\mathbb{R}$ are bounded, and a chain rule argument in the last equality. Thus, recalling the weak upper semicontinuity of the scalar product proved in point $(vi)$ of Proposition \ref{prop:weakprop} we get \[ \varliminf_{n\to\infty}\frac{{\sf d}_{{\rm Y}}^2(y_{h_n},z_s)-{\sf d}_{{\rm Y}}^2(y,z_s)}{2h_n}\geq -\la v,({\sf G}_y^{z_s})'_0{\big\rangle}_{y}. \] Now, combining with \eqref{eq:s1} we get \[ {\sf E}(y) -\la v,({\sf G}_y^{z_s})'_0{\big\rangle}_{y} + \frac{\lambda}{2}{\sf d}^2_{\rm Y}(z_s,y) \le {\sf E}(z_s) \le (1-s){\sf E}(y) + s{\sf E}(z)-\frac{\lambda}{2}s(1-s){\sf d}^2_{\rm Y}(z,y). \] Finally, using that $({\sf G}_y^{z_s})'_0 = s\gamma'_0$, $ {\sf d}^2_{\rm Y}(z_s,y)=s^2{\sf d}_{\rm Y}^2(y,z)$ and \eqref{eq:prhom}, we can rearrange terms and take the limit as $s\downarrow 0$ to get \[ {\sf E}(y) -\la v,\gamma'_0{\big\rangle}_{y} + \frac{\lambda}{2}{\sf d}^2_{\rm Y}(z,y) \le {\sf E}(z)\qquad \text{ for every } \gamma \text{ geodesic from $y$ to $z$}.\] Given that $z$ was arbitrary, we conclude. \noindent\underline{\sc Step 3} Since $|\cdot|_y^2:{\rm T}_y{\rm Y}\to\mathbb{R}$ is convex and continuous, by point $(v)$ of Proposition \ref{prop:weakprop} we get \[ \begin{split} |v|^2_y\leq\varliminf_{n\to\infty}|v_{h_n}|^2_y\leq\varlimsup_{n\to\infty}|v_{h_n} |^2_y\stackrel{\eqref{eq:s2}}\leq |\partial^-{\sf E}|^2(y)\stackrel{\eqref{eq:sbn}}\leq\inf_{w\in-\partial^-{\sf E}(y)}|w|^2_y\leq |v|^2_y, \end{split} \] and thus all the inequalities must be equalities.
This proves at once the strong convergence of $(v_{h_n})$ to $v$ (by the convergence of the norms and point $(ii)$ of Proposition \ref{prop:weakprop}) and that $v$ is the element of minimal norm in $-\partial^-{\sf E}(y)$. The argument also proves that if $y\in D(|\partial^-{\sf E}|)$, then $y\in D(-\partial^-{\sf E})$ and in this case the equality in \eqref{eq:s3} holds. Taking into account Proposition \ref{prop:slopebnd} we conclude that $D(|\partial^-{\sf E}|)=D(-\partial^-{\sf E})$ and that \eqref{eq:s3} holds for every $y\in{\rm Y}$, as desired. The last claim then follows from the existence of gradient flow trajectories starting from points in $D({\sf E})$ (Theorem \ref{thm:GF}) and \eqref{eq:metconv}. \end{proof} As a direct consequence of the above result, we see that we can characterize gradient flow trajectories by means of the classical differential inclusion $y_t'\in-\partial^-{\sf E}(y_t)$ which can be used to define such an evolution in the Hilbert setting: \begin{corollary}\label{cor:eqfor} Let ${\rm Y}$ be locally $\Cat\kappa$ and ${\sf E} \colon {\rm Y} \rightarrow \mathbb{R} \cup \{+\infty\}$ be a $\lambda$-convex and lower semicontinuous functional, $\lambda\in\mathbb{R}$. Let $y \in \overline{D({\sf E})}$, and $(0,\infty)\ni t\mapsto y_t\in D({\sf E})$ be a locally absolutely continuous curve. Then, the following are equivalent: \begin{itemize} \item[(i)] $(y_t)$ is a gradient flow trajectory for ${\sf E}$ starting from $y$, i.e.\ it satisfies any of the three equivalent conditions in Theorem \ref{thm:GFdef}. \item[(ii)] The right derivative $y'^+_t$ exists for every $t>0$ and \[ \left\{\begin{array}{l} y'^+_t\in-\partial^-{\sf E}(y_t)\quad\forall t>0\quad \text{ and is the element of minimal norm},\\ \displaystyle{\lim_{t\downarrow0}y_t}=y. \end{array} \right. \] If $y\in D(|\partial^-{\sf E}|)=D(-\partial^-{\sf E})$ then the above holds also at $t=0$. \item[(iii)] It holds \[ \left\{\begin{array}{l} y'^+_t\in-\partial^-{\sf E}(y_t)\quad a.e.\ t>0,\\ \displaystyle{\lim_{t\downarrow0}y_t}=y. \end{array} \right. \] \end{itemize} \end{corollary} \begin{proof} The implication $(i)\Rightarrow(ii)$ is proved in Theorem \ref{thm:rightD} above and the one $(ii)\Rightarrow(iii)$ is obvious. The fact that $(iii)$ implies $(i)$ (in the form of the Evolution Variational Inequality) is a direct consequence of Proposition \ref{cor:derd2} (applied in a $\Cat\kappa$ neighbourhood of $y_t$, in combination with arguments similar to those outlined in the proof of Theorem \ref{thm:GFdef} to cover the case of a locally $\Cat\kappa$-space) and of the definition of $-\partial^-{\sf E}$. \end{proof} \begin{remark}{\rm In the setting of Alexandrov geometry it is more customary to study the gradient flow of semi\emph{concave} functions ${\sf F}$, thus studying (a properly interpreted version of) $y_t'\in\partial^+{\sf F}(y_t)$. Let ${\sf E}$ be semiconvex on a $\Cat\kappa$-space ${\rm Y}$ and put ${\sf F}:=-{\sf E}$. Then it is clear that the slope $|\partial^-{\sf E}|$ as we defined it coincides with the \emph{absolute gradient} $|\nabla{\sf F}|$ as defined in \cite[Definition 4.1]{Lyt05}; therefore, taking into account the characterization \eqref{eq:metconv}, we see that up to a different choice of parametrization, our notion of gradient flow trajectory coincides with the one of gradient-like curve studied in \cite[Definition 6.1]{Lyt05}.
The property $\frac{\d}{\d t^+}{\sf F}(y_t)=-\frac{\d}{\d t^+}{\sf E}(y_t)=|\partial^-{\sf E}|^2(y_t)=|v_t|^2_{y_t}$, where $v_t\in-\partial^-{\sf E}(y_t)$ is the element of minimal norm, together with the existence of the right derivative of $y_t$ and the characterization \eqref{eq:subder} show that the element of minimal norm in $-\partial^-{\sf E}(y)$ coincides with $\nabla {\sf F}(y)$ as defined in \cite[Definition 11.4.1]{AKP19} on spaces with curvature bounded from \emph{below}. This shows that our `differential' perspective on gradient flows is compatible with the one studied in \cite{AKP19} on CBB spaces. }\fr\end{remark} \section{Laplacian of \Cat0-valued maps}\label{Sect4} \subsection{Pullback geometric tangent bundle} \label{Sec2.uTGY} \subsubsection{The general non-separable case} For the purpose of this manuscript, a \emph{metric measure space} $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ is always intended to be given by a complete and separable metric space $({\rm X},{\sf d})$ equipped with a non-negative and non-zero Borel measure giving finite mass to bounded sets. In some circumstances we shall add further assumptions on ${\rm X}$, typically in the form of an $\mathrm{RCD}(K,N)$ condition. Thus let us fix a pointed \Cat0-space $({\rm Y},{\sf d}_{\rm Y},\bar y)$, a metric measure space $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ and an open subset $\Omega\subset{\rm X}$. We recall that the space $L^0(\Omega,{\rm Y})$ is the collection of all Borel maps $u\colon\Omega\to{\rm Y}$ which are essentially separably valued (i.e.\ for some separable subset $\tilde{\rm Y}\subset{\rm Y}$ we have ${\mbox{\boldmath$m$}}(u^{-1}({\rm Y}\setminus\tilde{\rm Y}))=0$), where two maps agreeing ${\mbox{\boldmath$m$}}$-a.e.\ are identified. Then $L^2(\Omega,{\rm Y}_{\bar y})\subset L^0(\Omega,{\rm Y})$ is the collection of those (equivalence classes of) maps $u$ such that $\int_\Omega{\sf d}^2_{\rm Y}(u(x),\bar y)\,\d{\mbox{\boldmath$m$}}(x)<\infty$. The space $L^2(\Omega,{\rm Y}_{\bar y})$ comes naturally with the distance \[ {\sf d}_{L^2}^2(u,v):=\int_\Omega{\sf d}^2_{\rm Y}(u(x),v(x))\,\d{\mbox{\boldmath$m$}}(x) \] and by standard means one sees that with such a distance the space is complete and that finite-ranged maps are dense. Moreover, for $u,v\in L^2(\Omega,{\rm Y}_{\bar y})$ a direct computation shows that $t\mapsto ({\sf G}^v_u)_t\in L^2(\Omega,{\rm Y}_{\bar y})$, where $({\sf G}^v_u)_t(x):=({\sf G}^{v(x)}_{u(x)})_t$, is a geodesic from $u$ to $v$ (the fact that $({\sf G}^v_u)_t\colon\Omega\to{\rm Y}$ is Borel follows from the continuous dependence of the unique geodesics in ${\rm Y}$ on their endpoints). Also, by appealing to the equivalent characterization \eqref{eq:cat0def} of \Cat0-spaces, the computation \[ \begin{split} {\sf d}_{L^2}^2( ({\sf G}^v_u)_t,w)&=\int {\sf d}^2_{\rm Y}(({\sf G}^{v(x)}_{u(x)})_t,w(x))\,\d{\mbox{\boldmath$m$}}(x)\\ &\stackrel{\eqref{eq:cat0def}}\leq \int (1-t){\sf d}_{\rm Y}^2(u(x),w(x))+t{\sf d}_{\rm Y}^2(v(x),w(x))-t(1-t){\sf d}_{\rm Y}^2(u(x),v(x))\,\d{\mbox{\boldmath$m$}}(x)\\ &=(1-t){\sf d}_{L^2}^2( u,w)+t{\sf d}_{L^2}^2( v,w)-t(1-t){\sf d}^2_{L^2}(u,v), \end{split} \] valid for any $w\in L^2(\Omega,{\rm Y}_{\bar y})$ and every $t\in[0,1]$, reveals that $L^2(\Omega,{\rm Y}_{\bar y})$ is a \Cat0-space as well and thus ${\sf G}^v_u$ is the only geodesic from $u$ to $v$.
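As a consistency check, it may be worth observing what this construction gives in the simplest situation (the following assumptions are made for illustration only): if ${\rm Y}=H$ is a Hilbert space and $\bar y=0$, then $L^2(\Omega,H_{0})$ is the classical Hilbert space $L^2(\Omega,{\mbox{\boldmath$m$}};H)$, geodesics are the affine interpolations $({\sf G}^v_u)_t=(1-t)u+tv$ and the above inequality is in fact an identity, namely \[ \big\|(1-t)u+tv-w\big\|_{L^2}^2=(1-t)\|u-w\|_{L^2}^2+t\|v-w\|_{L^2}^2-t(1-t)\|u-v\|_{L^2}^2, \] which for $t=\frac12$ is precisely the parallelogram identity.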
In particular, given $u\in L^2(\Omega,{\rm Y}_{\bar y})$ we have a well defined tangent cone ${\rm T}_u L^2(\Omega,{\rm Y}_{\bar y})$ containing what we may think of as the set of `infinitesimal variations' of $u$. Intuitively, these variations should correspond to a collection, for ${\mbox{\boldmath$m$}}$-a.e.\ $x\in\Omega$, of a variation of $u(x)\in{\rm Y}$, i.e.\ to a collection of elements of ${\rm T}_{u(x)}{\rm Y}$. We now want to make this intuition rigorous and, due to the fact that \Cat0-spaces are typically studied in non-separable environments, we first discuss this case, postponing to the next sections the separable case and its relations with the Borel structure on ${\rm T}_G{\rm Y}$ seen in Section \ref{Sect2.geo}. Fix $u \in L^2(\Omega,{\rm Y}_{\bar{y}})$ and a Borel representative of it, which by abuse of notation we shall continue to denote by $u$. By $u^*{\rm T}_G{\rm Y}$ we intend the set \[ u^*{\rm T}_G{\rm Y}:=\big\{(x,y,v)\colon x\in\Omega,\ y=u(x),\ v\in {\rm T}_y{\rm Y}\big\}\subset {\rm X}\times{\rm T}_G{\rm Y}. \] \begin{figure}[!h] \centering \includegraphics[scale=.7]{uTGY_new.pdf} \caption{Pullback geometric tangent bundle $u^*{\rm T}_G{\rm Y}$ via $u \colon {\rm X}\to {\rm Y}$.} \label{fig:uTGY} \end{figure} A section of $u^*{\rm T}_G{\rm Y}$ is a map ${\sf S}\colon\Omega\to u^*{\rm T}_G{\rm Y}$ such that $\pi_{\rm X}({\sf S}(x))=x$, where $\pi_{\rm X}\colon u^*{\rm T}_G{\rm Y}\to{\rm X}$ is the canonical projection. Given such a section ${\sf S}$ we write ${\sf S}(x)=(x,u(x),{\sf S}_x)$ for any $x\in\Omega$. We shall denote by ${\sf 0}$ the zero section defined by ${\sf 0}_x:=0_{u(x)}\in {\rm T}_{u(x)}{\rm Y}$. Then, given another $v\in L^2(\Omega,{\rm Y}_{\bar{y}})$ and a Borel representative of it, still denoted by $v$, and $\alpha\geq 0$, we can consider the section ${\sf S}$ of $u^*{\rm T}_G{\rm Y}$ given by $x\mapsto (x,u(x),\alpha({\sf G}_{u(x)}^{v(x)})'_0)$. We then have the following simple and useful lemma. \begin{lemma}\label{le:l2s} Let $({\rm Y},{\sf d}_{\rm Y},\bar y)$ be a pointed \Cat0-space, $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ a metric measure space, $\Omega\subset{\rm X}$ an open subset, $u,v^1,v^2\colon \Omega\to{\rm Y}$ Borel representatives of maps in $L^2(\Omega,{\rm Y}_{\bar y})$. Also, let $\alpha^1,\alpha^2\in\mathbb{R}^+$ and consider the sections ${\sf S}^i$ of $u^*{\rm T}_G{\rm Y}$ given by ${\sf S}^i_x:=\alpha^i({\sf G}_{u(x)}^{v^i(x)})'_0$, $i=1,2$. Then the maps $\Omega\ni x\mapsto |{\sf S}_x^1|_{u(x)},{\sf d}_{u(x)}({\sf S}_x^1,{\sf S}_x^2),\la {\sf S}_x^1,{\sf S}_x^2{\big\rangle}_{u(x)}$ are Borel. \end{lemma} \begin{proof} It is sufficient to prove that $\Omega\ni x\mapsto {\sf d}_{u(x)}({\sf S}_x^1,{\sf S}_x^2)\in\mathbb{R}$ is Borel, as then the other Borel regularities will follow. We have already noticed that the maps $x\mapsto ({\sf G}_u^{v^i})_{\alpha^it}(x)\in{\rm Y}$, $i=1,2$, are Borel, hence so is the map \[ x\mapsto\frac{{\sf d}_{\rm Y}\big(({\sf G}_u^{v^1})_{\alpha^1t}(x),({\sf G}_u^{v^2})_{\alpha^2t}(x)\big)}{t} \] for any $0<t\ll1$. Since these maps pointwise converge to $ x\mapsto {\sf d}_{u(x)}({\sf S}_x^1,{\sf S}_x^2)$ as $t\downarrow0$, the claim follows. \end{proof} In particular, for ${\sf S}^1,{\sf S}^2$ as in the above statement, the quantity \begin{equation} \label{eq:defdl2} {\sf d}_{L^2}\big({\sf S}^1,{\sf S}^2\big):=\sqrt{\int_\Omega {\sf d}^2_{u(x)}({\sf S}^1_x,{\sf S}^2_x)\,\d{\mbox{\boldmath$m$}}(x)} \end{equation} is well defined.
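For orientation, notice that in the illustrative Hilbert case ${\rm Y}=H$ (where ${\rm T}_yH$ can be identified with $H$ itself and $({\sf G}_y^z)'_0=z-y$), the sections of the previous lemma read ${\sf S}^i_x=\alpha^i\big(v^i(x)-u(x)\big)$ and the quantity \eqref{eq:defdl2} reduces to the usual $L^2$ distance: \[ {\sf d}^2_{L^2}\big({\sf S}^1,{\sf S}^2\big)=\int_\Omega\big|\alpha^1\big(v^1(x)-u(x)\big)-\alpha^2\big(v^2(x)-u(x)\big)\big|^2\,\d{\mbox{\boldmath$m$}}(x). \]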
Standard arguments then show that ${\sf d}_{L^2}$ is symmetric, satisfies the triangle inequality and ${\sf d}_{L^2}({\sf S},{\sf S})=0$ (but it might happen that ${\sf d}_{L^2}({\sf S}^1,{\sf S}^2)=0$ for ${\sf S}^1\neq{\sf S}^2$ and that ${\sf d}_{L^2}({\sf S}^1,{\sf S}^2)=+\infty$). We then give the following definitions: \begin{definition}[$\mathcal L^2$ sections of $u^*{\rm T}_G{\rm Y}$]\label{def:l2se1} Let $({\rm Y},{\sf d}_{\rm Y},\bar y)$ be a pointed \Cat0-space, $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ a metric measure space, $\Omega\subset{\rm X}$ an open subset and $u$ a Borel representative of a map in $L^2(\Omega,{\rm Y}_{\bar y})$. Then, $\mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ is the collection of sections ${\sf S}$ of $u^*{\rm T}_G{\rm Y}$ such that: \begin{itemize} \item[i)] For any $\alpha\in\mathbb{R}^+$ and $v\colon \Omega\to{\rm Y}$ Borel and essentially separably valued we have that $x\mapsto {\sf d}_{u(x)}({\sf S}_x,\alpha({\sf G}_{u(x)}^{v(x)})'_0)$ is a Borel function. \item[ii)] There is a sequence $(\alpha_n)\subset\mathbb{R}^+$ and maps $v_n\colon \Omega\to{\rm Y}$ Borel and essentially separably valued such that for the sections ${\sf S}^n$ given by ${\sf S}^n_x:=\alpha_n({\sf G}_{u(x)}^{v_n(x)})'_0$ we have \begin{equation} \label{eq:hl2} \begin{split} \sup_{n\in\mathbb{N}}{\sf d}_{L^2}({\sf S}^n,{\sf 0})&<\infty,\\ \lim_{n\to\infty}{\sf d}_{u(x)}({\sf S}^n_x,{\sf S}_x)&= 0\qquad\forall x\in\Omega. \end{split} \end{equation} \end{itemize} \end{definition} It is clear from the definitions that for ${\sf S}^1,{\sf S}^2\in \mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ the map $x\mapsto {\sf d}_{u(x)}({\sf S}^1_x,{\sf S}^2_x)$ is Borel and $L^2({\mbox{\boldmath$m$}}\restr\Omega)$-integrable, therefore ${\sf d}_{L^2}({\sf S}^1,{\sf S}^2)$ is well defined by \eqref{eq:defdl2} and finite. \begin{definition}[$ L^2$ sections of $u^*{\rm T}_G{\rm Y}$]\label{def:l2se2} Let $({\rm Y},{\sf d}_{\rm Y},\bar y)$ be a pointed \Cat0-space, $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ a metric measure space, $\Omega\subset{\rm X}$ an open subset and $u$ a Borel representative of a map in $L^2(\Omega,{\rm Y}_{\bar y})$. We define $ L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ as the quotient of $\mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ with respect to the relation ${\sf S}^1\sim{\sf S}^2$ if ${\sf d}_{L^2}({\sf S}^1,{\sf S}^2)=0$. \end{definition} It is obvious that the relation indicated in the previous definition is an equivalence relation, so that $ L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ is well defined. Also, the quantity ${\sf d}_{L^2}$ passes to the quotient and defines a distance, still denoted by ${\sf d}_{L^2}$, on $ L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ and standard considerations show that the resulting object is a complete metric space. Now let $\tilde u\colon \Omega\to{\rm Y}$ be Borel and ${\mbox{\boldmath$m$}}$-a.e.\ equal to $u$ and consider the identification $I\colon \mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)\to \mathcal L^2(\tilde u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ sending ${\sf S}$ to the section $I({\sf S})$ defined by \[ I({\sf S})_x:=\left\{\begin{array}{ll} {\sf S}_x,&\qquad\text{ if }u(x)=\tilde u(x),\\ 0_{\tilde u (x)},&\qquad\text{ if }u(x)\neq \tilde u(x). \end{array}\right. 
\] It is clear that this map passes to the quotients and thus induces a map, still denoted by $I$, from $ L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ to $L^2(\tilde u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. Also, the fact that $u=\tilde u$ ${\mbox{\boldmath$m$}}$-a.e.\ trivially implies that such $I$ is an isometry. Thanks to these considerations, it makes sense to consider the space $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ for $u\in L^2(\Omega,{\rm Y}_{\bar y})$, i.e.\ even when $u$ is only given up to ${\mbox{\boldmath$m$}}$-a.e.\ equality: it is just sufficient to pick any Borel representative of $u$, consider the corresponding space of $L^2$-sections up to ${\mbox{\boldmath$m$}}$-a.e.\ equality and notice that such a space does not depend on the representative of $u$ chosen. The basic properties of the space $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ are collected in the following statement. \begin{proposition}[Properties of $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$]\label{prop:propl2} Let $({\rm Y},{\sf d}_{\rm Y},\bar y)$ be a pointed \Cat0-space, $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ a metric measure space, $\Omega\subset{\rm X}$ an open subset and $u\in L^2(\Omega,{\rm Y}_{\bar y})$. Then: \begin{itemize} \item[$(i)$] For every ${\sf S}^1,{\sf S}^2\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ the functions $\Omega\ni x\mapsto {\sf d}_{u(x)}({\sf S}^1_x,{\sf S}^2_x), |{\sf S}^1_x|_{u(x)},\la{\sf S}^1_x,{\sf S}^2_x{\big\rangle}_{u(x)}$ are (equivalence classes up to ${\mbox{\boldmath$m$}}$-a.e.\ equality of) Borel functions. \item[$(ii)$] For every ${\sf S}^1,{\sf S}^2\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ the section ${\sf S}^1\oplus{\sf S^2}$ given by the (equivalence class of the) map $x\mapsto (x,u(x),{\sf S}^1_x\oplus{\sf S}^2_x)$ belongs to $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. \item[$(iii)$] For every ${\sf S}\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ and $f\in L^\infty(\Omega)$ the section $f{\sf S}$ given by the (equivalence class of the) map $x\mapsto (x,u(x),f(x){\sf S}_x)$ belongs to $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. \end{itemize} \end{proposition} \begin{proof} The Borel regularity of $ x\mapsto {\sf d}_{u(x)}({\sf S}^1_x,{\sf S}^2_x)$ has already been noticed. The one of $|{\sf S}^1_x|_{u(x)}$ then follows from the fact that the zero section ${\sf 0}$ belongs to $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$, and thus the one of $\la{\sf S}^1_x,{\sf S}^2_x{\big\rangle}_{u(x)}$ follows by the definition of the scalar product. We pass to $(ii)$ and first consider the case of ${\sf S}^1,{\sf S}^2\in \mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ of the form ${\sf S}^1_x=\alpha({\sf G}_{u(x)}^{v(x)})'_0$ and ${\sf S}^2_x=\beta({\sf G}_{u(x)}^{w(x)})'_0$ for $v,w\colon \Omega\to{\rm Y}$ Borel and essentially separably valued and $\alpha,\beta\geq 0$. Put $T:=\min\{\alpha^{-1},\beta^{-1}\}\in(0,\infty]$ and for $t\in(0,T)$ put $v_t:=({\sf G}_u^v)_{\alpha t}$, $w_t:=({\sf G}_u^w)_{\beta t}$ and let $m_t(x)$ be the midpoint of $v_t(x),w_t(x)$ for every $x\in\Omega$. From the continuity of the `midpoint' operation and the triangle inequality it easily follows that $x\mapsto m_t(x)$ is Borel, essentially separably valued and in $L^2(\Omega,{\rm Y}_{\bar y})$.
Then, define the section ${\sf M}_t$ as ${\sf M}_{t,x}:=\frac1t({\sf G}_{u(x)}^{m_t(x)})'_0$ and recall \eqref{eq:sumexpl} to see that ${\sf M}_{t,x}\to \frac12({\sf S}^1\oplus {\sf S}^2)_x$ in ${\rm T}_{u(x)}{\rm Y}$ as $t\downarrow0$ for every $x\in\Omega$: this proves that $\frac12({\sf S}^1\oplus {\sf S}^2)$ satisfies the requirement $(i)$ in Definition \ref{def:l2se1}. The same convergence together with the bound \[ \begin{split} |{\sf M}_{t,\cdot}|_{u(\cdot)}&\leq\tfrac1t{\sf d}_{\rm Y}(u,m_{t}) \leq \tfrac2t\big({\sf d}_{\rm Y}(u,v_{t})+{\sf d}_{\rm Y}(u,w_{t}) \big)\leq 2\big(\alpha{\sf d}_{\rm Y}(u,v)+\beta{\sf d}_{\rm Y}(u,w) \big)\quad\text{on }\Omega, \end{split} \] valid for every $t\in(0,T)$, shows that $\frac12({\sf S}^1\oplus {\sf S}^2)$ also satisfies the requirement $(ii)$ in Definition \ref{def:l2se1}. Now, the fact that $\frac12({\sf S}^1\oplus {\sf S}^2)\in \mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ for generic ${\sf S}^1,{\sf S}^2\in \mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ follows by approximation (recall point $(ii)$ in Definition \ref{def:l2se1}) and the continuity of the `sum' operation noticed in Proposition \ref{prop:hilbertine}; the analogous properties for elements of $ L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ then follow trivially. Finally, the fact that $\frac12({\sf S}^1\oplus {\sf S}^2)\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ implies ${\sf S}^1\oplus {\sf S}^2\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ is trivial from the definitions (see also the arguments below). For $(iii)$ we notice that it is sufficient to prove that $f{\sf S}$ is in $\mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ whenever $f\colon \Omega\to\mathbb{R}$ is Borel and bounded and ${\sf S}\in\mathcal L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. In this case the fact that $f{\sf S}$ satisfies the requirement $(i)$ in Definition \ref{def:l2se1} is obvious. For $(ii)$ we consider sections ${\sf S}^n_x= \alpha_n({\sf G}_{u(x)}^{v_n(x)})'_0$ for which \eqref{eq:hl2} holds and put $\tilde {\sf S}^n_x:=\alpha_n \|f\|_{L^\infty}({\sf G}_{u(x)}^{w_n(x)})'_0$, where $w_n(x):=({\sf G}_{u(x)}^{v_n(x)})_{f(x)/\|f\|_{L^\infty}}$. The fact that the $w_n$'s are Borel representatives of maps in $L^2(\Omega,{\rm Y}_{\bar y})$ can be easily checked from the definition, while the fact that \eqref{eq:hl2} holds for $f{\sf S}$ and $(\tilde{\sf S}^n)$ is obvious. \end{proof} Let us now come back to the initial discussion and, for given $u\in L^2(\Omega,{\rm Y}_{\bar y})$, let us define the map $\iota\colon {\sf Geo}_uL^2(\Omega,{\rm Y}_{\bar y})\to L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ as follows. For $v\in L^2(\Omega,{\rm Y}_{\bar y})$ and $\alpha\geq 0$ we send the geodesic $t\mapsto ({\sf G}_u^v)_{\alpha t}$ to the (equivalence class of the) section given by $x\mapsto \alpha({\sf G}^{v(x)}_{u(x)})'_{0}$. The relation between ${\rm T}_uL^2(\Omega,{\rm Y}_{\bar y})$ and $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ is then described by the following result: \begin{proposition}[$L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ and ${\rm T}_uL^2(\Omega,{\rm Y}_{\bar y})$]\label{prop:l2gen} Let $({\rm Y},{\sf d}_{\rm Y},\bar y)$ be a pointed \Cat0-space, $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ a metric measure space, $\Omega\subset{\rm X}$ an open subset and $u\in L^2(\Omega,{\rm Y}_{\bar y})$.
Then, the map $\iota\colon {\sf Geo}_uL^2(\Omega,{\rm Y}_{\bar y})\to L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ passes to the quotient and induces a map, still denoted $\iota$, from ${\sf Geo}_uL^2(\Omega,{\rm Y}_{\bar y})/\sim$ to $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ that can be uniquely extended by continuity to a bijective isometry, again denoted $\iota$, from ${\rm T}_uL^2(\Omega,{\rm Y}_{\bar y})$ to $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. Moreover, the isometry $\iota$ so defined respects the operations on the tangent cones, i.e. \begin{subequations} \begin{align} \label{eq:e1} |{\sf v}|_u^2&=\int_\Omega |\iota({\sf v})_x|^2_{u(x)}\,\d{\mbox{\boldmath$m$}}(x),\\ \label{eq:e2} \la {\sf v}_1,{\sf v}_2{\big\rangle}_u&=\int_\Omega \la \iota({\sf v}_1)_x,\iota({\sf v}_2)_x{\big\rangle}_{u(x)}\,\d{\mbox{\boldmath$m$}}(x),\\ \label{eq:e3} {\sf d}^2_u( {\sf v}_1,{\sf v}_2)&=\int_\Omega {\sf d}^2_{u(x)}( \iota({\sf v}_1)_x,\iota({\sf v}_2)_x)\,\d{\mbox{\boldmath$m$}}(x),\\ \label{eq:e4} \iota(\lambda {\sf v})&=\lambda\iota({\sf v}),\\ \label{eq:e5} \iota({\sf v}_1\oplus {\sf v}_2)&=\iota({\sf v}_1)\oplus\iota({\sf v}_2), \end{align} \end{subequations} for any ${\sf v},{\sf v}_1,{\sf v}_2\in {\rm T}_uL^2(\Omega,{\rm Y}_{\bar y})$ and $\lambda\in\mathbb{R}^+$. \end{proposition} \begin{proof} Let $v^1,v^2\in L^2(\Omega,{\rm Y}_{\bar y})$ and $\alpha_1,\alpha_2\geq 0$, and consider the sections ${\sf S}^1,{\sf S}^2\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ given by ${\sf S}^i:=\iota(\alpha_i({\sf G}_{u}^{v^i})'_0)$. Notice that \[ \begin{split} {\sf d}^2_u\big( \alpha_1({\sf G}_u^{v^1})'_0,\alpha_2({\sf G}_u^{v^2})'_0\big)&=\lim_{t\downarrow0}\frac{{\sf d}_{L^2}^2\big(({\sf G}_u^{v^1})_{\alpha_1t},({\sf G}_u^{v^2})_{\alpha_2t}\big)}{t^2}\\ &=\lim_{t\downarrow0}\int_\Omega\frac{{\sf d}^2_{\rm Y}\big(({\sf G}_{u(x)}^{v^1(x)})_{\alpha_1t},({\sf G}_{u(x)}^{v^2(x)})_{\alpha_2t}\big)}{t^2}\,\d{\mbox{\boldmath$m$}}(x)\\ &=\int_\Omega\lim_{t\downarrow0}\frac{{\sf d}^2_{\rm Y}\big(({\sf G}_{u(x)}^{v^1(x)})_{\alpha_1t},({\sf G}_{u(x)}^{v^2(x)})_{\alpha_2t}\big)}{t^2}\,\d{\mbox{\boldmath$m$}}(x)\\ &=\int_\Omega {\sf d}^2_{u(x)}\big({\sf S}^1_x,{\sf S}^2_x\big)\,\d{\mbox{\boldmath$m$}}(x), \end{split} \] where, in passing the limit inside the integral, we used the dominated convergence theorem and the fact that the integrand is non-negative and non-decreasing in $t$ (recall \eqref{eq:mondis}). This proves at once that $\iota$ passes to the quotient to a map on ${\sf Geo}_uL^2(\Omega,{\rm Y}_{\bar y})/\sim$ and that the map so induced is an isometry, which can therefore be extended to a map from ${\rm T}_uL^2(\Omega,{\rm Y}_{\bar y})$ to $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. The fact that such an extension is surjective follows from an approximation argument based on the requirement $(ii)$ in Definition \ref{def:l2se1}. Now observe that \eqref{eq:e3} has already been proved, as it is precisely the statement that $\iota$ is an isometry. Then \eqref{eq:e1} and \eqref{eq:e2} follow as well. Also, \eqref{eq:e4} is obvious by definition and then \eqref{eq:e5} follows from \eqref{eq:e3}, \eqref{eq:e4} and the metric characterization of the midpoint of $x,y$ as the unique point $m$ such that ${\sf d}^2(x,m)+{\sf d}^2(m,y)={\sf d}^2(x,y)/2$.
\end{proof} \subsubsection{The separable setting} In this section we assume instead that ${\rm Y}$ is a separable and locally $\Cat\kappa$ space and we study the Borel structure of the pullback $u^*{\rm T}_G{\rm Y}$ of the geometric tangent bundle of ${\rm Y}$. We shall then see, in the special case of ${\rm Y}$ being separable and $\Cat0$, how such Borel structure relates to the space $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ studied in the previous section. Thus let ${\rm Y}$ be separable and locally $\Cat\kappa$, $({\rm X},{\sf d})$ be a complete and separable metric space and $\Omega\subset{\rm X}$ be open. As before, for a given Borel map $u\colon \Omega\to{\rm Y}$, the pullback geometric tangent bundle $u^*{\rm T}_G{\rm Y}$ is defined as \[ u^*{\rm T}_G{\rm Y}:=\big\{(x,y,v)\colon x\in\Omega,\ y=u(x),\ v\in{\rm T}_y{\rm Y}\big\}\subset {\rm X}\times{\rm T}_G{\rm Y} \] and a section of $u^*{\rm T}_G{\rm Y}$ is a map ${\sf S} \colon \Omega \rightarrow u^*{\rm T}_G{\rm Y}$ such that ${\sf S}_x \in {\rm T}_{u(x)}{\rm Y}$ for every $x \in \Omega$. Now equip $u^*{\rm T}_G{\rm Y}\subset {\rm X}\times{\rm T}_G{\rm Y}$ with the restriction of the product $\sigma$-algebra ${\ensuremath{\mathcal B}}({\rm X})\otimes {\ensuremath{\mathcal B}}({\rm T}_G{\rm Y})$, which, with abuse of terminology, we shall call Borel $\sigma$-algebra on $u^*{\rm T}_G{\rm Y}$ and denote ${\ensuremath{\mathcal B}}(u^*{\rm T}_G{\rm Y})$. In particular, we shall say that a section is Borel if it is measurable w.r.t.\ ${\ensuremath{\mathcal B}}({\rm X})$ and ${\ensuremath{\mathcal B}}(u^*{\rm T}_G{\rm Y})$. A section is \emph{simple} provided there are a Borel partition $(E_n)$ of $\Omega$, $(\alpha_n)\subset \mathbb{R}^+$ and points $(y_n)\subset{\rm Y}$ s.t.\ $y_n \in B_{{\sf r}_{u(x)}}(u(x))$ for every $x \in E_n$ and ${\sf S}\restr{E_n}=\alpha_n({\sf G}_{u(\cdot)}^{y_n})'_0$. We shall formally denote such a section by $\sum_n{\raise.3ex\hbox{$\chi$}}_{E_n}\alpha_nu^*({\sf G}_{\cdot}^{y_n})'_0$. Notice that the restriction of such a section to $E_n$ coincides with the (graph of) the composition of $u$ with the simple section of ${\rm T}_G{\rm Y}$ given by $y\mapsto(y,\alpha_n({\sf G}_{y}^{y_n})'_0)$. In particular, recalling Proposition \ref{prop:sb} we see that simple sections of $u^*{\rm T}_G{\rm Y}$ are Borel. Moreover, they are dense in the space of Borel sections: \begin{lemma}[Density of simple sections]\label{le:denssimp} Let $({\rm X},{\sf d})$ be a metric space, $({\rm Y},{\sf d}_{\rm Y})$ a separable and locally $\Cat\kappa$ space, $\Omega\subset{\rm X}$ an open subset and $u\colon \Omega\to{\rm Y}$ Borel. Let ${\sf S}\colon \Omega\to u^*{\rm T}_G{\rm Y}$ be a Borel section of $u^*{\rm T}_G{\rm Y}$ and $\eps>0$. Then, there is a simple section ${\sf T}$ such that ${\sf d}_{u(x)}({\sf S}_x,{\sf T}_x)<\eps$ for every $x\in\Omega$. \end{lemma} \begin{proof} We can reduce the proof to the case of ${\rm Y}$ being $\Cat\kappa$ by using the Lindel\"of property of ${\rm Y}$ and the coverings made by $B_{{\sf r}_y/2}(y)$. Doing so, we achieve uniqueness of geodesics between any couple of points. Let $D\subset{\rm Y}$ be countable and dense and let $(y_n,\alpha_n)$ be an enumeration of $D\times\mathbb{Q}^+$. Then for every $n\in\mathbb{N}$ consider the function $F_{n}\colon {\rm T}_G{\rm Y}\to\mathbb{R}$ given by \[ F_{n}(y,v):={\sf d}_{y}(v,\alpha_n({\sf G}_y^{y_n})'_0)=\sqrt{|v|_y^2+|\alpha_n|^2{\sf d}^2(y,y_n)-2\la v,\alpha_n({\sf G}_y^{y_n})'_0{\big\rangle}_y}.
\] The defining requirements of ${\ensuremath{\mathcal B}}({\rm T}_G{\rm Y})$ and the property \eqref{eq:normbor} ensure that $F_{n}$ is Borel. Hence so is the map $\tilde F_n\colon u^*{\rm T}_G{\rm Y}\to\mathbb{R}$ defined as $\tilde F_n:=F_n\circ \pi_{{\rm T}_G{\rm Y}}$, where $\pi_{{\rm T}_G{\rm Y}}\colon u^*{\rm T}_G{\rm Y}\subset{\rm X}\times {\rm T}_G{\rm Y}\to {\rm T}_G{\rm Y}$ is the canonical projection. Thus, given a Borel section ${\sf S}$ of $u^*{\rm T}_G{\rm Y}$, the map $\tilde F_n\circ {\sf S}\colon \Omega\to\mathbb{R}$ is Borel and, for given $\eps>0$, so is the set $\tilde E_n:=(\tilde F_n\circ {\sf S})^{-1}([0,\eps))$. We then put $E_n:=\tilde E_n\setminus\cup_{i<n}\tilde E_i$ and notice that the property \eqref{eq:densecone} ensures that the $E_n$'s form a partition of $\Omega$, thus giving the conclusion. \end{proof} Thanks to such density result we can show that the operations on the tangent cones preserve Borel regularity. The statement below is similar in spirit to (part of) the statement of Proposition \ref{prop:propl2}, but here no measure is fixed on $\Omega$ and the sections are defined for every $x\in\Omega$, not only for ${\mbox{\boldmath$m$}}$-a.e.\ $x$. \begin{proposition} Let $({\rm X},{\sf d})$ be a metric space, $({\rm Y},{\sf d}_{\rm Y})$ be separable and locally $\Cat\kappa$, $\Omega\subset{\rm X}$ an open subset and $u\colon \Omega\to{\rm Y}$ a Borel map. Let ${\sf S}^1,{\sf S}^2$ be Borel sections of $u^*{\rm T}_G{\rm Y}$ and $f\colon \Omega\to\mathbb{R}^+$ be a Borel map. Then the functions sending $x\in\Omega$ to $|{\sf S}^2_x|_{u(x)},\la {\sf S}^1_x,{\sf S}^2_x{\big\rangle}_{u(x)},{\sf d}_{u(x)}({\sf S}^1_x,{\sf S}^2_x)$ are Borel and the sections $x\mapsto f(x){\sf S}^1_x,{\sf S}^1_x\oplus {\sf S}^2_x$ are Borel as well. \end{proposition} \begin{proof} Let ${\sf S}^1,{\sf S}^2$ first be simple sections of the form ${\sf S}^1={\raise.3ex\hbox{$\chi$}}_{E_1}\alpha \, u^*({\sf G}_{\cdot}^{y_1})'_0$ and ${\sf S}^2={\raise.3ex\hbox{$\chi$}}_{E_2}\beta\, u^*({\sf G}_{\cdot}^{y_2})'_0$, with $E_i:=u^{-1}(A_i)$ and $A_i \in {\ensuremath{\mathcal B}}({\rm Y})$ such that $y_i \in B_{{\sf r}_y}(y)$ for every $y \in A_i$, $i=1,2$. Then they are the (graph of the) composition of $u$ with the simple sections of ${\rm T}_G{\rm Y}$ given by ${\raise.3ex\hbox{$\chi$}}_{A_1}\alpha \,({\sf G}_{\cdot}^{y_1})'_0$ and ${\raise.3ex\hbox{$\chi$}}_{A_2}\beta\, ({\sf G}_{\cdot}^{y_2})'_0$ respectively, hence in this case the conclusion comes from Proposition \ref{prop:bormap}. For general sections, the conclusion then follows from the `fiberwise' continuity of all the expressions considered (granted by Proposition \ref{prop:hilbertine}) and the density of simple sections established in Lemma \ref{le:denssimp} above. \end{proof} We now come to the relation between the space of (equivalence classes up to ${\mbox{\boldmath$m$}}$-a.e.\ equality of) Borel sections of $u^*{\rm T}_G{\rm Y}$ and the space $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ in the case where ${\rm Y}$ is separable and $\Cat0$. As expected, these spaces coincide when the right integrability of the first ones is in place: \begin{proposition}\label{prop:link} Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a metric measure space, $({\rm Y},{\sf d}_{\rm Y},\bar y)$ be a pointed separable \Cat0-space, $\Omega\subset{\rm X}$ an open subset and $u\colon \Omega\to{\rm Y}$ be a Borel map.
Then, ${\sf S}\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ if and only if it is the equivalence class up to ${\mbox{\boldmath$m$}}$-a.e.\ equality of a Borel section ${\sf T}$ of $u^*{\rm T}_G{\rm Y}$ with $\int_\Omega|{\sf T}_x|^2_{u(x)}\,\d{\mbox{\boldmath$m$}}(x)<\infty$. \end{proposition} \begin{proof} Assume at first that ${\sf S}\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. Then the fact that $\int_\Omega|{\sf S}_x|^2_{u(x)}\,\d{\mbox{\boldmath$m$}}(x)<\infty$ is a direct consequence of the definition and of Proposition \ref{prop:l2gen} above, thus we only need to prove that ${\sf S}$ is the equivalence class up to ${\mbox{\boldmath$m$}}$-a.e.\ equality of a Borel section of $u^*{\rm T}_G{\rm Y}$. To see this we need to prove that, letting $\pi_{\rm X},\pi_{{\rm T}_G{\rm Y}}$ be the projections of $u^*{\rm T}_G{\rm Y}\subset {\rm X}\times {\rm T}_G{\rm Y}$ to ${\rm X},{\rm T}_G{\rm Y}$ respectively, the maps $\pi_{\rm X}\circ{\sf S}$ and $\pi_{{\rm T}_G{\rm Y}}\circ{\sf S}$ are equivalence classes up to ${\mbox{\boldmath$m$}}$-a.e.\ equality of Borel maps. For the first one this is obvious, because it is the identity on ${\rm X}$. For the second one we recall the definition of ${\ensuremath{\mathcal B}}({\rm T}_G{\rm Y})$ to see that we need to prove that $\pi_{\rm Y}\circ\pi_{{\rm T}_G{\rm Y}}\circ{\sf S}$ is Borel (which it is, because it coincides with $u$) and that $x\mapsto \la {\sf S}_x,({\sf G}_{u(x)}^z)'_0{\big\rangle}$ is Borel for every $z\in{\rm Y}$ (which is easily seen to be the case from the requirement $(i)$ in Definition \ref{def:l2se1}). We pass to the converse implication and start by observing that Lemma \ref{le:l2s} and the definition of ${\ensuremath{\mathcal B}}({\rm T}_G{\rm Y})$ just recalled ensure that for any $v\in L^2(\Omega,{\rm Y}_{\bar y})$ the section given by $({\sf G}_{u(x)}^{v(x)})'_0$ is the equivalence class up to ${\mbox{\boldmath$m$}}$-a.e.\ equality of a Borel section. It follows that if ${\sf T}$ is a Borel section as in the statement, then it satisfies the requirement $(i)$ in Definition \ref{def:l2se1}. We now claim that if ${\sf T}$ is also simple, then it also satisfies the requirement $(ii)$. To see this write ${\sf T}= \sum_n{\raise.3ex\hbox{$\chi$}}_{E_n}\alpha_n\,u^*({\sf G}_{\cdot}^{y_n})'_0$ and put ${\sf T}^i:=\sum_{n\leq i}{\raise.3ex\hbox{$\chi$}}_{E_n}\alpha_n\,u^*({\sf G}_{\cdot}^{y_n})'_0$, where it is intended that for $x\notin \cup_{n\leq i}E_n$ we have ${\sf T}^i_x=0_{u(x)}\in{\rm T}_{u(x)}{\rm Y}$. Then putting $\beta_i:=\max_{n\leq i}\alpha_n$, $y_{n,i}:=({\sf G}_{u(x)}^{y_n})_{\alpha_n/\beta_i}$ and defining $v^i\in L^2(\Omega,{\rm Y}_{\bar y})$ as $v^i\restr{E_n}:=y_{n,i}$ for $n\leq i$ and $v^i\restr{\Omega\setminus\cup_{n\leq i}E_n}\equiv u$ we see that ${\sf T}^i=\iota(\beta_i({\sf G}_u^{v^i})'_0)$, so that (the equivalence class up to ${\mbox{\boldmath$m$}}$-a.e.\ equality of) ${\sf T}^i$ belongs to $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. It is then clear that ${\sf d}_{L^2}({\sf T}^i,{\sf T})\to0$, proving that the equivalence class of ${\sf T}$ belongs to $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$. Then the conclusion for a generic section ${\sf T}$ as in the statement can be easily obtained by an approximation argument starting from the density result in Lemma \ref{le:denssimp}.
\end{proof} \subsection{The Korevaar-Schoen energy} We recall here the key definitions and results of \cite{GT20}, where the original analysis done in \cite{KS93} has been generalized to the setting of $\mathrm{RCD}(K,N)$ spaces (\cite{AmbrosioGigliSavare11-2}, \cite{Gigli12}). For the definitions of all the objects appearing below we refer to \cite{GT20} (but see also \cite{GPS18} for the definition of the differential $\d u$ appearing in the statement below). \begin{theorem}[The Korevaar-Schoen energy]\label{thm:defks} Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a $\mathrm{RCD}(K,N)$ space, $K\in\mathbb{R}$, $N\in[1,\infty)$, $({\rm Y},{\sf d}_{\rm Y},\bar y)$ a pointed \Cat0-space, $\Omega\subset{\rm X}$ open and $u\in L^2(\Omega,{\rm Y}_{\bar y})$. Then the following are equivalent: \begin{itemize} \item[i)] Letting ${\sf ks}_{2,r}[u,\Omega]\colon \Omega\to\mathbb{R}^+$ be defined by $$ {\sf ks}_{2,r}[u,\Omega] (x) := \begin{cases} \ \Big\vert \fint_{B_r(x)} \frac{{\sf d}_{{\rm Y}}^2(u(x),u(\tilde{x}))}{r^2} \, \d {\mbox{\boldmath$m$}}(\tilde{x}) \Big\vert^{1/2} &\text{ if } B_r(x) \subset \Omega, \\ \ 0 &\text{ otherwise,} \end{cases}$$ and the energy ${\sf E}^{\sf KS}(u)$ be given by \begin{equation} \label{eq:defE} {\sf E}^{\sf KS}(u):= \limsup_{r\downarrow 0} \frac12\int_\Omega {\sf ks}^2_{2,r}[u,\Omega] \, \d{\mbox{\boldmath$m$}}, \end{equation} we have ${\sf E}^{\sf KS}(u)<\infty$. \item[ii)] There is $G\in L^2(\Omega)$ such that for every $\varphi\colon {\rm Y}\to\mathbb{R}$ 1-Lipschitz with $\varphi(\bar y)=0$ we have $\varphi\circ u\in W^{1,2}(\Omega)$ with $|\d (\varphi\circ u)|\leq G$ ${\mbox{\boldmath$m$}}$-a.e.. \end{itemize} If any of these hold, the `energies at scale $r$' ${\sf ks}_{2,r}[u,\Omega]$ converge to $(d+2)^{-\frac12}|\d u|_{\sf HS}$ in $L^2(\Omega)$ as $r\downarrow0$. In particular, the $\varlimsup$ in \eqref{eq:defE} is actually a limit and the energy admits the representation \[ {\sf E}^{\sf KS}(u)=\frac1{2(d+2)}\int_\Omega|\d u|_{\sf HS}^2\,\d{\mbox{\boldmath$m$}}. \] Finally, the functional ${\sf E}^{\sf KS}\colon L^2(\Omega,{\rm Y}_{\bar y})\to[0,+\infty]$ is convex and lower semicontinuous. \end{theorem} \begin{remark}{\rm It should be noticed that the smallest function $G$ for which $(ii)$ holds is not the Hilbert-Schmidt norm $|\d u|_{\sf HS}$ of the differential $\d u$ of $u$, but rather the (pointwise) operator norm of $\d u$. The two quantities are nevertheless comparable, i.e.\ one controls the other up to multiplication with a dimensional constant. }\fr\end{remark}
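To see where the dimensional normalization comes from, it may be instructive to look at the model case (a sketch under smoothness assumptions, not needed in the sequel) of ${\rm X}=\mathbb{R}^d$ with the Lebesgue measure, ${\rm Y}=\mathbb{R}$ and $u$ smooth: a first-order Taylor expansion gives \[ {\sf ks}^2_{2,r}[u,\Omega](x)=\fint_{B_r(x)}\frac{|\nabla u(x)\cdot(\tilde x-x)|^2}{r^2}\,\d\tilde x+o(1)=\frac{|\nabla u(x)|^2}{d+2}+o(1)\qquad\text{as }r\downarrow0, \] where the last identity follows from $\fint_{B_r(0)}\la e,w{\big\rangle}^2\,\d w=\frac{r^2}{d+2}$ for any unit vector $e$. This is consistent with the convergence of ${\sf ks}_{2,r}[u,\Omega]$ to $(d+2)^{-\frac12}|\d u|_{\sf HS}$ stated above.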
We shall denote by ${\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})\subset L^2(\Omega,{\rm Y}_{\bar y})$ the collection of maps with finite energy and recall from \cite{GT20} that for $ u,v\in {\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})$ we always have ${\sf d}_{\rm Y}(u,v)\in W^{1,2}(\Omega)$. Therefore it makes sense to ask whether $u,v$ attain the same boundary value by checking whether or not we have ${\sf d}_{\rm Y}(u,v)\in W^{1,2}_0(\Omega)$. Then given $\bar u\in {\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})$ the `energy ${\sf E}^{\sf KS}_{\bar u}\colon L^2(\Omega,{\rm Y}_{\bar y})\to [0,\infty]$ with $\bar u$ as prescribed boundary value' can be defined as \[ {\sf E}^{\sf KS}_{\bar u}(u):=\left\{\begin{array}{ll} {\sf E}^{\sf KS}(u)&\qquad\text{if $u\in {\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})$ and ${\sf d}_{\rm Y}(u,\bar u)\in W^{1,2}_0(\Omega)$},\\ +\infty&\qquad\text{otherwise}. \end{array} \right. \] We shall denote the domain of ${\sf E}^{\sf KS}_{\bar u}$ by ${\sf KS}^{1,2}_{\bar u}(\Omega,{\rm Y}_{\bar{y}})\subset L^2(\Omega,{\rm Y}_{\bar y})$ and recall that \begin{equation} \label{eq:proprE} {\sf E}^{{\sf KS}}_{\bar u}\colon L^2(\Omega,{\rm Y}_{\bar y})\to[0,+\infty]\qquad\text{is convex and lower semicontinuous;} \end{equation} moreover, it admits a unique minimizer, called the harmonic map with $\bar u$ as boundary value. \begin{remark}{\rm Even if the definition of ${\sf E}^{\sf KS}_{\bar u}$ can be given in high generality, it should be noted that it may happen that ${\sf E}^{\sf KS}_{\bar u}={\sf E}^{\sf KS}$. This happens when $W^{1,2}_0(\Omega)=W^{1,2}(\Omega)$, which in turn occurs if ${\rm X}\setminus\Omega$ has null capacity. Thus in practical situations, if one wants to enforce some boundary condition, it should be checked that ${\rm X}\setminus\Omega$ has positive capacity. }\fr\end{remark} For later use we recall that the convexity of both ${\sf E}^{{\sf KS}}$ and ${\sf E}^{{\sf KS}}_{\bar u}$ can be improved to the following inequality: \begin{equation} \label{eq:KSint} {\sf E}^{{\sf KS}}(({\sf G}_u^v)_t)+t(1-t){\sf E}^{{\sf KS}}(d)\leq (1-t){\sf E}^{{\sf KS}}(u)+t{\sf E}^{{\sf KS}}(v)\qquad\forall t\in[0,1], \end{equation} where $d(x):={\sf d}_{\rm Y}(u(x),v(x))$. Such an inequality has been proved for the case $t=\frac12$ in \cite{GT20} (imitating the arguments in \cite{KS93}); the general case follows along the same arguments. It is worth underlining that in the above the maps $u,v,({\sf G}_u^v)_t$ are ${\rm Y}$-valued, while $d$ is real valued. In this sense the energy ${\sf E}^{{\sf KS}}(d)$ of $d$ has a different meaning w.r.t.\ the energy of the other maps. Still, we recall (see \cite{GT20} and \cite{KS93}) that for a constant $c(d)$ depending only on the essential dimension $d\le N$ of ${\rm X}$ we have ${\sf E}^{{\sf KS}}(f)=c(d){\sf Ch}(f)$ for any $f\in L^2(\Omega)$, where ${\sf Ch}$ is the standard Cheeger/Dirichlet energy on ${\rm X}$. \subsection{The Laplacian of a \Cat0-valued map} Let us start by giving the general definition of the Laplacian of a \Cat0-valued Sobolev map: \begin{definition}[Tension field/Laplacian]\label{def:laplacian} Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a $\mathrm{RCD}(K,N)$ space, $\Omega\subset{\rm X}$ an open subset, $({\rm Y},{\sf d}_{\rm Y},\bar y)$ a pointed \Cat0-space and $\bar u\in {\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})$. Then the domain of the Laplacian $D(\Delta_{\bar u})\subset {\sf KS}_{\bar u }^{1,2}(\Omega,{\rm Y}_{\bar{y}})$ is defined as $D(\Delta_{\bar u}):=D(|\partial^-{\sf E}^{\sf KS}_{\bar u}|)$ and for $u\in D(\Delta_{\bar u})$ we put \[ \Delta_{\bar u} u:=\iota({\sf v})\in L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega) , \quad\text{ where ${\sf v}$ is the element of minimal norm in $-\partial^-{\sf E}^{\sf KS}_{\bar u}(u)$.} \] Similarly, for maps $u$ from ${\rm X}$ to ${\rm Y}$ we say that $u$ is in the domain of the Laplacian $D(\Delta)$ if $|\partial^-{\sf E}^{\sf KS}|(u)<\infty$ and in this case $\Delta u:=\iota({\sf v})$, where ${\sf v}$ is the element of minimal norm in $-\partial^-{\sf E}^{\sf KS}(u)$. \end{definition} \begin{proposition}[Laplacian and variation of the energy]\label{prop:varen} Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a $\mathrm{RCD}(K,N)$ space, $\Omega\subset{\rm X}$ an open subset, $({\rm Y},{\sf d}_{\rm Y},\bar y)$ a pointed \Cat0-space and $\bar u\in {\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})$. Also, let $u\in D(\Delta_{\bar u})$.
Then, for every $v\in L^2(\Omega,{\rm Y}_{\bar y})$, we have \begin{equation} \label{eq:varen} -\int_{{\rm X}}\la \Delta_{\bar u} u(x), \big({\sf G}_{u(x)}^{v(x)}\big)'_0{\big\rangle}_{u(x)} \ \d {\mbox{\boldmath$m$}}(x)\leq \lim_{t\downarrow0}\frac{{\sf E}_{\bar u}^{{\sf KS}}(({\sf G}_u^v)_t)-{\sf E}_{\bar u}^{{\sf KS}}(u)}t. \end{equation} Moreover, $u$ is harmonic with $\bar u $ as boundary value if and only if $u\in D(\Delta_{\bar u})$ with $\Delta_{\bar u} u=0$. \end{proposition} \begin{proof} Inequality \eqref{eq:varen} follows by applying \eqref{eq:subder} and the definition of $\Delta_{\bar u} u$, and recalling Proposition \ref{prop:l2gen}. The second claim is a restatement of \eqref{eq:sub0} in this setting. \end{proof} \begin{remark}{\rm This last proposition shows that our definition is compatible with the classical one valid in the smooth category. Indeed, if ${\rm X},{\rm Y}$ are smooth Riemannian manifolds, $\bar u,u\colon \bar \Omega\subset{\rm X}\to{\rm Y}$ are smooth maps with the same boundary values, ${\sf v}$ is a smooth section of $u^*{\rm T}{\rm Y}$ (in the smooth setting ${\rm T}_G{\rm Y}$ is canonically equivalent to the standard tangent bundle ${\rm T}{\rm Y}$) which is 0 on $\partial\Omega$, then we can produce a smooth perturbation of $u$ by putting $u_t(x):=\exp_{u(x)}(t{\sf v}_x)$. A direct computation then shows that \[ \frac{\d}{\d t}\restr{t=0}{\sf E}^{\sf KS}_{\bar u}(u_t)=-\int_\Omega\la \tau(u)_x,{\sf v}_x{\big\rangle}_{u(x)}\,\d{\mbox{\boldmath$m$}}(x), \] where $\tau(u)$ is the \emph{tension field of $u$}, see for instance \cite[Section 9.2]{Jost17}. This formula is the smooth version of \eqref{eq:varen}. Notice indeed that $u_t=({\sf G}_u^{u_1})_t$ for $t\in[0,1]$ (and similarly $u_t=({\sf G}_u^{u_{-1}})_{-t}$ for $t\in[-1,0]$) and that if everything is smooth, then $t\mapsto {\sf E}^{\sf KS}_{\bar u}(u_t)$ is $C^1$, hence differentiable at $0$, so that the one-sided bound in \eqref{eq:varen} becomes an equality in the smooth case. It is worth underlining that in our framework the lack of equality in \eqref{eq:varen} is not only related to the lack of smoothness of $t\mapsto {\sf E}^{\sf KS}_{\bar u}(u_t)$, which a priori could produce different left and right derivatives at $0$, but also to the fact that tangent \emph{cones} are not really tangent \emph{spaces}: the opposite of a vector field does not necessarily exist and thus we are forced to take one-sided perturbations only. }\fr\end{remark} A direct consequence of Proposition \ref{prop:varen} above is the following: \begin{corollary} With the same assumptions and notation as in Proposition \ref{prop:varen}, for every $v\in L^2(\Omega,{\rm Y}_{\bar y})$ we have \[ {\sf E}_{\bar u}^{{\sf KS}}(u)-\int_{{\rm X}}\la \Delta_{\bar u} u(x), \big({\sf G}_{u(x)}^{v(x)}\big)'_0{\big\rangle} _{u(x)} \ \d {\mbox{\boldmath$m$}}(x)+{\sf E}^{{\sf KS}}(d)\leq {\sf E}_{\bar u}^{{\sf KS}}(v), \] where $d:={\sf d}_{\rm Y}(u,v)$. \end{corollary} \begin{proof} Couple \eqref{eq:varen} with \eqref{eq:KSint}. \end{proof} In the next discussion, we are interested in properties of the composition $f \circ u$, whenever $u$ is a harmonic map and $f$ is a $\lambda$-convex functional.
Observe that, in a smooth framework, the chain rule $\Delta (f\circ u) = \mathrm{Hess}f(\nabla u,\nabla u) + \d f(\Delta u)$ immediately implies that \begin{equation} \label{eq:casoliscio} \Delta (f\circ u)\geq \lambda|\d u|^2_{\sf HS}\qquad\text{if $f$ is $\lambda$-convex and $u$ is harmonic.} \end{equation} Indeed, harmonicity makes the term $\d f(\Delta u)$ vanish, while the $\lambda$-convexity of $f$ gives $\mathrm{Hess}f(\nabla u,\nabla u)\geq \lambda|\d u|^2_{\sf HS}$. A nonsmooth version of \eqref{eq:casoliscio} has already been addressed in \cite{LS19} (see Theorem 1.2 there) for maps with \emph{euclidean} source domain and \Cat0-target. Nevertheless, as we are going to show in Theorem \ref{thm:nablafu}, the discussion generalizes to our framework: the main stumbling block to overcome is the absence of \emph{Lipschitz} vector fields on an $\mathrm{RCD}$-space. In what follows, we shall need the following property of Sobolev functions and, specifically, of their directional derivatives (for the definition of test vector field see \cite{Gigli14} and for the concept of Regular Lagrangian Flow see \cite{Ambrosio-Trevisan14}): \begin{proposition}\label{prop:dercurv} Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a $\mathrm{RCD}(K,N)$ space, $({\rm Y},{\sf d}_{\rm Y},\bar y)$ a pointed complete metric space, $\Omega\subset {\rm X}$ open, $v\in L^2_{\mbox{\boldmath$m$}}(T{\rm X})$ a test vector field and $({\sf FI}_{s}^v)$ the associated Regular Lagrangian Flow. Also, let $u\in {\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})$. Then, for every $K\subset\Omega$ compact, we have that \begin{equation} \label{eq:dery} \lim_{s\to 0}\frac{{\sf d}_{\rm Y}(u\circ{\sf FI}_{s}^v,u)}{s}=|\d u(v)|\qquad\text{in $L^2(K)$}. \end{equation} (notice that for $|s|$ small the map $u\circ{\sf FI}_{s}^v$ is well defined from $K$ to ${\rm Y}$). Similarly, for a real valued Sobolev function $g\in W^{1,2}(\Omega)$ we have \begin{equation} \label{eq:derr} \lim_{s\to 0}\frac{g\circ{\sf FI}_{s}^v-g}{s}=\d g(v)\qquad\text{in $L^2(K)$}. \end{equation} \end{proposition} \begin{proof} Property \eqref{eq:derr} is (an equivalent version of) the definition of Regular Lagrangian Flow, see for instance \cite[Proposition 2.7]{GR17}. For \eqref{eq:dery} recall first \cite[Remark 4.15]{GT20} to get that functions in ${\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})$ also belong to the `direction' Korevaar-Schoen space as defined in \cite{GT18}, then recall \cite[Theorem 4.5]{GT18}. \end{proof} The next lemma deals with variations of a map $u$, suitably obtained through gradient flow trajectories in the target space, and with the rate of change of the Korevaar-Schoen energy (see \eqref{eq:stimapunt}-\eqref{eq:dervaru} below). In the following statement, notice that $f\circ u$ belongs to $W^{1,2}(\Omega)$ - and thus $\d(f\circ u)$ is well defined - because $f$ is Lipschitz, $\Omega$ has finite measure and $(ii)$ in Theorem \ref{thm:defks} holds. Also, for the very same reason, we shall drop the subscript $\bar{y}$ from ${\rm Y}$ when $\Omega$ is bounded, as the $L^2$-integrability no longer depends on the particular chosen point $\bar{y} \in {\rm Y}$. Compare the proof with \cite[Lemma 3.1]{LS19}. \begin{lemma}\label{lem:varEfu} Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a $\mathrm{RCD}(K,N)$ space, ${\rm Y}$ a \Cat0-space and $\Omega\subset {\rm X}$ open and bounded. Also, let $f \in \mathrm{Lip}({\rm Y})$ be $\lambda$-convex, $\lambda\in\mathbb{R}$, and $u \in {\sf KS}^{1,2}(\Omega,{\rm Y})$. For $g \in \mathrm{Lip}_{bs}({\rm X})^+$, define the (equivalence class of the) variation map \[ u_t(x)={\sf GF}^f_{tg(x)}(u(x))\qquad\forall t>0,\ x\in\Omega.
\] Then, $u_t \in {\sf KS}^{1,2}(\Omega,{\rm Y})$ for every $t>0$ and there is a constant $C>0$ depending on $f,g$ such that \begin{equation} \label{eq:stimapunt} |\d u_t|_{\sf HS}^2\leq e^{-2\lambda tg }\big(|\d u|_{\sf HS}^2-2t \,\la \d g,\d(f\circ u){\big\rangle}+Ct^2\big)\qquad{\mbox{\boldmath$m$}}\text{-}a.e.\ in \ \Omega, \end{equation} holds for every $t\in[0,1]$. In particular \begin{equation} \label{eq:dervaru} \limsup_{t\downarrow 0} \frac{{\sf E}^{{\sf KS}}(u_t)-{\sf E}^{{\sf KS}}(u)}{t} \le -\int_{\Omega} \frac{\lambda}{d+2} g|\d u|^2_{\sf HS} + \la \d(f\circ u),\d g{\big\rangle} \, \d {\mbox{\boldmath$m$}}. \end{equation} \end{lemma} \begin{proof} The map $x\mapsto(tg(x),u(x))$ is Borel and essentially separably valued and the map $(t,y)\mapsto{\sf GF}^f_t(y)$ is continuous, hence $x\mapsto u_t(x)$ is Borel and essentially separably valued. Also, the identity \eqref{eq:metconv} and the trivial estimate $|\partial^-f|\leq\mathrm{Lip}(f)$ show that $t\mapsto {\sf GF}^f_t(y)$ is $\mathrm{Lip}(f)$-Lipschitz for every $y\in{\rm Y}$, thus ${\sf d}_{\rm Y}(u_t(x),\bar y)\leq t\sup(g)\mathrm{Lip}(f)+{\sf d}_{\rm Y}(u(x),\bar y)$, for every $\bar y \in {\rm Y}$, from which it easily follows that $u_t\in L^2(\Omega,{\rm Y})$. Taking also into account the contraction property \eqref{eq:contr} we obtain that \[ \begin{split} {\sf d}_{\rm Y}(u_t(x),u_t(y))&\leq e^{\lambda^- t(g(x)+g(y))}{\sf d}_{\rm Y}\big(u(x),{\sf GF}^f_{t|g(y)-g(x)|}(u(y))\big)\\ &\leq e^{2\lambda^- t\sup g}\big({\sf d}_{\rm Y}(u(x),u(y))+t\mathrm{Lip}(g)\mathrm{Lip}(f){\sf d}(x,y)\big) \end{split} \] and thus \[ {\sf ks}_{2,r}^2[u_t,\Omega] (x)\leq 2e^{4\lambda^- t\sup g}\big({\sf ks}_{2,r}^2[u,\Omega] (x)+t^2\mathrm{Lip}^2(g)\mathrm{Lip}^2(f)\big). \] Integrating and using the fact that ${\mbox{\boldmath$m$}}(\Omega)<\infty$ we conclude that $u_t\in {\sf KS}^{1,2}(\Omega,{\rm Y})$. In order to obtain \eqref{eq:stimapunt} we need to be more careful in our estimates and to this aim we shall use Lemma \ref{lem:apriori} and Proposition \ref{prop:dercurv} above. Let $\gamma\colon [0,S]\to\Omega$ be a Lipschitz curve: for any $s\in[0,S]$ the bound \eqref{eq:apriorigf} gives (here we are fixing a Borel representative of $u$ and thus of $u_t$, but notice that the estimate \eqref{eq:interm3} does not depend on such a choice): \[ \begin{split} {\sf d}^2_{\rm Y}&(u_t(\gamma_{0}),u_t(\gamma_{s}))\\ \leq &e^{-2\lambda t(g(\gamma_{0})\pm |g(\gamma_{0})-g(\gamma_{s})|)}\Big({\sf d}_{\rm Y}^2(u(\gamma_{0}),u(\gamma_{s}))+2t(g(\gamma_{s})-g(\gamma_{0}))(f(u(\gamma_{0}))-f(u(\gamma_{s})))\\ &+\int_0^{|t(g(\gamma_{0})-g(\gamma_{s}))|}2\mathrm{Lip}^2(f)\theta_\lambda(r)+\lambda^-\big({\sf d}_{\rm Y}^2({\sf GF}^f_r(u(\gamma_{0})),u(\gamma_{s}))+{\sf d}_{\rm Y}^2({\sf GF}^f_r(u(\gamma_{s})),u(\gamma_{0}))\big)\,\d r\Big), \end{split} \] where the sign in $\pm |g(\gamma_{0})-g(\gamma_{s})|$ depends on the sign of $\lambda$.
Now use again the fact that $r\mapsto {\sf GF}^f_r(u(\gamma_{0}))$ is $\mathrm{Lip}(f)$-Lipschitz to get that \[ \begin{split} {\sf d}_{\rm Y}^2({\sf GF}^f_r(u(\gamma_{0})),u(\gamma_{s}))&\leq 2r^2\mathrm{Lip}^2(f)+2{\sf d}_{\rm Y}^2(u(\gamma_{0}),u(\gamma_{s})), \end{split} \] notice that the same bound holds for ${\sf d}_{\rm Y}^2({\sf GF}^f_r(u(\gamma_{s})),u(\gamma_{0}))$, that \[ |t(g(\gamma_{0})-g(\gamma_{s}))|\leq ts\mathrm{Lip}(g)\mathrm{Lip}(\gamma) \] and that $\theta_\lambda(t)\leq te^{2\lambda^-t}$ to conclude that, for some constant $C$ depending only on $f,g,\mathrm{Lip}(\gamma),T$ and every $t\in[0,T]$, we have \begin{equation} \label{eq:interm} \begin{split} {\sf d}^2_{\rm Y}&(u_t(\gamma_{0}),u_t(\gamma_{s})) \leq e^{-2\lambda tg(\gamma_{0})+Cs}\Big({\sf d}_{\rm Y}^2(u(\gamma_{0}),u(\gamma_{s}))\\ &+2t(g(\gamma_{s})-g(\gamma_{0}))(f(u(\gamma_{0}))-f(u(\gamma_{s})))+Ct^2s^2+Cts{\sf d}_{\rm Y}^2(u(\gamma_{0}),u(\gamma_{s}))\Big). \end{split} \end{equation} Now let $v$ be a test vector field on ${\rm X}$ and ${\sf FI}_s^v$ its Regular Lagrangian Flow and recall that since $g,f\circ u\in W^{1,2}(\Omega)$, by \eqref{eq:derr} we know that for any $K\subset\Omega$ compact we have \begin{equation} \label{eq:interm2} \frac{g\circ {\sf FI}_s^v-g}s\to \d g(v)\qquad\text{ and }\qquad\frac{f\circ u\circ {\sf FI}_s^v-f\circ u}s\to \d (f\circ u)(v) \end{equation} in $L^2(K)$ as $s\downarrow0$. Thus writing \eqref{eq:interm} for $\gamma_s:={\sf FI}_s^v(x)$ for ${\mbox{\boldmath$m$}}$-a.e.\ $x\in\Omega$, dividing by $s^2$, letting $s\downarrow0$ and recalling \eqref{eq:dery} and \eqref{eq:interm2} we conclude that \begin{equation} \label{eq:interm3} |\d u_t(v)|^2\leq e^{-2\lambda tg }\Big(|\d u(v)|^2-2t \,\d g(v)\,\d(f\circ u)(v)+Ct^2\Big)\qquad{\mbox{\boldmath$m$}}\text{-}a.e.\ \text{in} \ \Omega, \end{equation} having also used the arbitrariness of $K\subset\Omega$ compact and the fact that the Lipschitz constant of $s\mapsto {\sf FI}_s^v(x)$ is bounded by $\|v\|_{L^\infty}$. We have established \eqref{eq:interm3} for $v$ regular, but both sides of the inequality are continuous w.r.t.\ $L^0$-convergence of uniformly bounded vectors $v$ with values in $L^0_{\mbox{\boldmath$m$}}(T{\rm X})$, thus by density we deduce that \eqref{eq:interm3} is valid for any $v\in L^\infty_{\mbox{\boldmath$m$}}(T{\rm X})$. Hence writing \eqref{eq:interm3} for $v$ varying in a local Hilbert base of $L^2_{\mbox{\boldmath$m$}}(T{\rm X})$ and adding up we deduce \eqref{eq:stimapunt}. Then \eqref{eq:dervaru} also follows. \end{proof} In order to state the analogue of \eqref{eq:casoliscio} in the non-smooth setting we need to recall the notion of measure-valued Laplacian as introduced in \cite{Gigli12} (the presentation given here is simplified by the fact that $\mathrm{RCD}$ spaces are infinitesimally Hilbertian). Thus let ${\rm X}$ be a $\mathrm{RCD}(K,N)$ space, $\Omega\subset{\rm X}$ open and bounded and $f\in W^{1,2}(\Omega)$. We say that $f$ has a measure-valued Laplacian in $\Omega$, and write $f \in D({\mbox{\boldmath$\Delta$}},\Omega)$, provided there is a (signed) Radon measure $\mu$ on $\Omega$ such that \[ \int g\, \d \mu = -\int \la \d f,\d g{\big\rangle} \, \d {\mbox{\boldmath$m$}}\qquad\forall g \in \mathrm{Lip}_c(\Omega). \] It is clear that this measure is unique and, denoting it by ${\mbox{\boldmath$\Delta$}} f\restr{\Omega}$, that the assignment $f \mapsto {\mbox{\boldmath$\Delta$}} f\restr{\Omega}$ is linear.
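As an elementary illustration of why the Laplacian must in general be measure-valued (a classical one-dimensional example, not taken from \cite{Gigli12}): for ${\rm X}=\mathbb{R}$ with the Lebesgue measure, $\Omega=(-1,1)$ and $f(x):=|x|$, integration by parts gives \[ -\int \la \d f,\d g{\big\rangle} \, \d {\mbox{\boldmath$m$}} = -\int_{-1}^{1} \mathrm{sgn}(x)\,g'(x)\,\d x = 2g(0)\qquad\forall g \in \mathrm{Lip}_c(\Omega), \] so that $f\in D({\mbox{\boldmath$\Delta$}},\Omega)$ with ${\mbox{\boldmath$\Delta$}} f\restr{\Omega}=2\delta_0$, a measure which is not absolutely continuous w.r.t.\ ${\mbox{\boldmath$m$}}$.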
We shall need the following criterion for checking whether $f \in D({\mbox{\boldmath$\Delta$}},\Omega)$: for $f \in W^{1,2} (\Omega)$ and $h \in L^1({\mbox{\boldmath$m$}}\restr{\Omega})$ we have \begin{equation} - \int_{\rm X} \la \d f, \d g {\big\rangle}\, \d {\mbox{\boldmath$m$}} \ge \int_{\rm X} gh \, \d {\mbox{\boldmath$m$}}\quad\forall g \in \mathrm{Lip}_c(\Omega)^+ \quad \Rightarrow \quad f \in D({\mbox{\boldmath$\Delta$}},\Omega)\text{ and } {\mbox{\boldmath$\Delta$}} f\restr{\Omega} \ge h{\mbox{\boldmath$m$}}.\label{eq:DDeltagenu} \end{equation} We are now ready to state and prove the next theorem. \begin{theorem}\label{thm:nablafu} Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a $\mathrm{RCD}(K,N)$ space, ${\rm Y}$ be \Cat0 and $\Omega\subset {\rm X}$ open and bounded. Also, let $f \in \mathrm{Lip}({\rm Y})$ be $\lambda$-convex, $\lambda\in\mathbb{R}$, and $u \in {\sf KS}^{1,2}(\Omega,{\rm Y})$ be harmonic. Then, $ f \circ u \in D({\mbox{\boldmath$\Delta$}},\Omega)$ and ${\mbox{\boldmath$\Delta$}}(f\circ u)\restr{\Omega}$ is a (signed) locally finite Radon measure satisfying \begin{equation} {\mbox{\boldmath$\Delta$}}(f\circ u)\restr{\Omega} \ge \frac{\lambda}{d+2} |\d u|_{\sf HS}^2 {\mbox{\boldmath$m$}}. \label{eq:DDeltafu} \end{equation} \end{theorem} \begin{proof} As noticed before Lemma \ref{lem:varEfu}, under the stated assumptions we have $f\circ u\in W^{1,2}(\Omega)$. Now let $g \in \mathrm{Lip}_c(\Omega)^+$ be arbitrary, apply Lemma \ref{lem:varEfu} with these functions $f,g,u$ and define $u_t \in {\sf KS}^{1,2}_{\bar{u}}(\Omega,{\rm Y})$ accordingly. Notice that since ${\rm supp}(g)\subset\Omega$, we have that $u_t$ and $u$ agree on a neighbourhood of $\partial\Omega$ and thus have the same boundary value. Therefore from the fact that $u$ is harmonic and \eqref{eq:dervaru} we deduce \[ -\int_\Omega \la \d(f\circ u),\d g {\big\rangle} \, \d {\mbox{\boldmath$m$}} \ge \frac{\lambda}{d+2}\int_\Omega g|\d u|_{\sf HS}^2\, \d {\mbox{\boldmath$m$}} \qquad \forall g \in \mathrm{Lip}_c(\Omega)^+ \] and the conclusion comes from \eqref{eq:DDeltagenu}. \end{proof} \begin{corollary} Let $\Omega\subset {\rm X}$ be open, ${\rm Y}$ be \Cat0, $\bar{u} \in {\sf KS}^{1,2}(\Omega,{\rm Y})$, $u$ the harmonic map with $\bar{u}$ as boundary value and $f \in \mathrm{Lip}({\rm Y})$ be $2$-convex. If $f \circ u$ is constant, then $u$ itself is a constant map. \end{corollary} \begin{proof} Apply Theorem \ref{thm:nablafu} with $\lambda=2$: since $f\circ u$ is constant, ${\mbox{\boldmath$\Delta$}}(f\circ u)\restr{\Omega}=0$, so \eqref{eq:DDeltafu} forces $|\d u|_{\sf HS}$ to vanish ${\mbox{\boldmath$m$}}$-a.e.\ and $u$ is constant. \end{proof} Let us now discuss a simple and explicit example of the Laplacian of a map. \begin{example}\label{ex:s1}{\rm Let ${\rm Y}:=\mathbb{R}^2$, ${\rm X}:=\mathbb{R}/\mathbb{Z}$ equipped with the standard distances and measure, and $\Omega={\rm X}$. Then a direct application of the definitions in Theorem \ref{thm:defks} shows that $u=(u_1,u_2)\colon {\rm X}\to{\rm Y}$ is in ${\sf KS}^{1,2}({\rm X},{\rm Y})$ if and only if $u_1\circ{\rm p},u_2\circ{\rm p}\colon \mathbb{R}\to\mathbb{R}$ are in $W^{1,2}_{loc}(\mathbb{R})$, where ${\rm p}\colon \mathbb{R}\to\mathbb{R}/\mathbb{Z}={\rm X}$ is the natural projection, with \[ {\sf E}^{\sf KS}(u)=\tfrac c2\Big(\int_{\rm X} |u_1'|^2(\theta)+ |u_2'|^2(\theta)\,\d\theta \Big), \] for some universal constant $c>0$. Then it is clear that $u\in D(\Delta)$ if and only if $ (u_1\circ{\rm p})'',( u_2\circ{\rm p})''\in L^2_{loc}(\mathbb{R})$ and that in this case \[ \Delta u=c( u''_1,u''_2). \] Now let $u(\theta):=(\cos(2\pi\theta),\sin(2\pi\theta))$ be the canonical embedding of ${\rm X}$ in ${\rm Y}$.
Then $\Delta u=-4\pi^2c\,u$ and in particular for any $\theta\in{\rm X}$ we have that $\Delta u(\theta)\in {\rm T}_{u(\theta)}\mathbb{R}^2\sim\mathbb{R}^2$ is orthogonal to the tangent space of ${\rm X}$ seen as a subset of $\mathbb{R}^2={\rm Y}$. This is interesting because one can define the differential $\d u$ of $u$, even in very abstract situations \cite{GPS18}, by means related to Sobolev calculus on the metric measure space $({\rm Y},{\sf d}_{\rm Y},\mu:=u_\sharp(|\d u|_{\sf HS}^2{\mbox{\boldmath$m$}}))$, and tangent vector fields in this metric measure space only see directions which are tangent to the graph of $u$ (this is rather obvious in this example, but see for instance \cite{MLP20} for a discussion of this phenomenon in more general cases). This means that, curiously, $\Delta u$ cannot be computed starting from $\d u$ and using Sobolev calculus in the spirit developed in \cite{Gigli14}, \cite{Gigli17}, simply because $\Delta u$ does not belong to the tangent module $L^2_\mu(T{\rm Y})$.}\fr\end{example} We conclude by pointing out that while in Definition \ref{def:laplacian} of the Laplacian of a map we called into play the space $L^2(u^*{\rm T}_G{\rm Y},{\mbox{\boldmath$m$}}\restr\Omega)$ as introduced in Definition \ref{def:l2se2}, in some circumstances it might be useful to deal with a notion of Laplacian related to the Borel $\sigma$-algebra ${\ensuremath{\mathcal B}}(u^*{\rm T}_G{\rm Y})$ - and thus to the characterization given in Proposition \ref{prop:link} -, which however is only available for separable spaces ${\rm Y}$. In this direction it is worth underlining that one can always reduce to such a case thanks to the following two simple results: the first says that given $u\in L^2(\Omega,{\rm Y}_{\bar y})$ we can always find a separable \Cat0 subspace $\tilde{\rm Y}$ of ${\rm Y}$ containing the gradient flow trajectory of ${\sf E}^{\sf KS}_{\bar u}$ starting from $u$; the second ensures that this restriction does not affect the notion of minus-subdifferential. \begin{proposition} Let $({\rm X},{\sf d},{\mbox{\boldmath$m$}})$ be a $\mathrm{RCD}(K,N)$ space, $({\rm Y},{\sf d}_{\rm Y},\bar y)$ a pointed \Cat0-space, $\Omega\subset{\rm X}$ open, $\bar u\in {\sf KS}^{1,2}(\Omega,{\rm Y}_{\bar{y}})$ and $u\in L^2(\Omega,{\rm Y}_{\bar y})$. Also, let $(u_t)$ be the gradient flow trajectory for ${\sf E}^{\sf KS}_{\bar u}$ starting from $u$. Then, there exists a separable \Cat0 subspace $\tilde{\rm Y}\subset{\rm Y}$ such that ${\mbox{\boldmath$m$}}(u_t^{-1}({\rm Y}\setminus\tilde{\rm Y}))=0$ for every $t\geq 0$. Similarly for the functional ${\sf E}^{{\sf KS}}$. \end{proposition} \begin{proof} From the fact that geodesics on ${\rm Y}$ are unique and vary continuously with their endpoints it is easy to see that the closed convex hull of a separable set (i.e.\ the smallest closed and convex set containing the given set) is also separable. Use this and the fact that maps in $L^2(\Omega,{\rm Y}_{\bar y})$ are by definition essentially separably valued to find $\tilde{\rm Y}\subset{\rm Y}$ which is $\Cat0$ with the induced metric and such that ${\mbox{\boldmath$m$}}(u_t^{-1}({\rm Y}\setminus\tilde{\rm Y}))=0$ for every $t\in\mathbb{Q}^+$. We claim that $\tilde {\rm Y}$ satisfies the conclusion. To see this, pick $t\geq 0$, let $(t_n)\subset\mathbb{Q}^+$ be converging to $t$ and, up to passing to a non-relabeled subsequence, assume that $\sum_n{\sf d}_{L^2}(u_{t_{n+1}},u_{t_n})<\infty$.
Then from the triangle inequality in $L^2(\Omega)$ and monotone convergence we see that $\|\sum_n{\sf d}_{\rm Y}(u_{t_{n+1}},u_{t_n})\|_{L^2}\leq \sum_n{\sf d}_{L^2}(u_{t_{n+1}},u_{t_n})<\infty$, so that in particular for ${\mbox{\boldmath$m$}}$-a.e.\ $x\in\Omega$ we have $\sum_n{\sf d}_{\rm Y}(u_{t_{n+1}},u_{t_n})(x)<\infty$, which in turn implies that $(u_{t_n}(x))\subset \tilde{\rm Y}$ is a Cauchy sequence, so that its limit $v(x)$ also belongs to $\tilde{\rm Y}$. The same kind of argument also shows that $(u_{t_n})$ converges to $v$ in $L^2(\Omega,{\rm Y}_{\bar y})$ and since we know, by the continuity of $(u_t)$ as an $L^2(\Omega,{\rm Y}_{\bar y})$-valued curve, that $u_{t_n}\to u_t$ in $L^2(\Omega,{\rm Y}_{\bar y})$, we conclude that $u_t=v$, which proves our claim. \end{proof} To present our final result we need a bit of notation. Let ${\rm Y}$ be a \Cat0-space and $\tilde{\rm Y}$ a subspace which is also $\Cat0$ with the induced metric. Call $\mathcal I_{\tilde{\rm Y}}^{\rm Y}\colon \tilde{\rm Y}\to{\rm Y}$ the inclusion map. Then for every $y\in\tilde{\rm Y}$ the tangent space ${\rm T}_y\tilde{\rm Y}$ embeds isometrically into ${\rm T}_y{\rm Y}$ via the continuous extension of the map which sends $\alpha({\sf G}_y^z)'_0\in {\rm T}_y\tilde{\rm Y}$ to $\alpha(\mathcal I_{\tilde {\rm Y}}^{\rm Y}({\sf G}_y^z))'_0\in {\rm T}_y{\rm Y}$. In other words, we can regard a geodesic in $\tilde{\rm Y}$ also as a geodesic in ${\rm Y}$ and this provides a canonical immersion of ${\rm T}_y\tilde{\rm Y}$ in ${\rm T}_y{\rm Y}$ which for trivial reasons is an isometry. Abusing the notation a bit, we shall denote such an isometry by $\mathcal I_{\tilde{\rm Y}}^{\rm Y}$. \begin{proposition} Let ${\rm Y}$ be a \Cat0-space, ${\sf E}\colon {\rm Y}\to\mathbb{R}\cup\{+\infty\}$ a $\lambda$-convex and lower semicontinuous functional, $(y_t)$ a gradient flow trajectory for ${\sf E}$ starting from $y_0\in{\rm Y}$ and $\tilde{\rm Y}\subset {\rm Y}$ a subset which is also a $\Cat0$-space with the induced metric and such that $(y_t)\subset\tilde{\rm Y}$. Denote by $\tilde{\sf E}$ the restriction of ${\sf E}$ to $\tilde{\rm Y}$. Then, $-\partial^-{\sf E}(y_0)\neq\emptyset$ if and only if $-\partial^-\tilde{\sf E}(y_0)\neq\emptyset$ and, letting $v,\tilde v$ be the respective elements of minimal norm, we have $\mathcal I_{\tilde{\rm Y}}^{\rm Y}(\tilde v)=v$. Moreover, $(y_t)$ is also a gradient flow trajectory for $\tilde{\sf E}$. \end{proposition} \begin{proof} Assume that $-\partial^-\tilde{\sf E}(y_0)\neq\emptyset$. Then we know from Theorem \ref{thm:rightD} that $\frac1h({\sf G}_{y_0}^{y_h})'_0\to\tilde v$ as $h\downarrow0$. Then clearly $\mathcal I_{\tilde{\rm Y}}^{\rm Y}(\frac1h({\sf G}_{y_0}^{y_h})'_0)\to \mathcal I_{\tilde{\rm Y}}^{\rm Y}(\tilde v)$ and thus, by Theorem \ref{thm:rightD}, to conclude it is sufficient to prove that $|\partial^-{\sf E}|(y_0)<\infty$: in that case $\mathcal I_{\tilde{\rm Y}}^{\rm Y}(\frac1h({\sf G}_{y_0}^{y_h})'_0)$ converges, as $h\downarrow0$, to the element of minimal norm in $-\partial^-{\sf E}(y_0)$, which in particular is not empty. Since $\frac1h({\sf G}_{y_0}^{y_h})'_0\to\tilde v$ we have in particular that $\frac{{\sf d}_{\rm Y}(y_0,y_h)}{h}=|\frac1h({\sf G}_{y_0}^{y_h})'_0|_{y_0}\to |\tilde v|_{y_0}$ and thus $S:=\sup_{h\in(0,1)}\frac{{\sf d}_{\rm Y}(y_0,y_h)}{h}<\infty$.
By the contractivity property \eqref{eq:contr} we deduce that \[ \sup_{t,h\in(0,1)}\frac{{\sf d}_{\rm Y}(y_t,y_{t+h})}{h}<(e^\lambda\vee 1)S=:S' \] and thus, letting $h\downarrow0$, we deduce that $|\dot y_t^+|\leq S'$ for every $t\in(0,1)$. Taking into account \eqref{eq:metconv} and the lower semicontinuity of the slope recalled in Lemma \ref{lem:slopelsc} we conclude. Vice versa, assume that $-\partial^-{\sf E}(y_0)\neq\emptyset$. Then by Theorem \ref{thm:rightD} we know that $|\partial^-{\sf E}|(y_0)<\infty$ and since trivially we have $|\partial^-\tilde{\sf E}|\leq |\partial^-{\sf E}|\restr{\tilde {\rm Y}}$ we also have $|\partial^-\tilde{\sf E}|(y_0)<\infty$. Hence by Theorem \ref{thm:rightD} we deduce $-\partial^-\tilde{\sf E}(y_0)\neq\emptyset$ and the first part of the proof applies. The last statement is a consequence of the first part applied to $y_t$ for every $t>0$ and of Corollary \ref{cor:eqfor}. \end{proof}
\section{Introduction} Recent rehabilitation methods utilize a brain-computer interface (BCI) to induce brain plasticity for motor control, or to give patients some degree of self-sufficiency by letting them issue commands through thought \cite{pfurtscheller2008}. This approach has gained heightened interest in the research community and opened new possibilities for a considerable number of medical applications. However, significant work remains before this technology can be fully used in practice, as there is still no overwhelming evidence of functional recovery in BCI-based stroke rehabilitation \cite{gwentrup2011}. This paper presents task-oriented training for online stroke rehabilitation in which a haptic device is controlled through leftward and rightward movement via near-infrared spectroscopy (NIRS)-BCI \cite{coyle2004a}, as shown in Fig.~\ref{fig:setup}. This emerging modality of non-invasive BCI \cite{coffey2010,matthews2008} measures cortical hemodynamics and oxygenation status through the chromophore concentration levels of oxyhemoglobin (oxy-Hb) and deoxyhemoglobin (deoxy-Hb) \cite{zhang2009},\cite{An2013, Abibullaev2013, Abibullaev2014}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{setup.pdf} \caption{A subject commands the haptic device to move rightward and leftward through commands decoded from his brain signals, read through near-infrared spectroscopy-BCI. This experiment is performed based on both motor imagery and action observation. A demonstration video of the experiments is available here: \url{http://youtu.be/bYdJWdPn\_LI}. } \label{fig:setup} \end{figure} BCI-based rehabilitation has been the focus of many studies, covering its different aspects of signal, control, and usage \cite{nijholt2008}; interactive feedback and control strategies \cite{mcrespo2009,krepki2007}; progress of rehabilitation strategies \cite{sitaram2007}; motor imagery to facilitate rehabilitation \cite{dickstein2007}; and implications of BCI for rehabilitation \cite{jerbi2009}. More recent studies have looked into virtual reality and its applications to neuroscience research for neurorehabilitation \cite{bohil2011}; BCI in communication \cite{lance2012}; motor control and neural activities \cite{machado2010}; dependence on signal acquisition, validation for real-world use, and reliability of function \cite{shih2012}; and recovery of hand motor function \cite{mattia2013}. \begin{figure*}[t] \centering \includegraphics[width=0.72\textwidth]{f1.pdf} \caption{Flowchart of the task-oriented control of a haptic device via NIRS-BCI. Concentrations of oxy-Hb and deoxy-Hb are read from the 45 channels of NIRS. Input signals are pre-processed by identifying the more significant channels and by performing feature extraction through principal component analysis (PCA). Classification is performed via multiple support vector machines (SVMs), whose output is also used for tenfold cross validation (CV). The decoded outputs are then used to control movements of an external haptic device in either leftward or rightward motion. Success or failure of the required task is determined through the visual feedback of the haptic motion.} \label{fig:flowchart} \end{figure*} A considerable number of experiments related to BCI-based rehabilitation have been conducted.
These include minimal training and mental stress to patients \cite{bai2010}, rehabilitative intervention for hand plegia \cite{buch2008}, control of a 9-degrees-of-freedom (DOF) wheelchair-mounted robotic arm \cite{palankar2008}, and a virtual environment to facilitate neuroplasticity \cite{merians2009}. More recent experiments include detection of movement intention \cite{niazi2011}, an exoskeleton to control fingers with feedback \cite{rmurguialday2012}, removing artifacts in motor imagery \cite{murguialday2010}, calibrating imagery through passive movement \cite{ang2011}, studying motor learning after stroke \cite{meyer2012}, and a test of feasibility of single-trial, individually-tuned classifiers \cite{zimmermann2013}. Despite the above efforts, only a few BCI-based rehabilitation studies have included a haptic device in their approach. Interestingly, there is empirical evidence that tactile sensing through haptic feedback \cite{grodriguez2011} and vibro-tactile stimuli \cite{chatterjee2007} improves rehabilitation results. Our work aims to contribute to the same effort of incorporating a haptic device into BCI-based rehabilitation, and is implemented using both motor imagery and combined motor imagery-action observation methods. To drive the haptic device, signals are pre-processed and the most significant channels are identified; the processed signals are then classified to move the haptic device in the desired direction. Visual feedback determines the success or failure of the desired action based on the subject's brain signal command. Our experimental setup will be described and results of training the classifier will be shown. Online and offline test results will be presented, which determine the efficacy of our proposed method for stroke rehabilitation. This work proceeds as follows. Section~\ref{sec:matmethods} presents how our classifiers are trained and optimized through offline supervised learning. Once these classifiers are optimized, we test them with offline and online data sets. The offline test data results are shown in Section~\ref{sec:resultsoffline}, while the more challenging case of testing via online data streaming is shown in Section~\ref{sec:resultsonline}. Lastly, Section~\ref{sec:discussion} presents the discussion and comparison between our results and previously published results in BCI-based rehabilitation. \section{Materials and Methods} \label{sec:matmethods} Offline supervised learning is used to train and optimize our classifiers. First, raw signals are pre-processed using feature extraction through principal component analysis (PCA). This reduces the noise from the raw signal read through NIRS-BCI. Furthermore, the more significant channels of NIRS-BCI are identified through recursive channel elimination (RCE). This eliminates the non-task-relevant channels, which can be another source of signal noise. From the processed signals read through task-relevant channels, classification is performed based on the actions commanded by the subject. Our classification uses multiple support vector machines (SVMs), where a majority voting mechanism is then used to further refine the classification process. Output from the SVMs is used for tenfold cross validation (CV) in the signal pre-processing stage. Test data from both offline and online data sets verify the efficacy of the trained classifiers. The flowchart of the entire experimental process is shown in Fig.~\ref{fig:flowchart}.
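To make this training-and-validation loop concrete, the following is a minimal sketch of the pipeline (illustrative only, not the authors' implementation; it uses scikit-learn and randomly generated placeholder data in place of real oxy-Hb features):

\begin{verbatim}
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: 70 trials (35 task, 35 rest) of flattened features
rng = np.random.default_rng(0)
X = rng.standard_normal((70, 450))  # e.g., 45 channels x 10 time samples
y = np.repeat([1, -1], 35)          # +1 = task, -1 = rest (baseline)

# PCA feature extraction followed by a linear-kernel SVM, as in Fig. 2
clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear", C=1.0))

# Tenfold cross validation to estimate the generalization performance
scores = cross_val_score(clf, X, y, cv=10)
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
\end{verbatim}

Channel selection via RCE is omitted in this sketch; the deflation-based PCA of Algorithm~\ref{alg:pca} and the majority voting of Eq.~\ref{eqn:FzU} are sketched separately below.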
\begin{figure*}[t] \centering \includegraphics[width=0.7\textwidth]{f2.pdf} \caption{(A) The locations of NIRS emitter-receiver optodes with 30-mm interoptode distance. The red circles represent emitters and blue circles represent receivers. The yellow circles represent the locations of the 45 channels recorded. (B) The timing of a single experimental trial of data acquisition, shown with its corresponding oxy-Hb, deoxy-Hb, and total Hb concentrations at each stage of the task.} \label{fig:session} \end{figure*} \subsection{Data Acquisition} The experimental subjects consist of seven healthy, right-handed males aged $28\pm 4$ years. All study participants gave informed consent. The ethical approval of the research was granted by the research ethics committees of the Daegu-Gyeongbuk Institute of Science and Technology. The NIRS-BCI used in our work has a 45-channel optical brain-function imaging system for data acquisition (FOIRE-3000, Shimadzu Co. Ltd., Japan). It uses safe near-infrared light to assess the oxy-Hb and deoxy-Hb concentrations of the brain at wavelengths of 780 nm, 805 nm, and 830 nm. This study uses concentration levels of oxy-Hb for analysis and classification, which are found to be more correlated with regional cerebral blood flow (rCBF) than deoxy-Hb and total-Hb \cite{gratton2005}. An increase in rCBF reflects an increase in neural activity \cite{jueptner1995}. We placed the optical fiber probes on the frontoparietal regions of the brain cortex to cover an area of $21\times 12$ cm as shown in Fig.~\ref{fig:session}A. The subjects performed three types of mental tasks denoted by $\{t_{right}\}$, $\{t_{left}\}$ and $\{t_{rest}\}$ as follows: \begin{itemize} \item $\{t_{right}\}$ - subjects repetitively performed an imaginary rightward movement of the haptic device, \item $\{t_{left}\}$ - subjects repetitively performed an imaginary leftward movement of the haptic device, and \item $\{t_{rest}\}$ - subjects rested and performed no actual task. \end{itemize} The signals during rest were used as the baseline in the classification process. Each subject performed five sessions of mental tasks, giving a total of 35 sessions across all subjects. We split every session into three blocks $[Rest \rightarrow Task\rightarrow Rest]$ as shown in Fig.~\ref{fig:session}B. In the same figure, the corresponding levels of oxy-Hb, deoxy-Hb, and total Hb are also shown during one experimental session of an MI task. \subsection{Types of Experiments} This work uses two types of tasks to control the haptic device: 1.) a motor imagery (MI) task, and 2.) a combined action observation (AO) and MI task. The latter is also referred to as an AOMI task. In MI tasks, subjects merely imagine the task without an external cue. In AOMI tasks, the subject performs an AO task followed by an MI task. The AO task consists of watching a video that shows the movements of a subject's forearm in the intended direction. Our motivation for the AOMI task experiment is based on earlier studies related to the putative human mirror neuron system that describe how predictions and interpretations of the actions of others were exploited for BCI systems \cite{jarvelainen2004} \cite{tkach2007}. We want to investigate whether the combined AOMI task provides higher BCI classification rates than a pure MI task. \subsection{Signal Pre-processing} We consider two significant factors that affect the accuracy of a BCI system: 1.) background noise, and 2.) task-irrelevant channels.
The noise interference in hemodynamic signals may arise from instrumental, experimental, or physiological sources. In particular, physiological noise often overlaps in frequency with the expected neural signals \cite{coyle2004a}. In this study, we employ PCA for noise reduction and feature extraction, which has been shown to be reliable in eliminating background noise in NIRS signals \cite{virtanen2009}. Other noise-reduction methods use Wiener filtering \cite{izzetoglu2005}, wavelets \cite{jang2009,abibullaev2012}, and adaptive filtering \cite{zhang2009}. Selecting task-relevant channels may yield the required accuracy with greater convenience \cite{lal2004,schroeter2004}. Unfortunately, optimal channel selection is not a trivial task, in particular for NIRS-BCI, when extracting neurophysiologic knowledge corresponding to a specific mental task. The next section describes our BCI channel selection strategy in more detail. \subsection{Principal Component Analysis} A single-trial neural dataset is denoted by a matrix $X\in R^{l\times m}$, where $l$ is the number of channels and $m$ is the number of samples; its rows $\{{x_1},{x_2},...,{x_l}\}^T$ are the channel observations with $m$ features or dimensions. Our goal is to find a reduced data matrix in $R^{l\times k}$ with $k<m$. We employ a PCA method for this purpose. It is based on projecting signal features ${x}\in R^{m}$ onto a subspace defined by a set of orthonormal vectors $u\in R^m$ that maximize the data variance $E$, \begin{eqnarray} \label{eqn:maxE} &&\mbox{maximize}\;\;\; E = u^T X^T Xu\\ &&\mbox{subject to}\;\;\; \|u\| = 1\nonumber \end{eqnarray} Solving the optimization in Eq.~\ref{eqn:maxE} by the Lagrangian method yields the eigenvalue equation $X^T Xu = \lambda u$. It follows that to maximize the variance, the chosen $u$ must be the eigenvector of $X^T X$ corresponding to the largest eigenvalue. In order to compute $k$ directions, we must find the eigenvectors $u_1,...,u_k$ corresponding to the $k$ largest eigenvalues $\lambda_1,...,\lambda_k$, ordered such that $\lambda_1\geq\lambda_2\geq...\geq\lambda_k$. Algorithm~\ref{alg:pca} shows our method to find the PCA projection directions, where at each step the leading direction is extracted and then deflated from the data. The resulting features extracted by PCA are $Xu_1,...,Xu_k$. \begin{algorithm} \KwIn{Data matrix $X^{l\times m}$, dimension $k$} Process: $X_1 =X$\; \ForEach{$j = 1,...,k$}{ Select $u_j$ as the first eigenvector of $X_{j}^T X_j$\; $X_{j+1} = X_{j}\big(I-\frac{u_{j}u_{j}^T }{u_{j}^T u_{j}}\big)$} \KwOut{ Projection directions $u$ and features $Xu$} \caption{PCA Pre-processing} \label{alg:pca} \end{algorithm}
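A direct transcription of Algorithm~\ref{alg:pca} into NumPy might look as follows (an illustrative sketch, not the code used in our experiments); note that \texttt{numpy.linalg.eigh} returns eigenvalues in ascending order, so the last column of the eigenvector matrix corresponds to the largest eigenvalue:

\begin{verbatim}
import numpy as np

def pca_directions(X, k):
    """Deflation-based PCA of Algorithm 1: returns the projection
    directions u_1..u_k and the extracted features X u_1, ..., X u_k."""
    Xj = X.copy()
    directions = []
    for _ in range(k):
        # First eigenvector of Xj^T Xj (largest eigenvalue)
        _, V = np.linalg.eigh(Xj.T @ Xj)
        u = V[:, -1]
        directions.append(u)
        # Deflate: remove the direction u from the rows of Xj
        Xj = Xj @ (np.eye(len(u)) - np.outer(u, u) / (u @ u))
    U = np.column_stack(directions)
    return U, X @ U

# Example: a 45-channel x 100-sample single-trial matrix, reduced to k = 20
X = np.random.default_rng(0).standard_normal((45, 100))
U, features = pca_directions(X, k=20)
print(U.shape, features.shape)   # (100, 20) (45, 20)
\end{verbatim}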
\subsection{Recursive Channel Elimination} We employ a recursive channel elimination (RCE) algorithm \cite{lal2004},\cite{schroder2005} for identifying the recording positions most relevant to cognitive tasks. The method is based on recursive feature elimination \cite{guyon2002}, which is an iterative, embedded, greedy backward method of feature selection. The best channels are determined by training several SVMs and exploring their margin characteristics. Algorithm~\ref{alg:rce} describes our method for channel selection, which can be summarized as follows: \begin{itemize} \item make ten disjoint training datasets (Line 2) for tenfold CV, \item train a linear SVM (Line 9) and estimate the generalization error (Line 10) for each fold, \item estimate the rank of the channels based on a margin ranking criterion (Lines 11-14), and \item eliminate channels with the lowest ranking score criterion (Line 15). \end{itemize} We repeat the procedure until the required number of channels is retained throughout all ten datasets. We define a threshold value for the number of channels that potentially need to be retained. This is done by trial and error on the basis of the test error rate at Line 10. We tried several channel combinations and decided to select only 20 of the 45 channels for subsequent classification. \begin{algorithm} \KwIn{$\{x_{i},y_{i}\}_{i=1}^{l}$, $x_i\in X$, $y_i\in\{\pm1\}$, training set with $l$ channels related to either $\{t_{right}\}$ or $\{t_{left}\}$ tasks.} Perform tenfold CV: divide the training set (of size $m$) into $p$ disjoint sets $S_1,...,S_p$ of equal size $m/p$\; \ForEach{$S_j$}{ Initialize: $Ranked = [\emptyset]$\; Surviving channels $Ch_{j} = [1,2,...,l]$ \While{Surviving channel list is not empty}{ \ForEach{channel in $Ch_{j} = [1,2,...,l]$}{ Temporarily remove the current channel from $Ch_{j}$\; Train a linear SVM with the remaining channels of $S/S_j$ and estimate $|w|$ (from Eq.~\ref{eqn:minalpha}, Eq.~\ref{eqn:fz})\; Test it on $S_j\mapsto \{\pm1\}$\; Compute the ranking score: $R_{j} =\frac{1}{|Ch_{j}|}\sum_{l\in Ch_{j}}|w_{l}| $\; } Locate channels with smallest ranking criterion: $RankChan = argmin\{R_{j}\}$\; Update channel rank: $Ranked = [RankChan,Ranked]$\; Eliminate the channel with smallest $R_{j}$ score\;} } \KwOut{Extracted channel list: $Ranked$} \caption{Recursive Channel Elimination} \label{alg:rce} \end{algorithm} \subsection{Classification} Given a pre-processed training dataset $X:=\{x_1,...,x_m\}$ with corresponding labels $Y:=\{y_1,...,y_m\}$, where $y_i\in\{\pm1\}$ for $i=1,\ldots,m$, our next goal is to estimate a function $f:X\rightarrow \{\pm 1\}$ to predict whether a new signal observation $z\in X^{*}$ will belong to class $+1$ or $-1$. We define the classes for the mental tasks $[\{t_{right},+1\},\{t_{rest},-1\}]$ as patterns related to rightward movement ($y=+1$) and the baseline ($y=-1$). Similarly, we define $[\{t_{left},+1\},\{t_{rest},-1\}]$ as patterns related to leftward movement and the baseline ($y=-1$). We estimate a set of SVM functions for classification with a soft margin loss function $L(x,y,f(x)) = \mbox{max}(0,1-yf(x))$. The solution of the SVM is based on the following optimization \cite{scholkopf2002}: \begin{eqnarray} \label{eqn:minalpha} && \operatorname*{min}_{\alpha\in R,b\in R} \bigg\{\frac{1}{C}\sum_{i=1}^{m}\xi_{i}+\sum_{i,j=1}^{m}\alpha_{i}y_{i}K(x_i,x_j)\alpha_{j}y_{j}\bigg\} \nonumber\\ && \mbox{subject to}\;\;\; y_j\bigg(\sum_{i=1}^{m}\alpha_{i}y_{i}K(x_{i},x_{j})+b\bigg)\geq 1-\xi_{j} \nonumber\\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \xi_j\geq 0, \forall j = 1,2,...,m \end{eqnarray} \noindent where $\alpha = (\alpha_{1},...,\alpha_{m})$ are Lagrange multipliers, the $\xi_{j}$ are slack variables and $C>0$ is a user-defined regularization parameter. The corresponding decision function is given by \begin{equation} \label{eqn:fz} f(z) = \mbox{sign}\bigg[\big(\sum_{i=1}^{m}\alpha_{i}y_{i}K(x_{i},z)\big)+ b\bigg].
\end{equation} In Eq.~\ref{eqn:minalpha} and Eq.~\ref{eqn:fz}, $K(x,z)$ is a reproducing kernel function which gives rise to a Gram matrix $K_{i,j}:=K(x_i,z_j)$, $K\in R^{m\times m}$ [\textbf{20}]. This matrix contains all the information needed for data analysis and modeling with the SVM algorithm. Note that we use this formulation of the SVM for two different purposes. First, we use SVMs for the recursive channel elimination method in Algorithm~\ref{alg:rce}. Second, SVMs constitute the base functions of the multiple classifiers which we use for decoding signal features related to MI and AOMI tasks. \begin{algorithm} \textbf{Define}: $\mathcal{E}_{1} = \{f_{1},...,f_{6}\}$ and $\mathcal{E}_{2}=\{f_{1}^{*},...,f_{6}^{*}\}$ \; \KwIn{$\{x_{i},y_{i}\}_{i=1}^{m}$, $x_i\in X$, $y_i\in\{\pm1\}$, training set related to either $\{t_{right}\}$ or $\{t_{left}\}$ mental tasks.} \ForEach{$f_{i}\in\mathcal{E}_{1}$ or $f_{j}^{*}\in\mathcal{E}_{2}$, $i,j = 1,...,6$ }{ Perform tenfold CV and a search for the optimal $C$\; Divide the training set of size $m$ into $p$ disjoint sets $S_1,...,S_p$ of equal size $m/p$\; \ForEach{$S_j$}{ Train a $f_{i}(x)$ on $S/S_j$\; Test it on $S_j\mapsto AUC(j)$\; \KwOut{Optimized classifier model : $f(\cdot,\alpha,b)$} }} \KwOut{ Set of optimized $\mathcal{E}_{1} = \{f_{1}(\cdot,\alpha_{1}),...,f_{6}(\cdot,\alpha_{6})\}$ $\mathcal{E}_{2}=\{f_{1}^{*}(\cdot,\alpha_{1}),...,f_{6}^{*}(\cdot,\alpha_{6})\}$. } \KwIn{$\{z_{i}\}_{i=1}^{n}$, $z_i\in X$ unseen test patterns related to either $\{t_{right}\}$ or $\{t_{left}\}$ mental tasks.} \ForEach{$f_{i}(\cdot,\alpha_{i})\in\mathcal{E}_{1}$ or $f_{j}^{*}(\cdot,\alpha_{j})\in\mathcal{E}_{2}$, $i,j = 1,...,6$}{ \textbf{Evaluate $\mathcal{E}$}: $f_{i}(z_{j},\alpha_{i})\rightarrow y_{j,i}$, where $y_{j,i}\in\{\pm1\}$ are the entries of Table 1, for $i= 1,...,6$ classifiers and $j = 1,...,n$ test patterns\; (1)-\textit{Majority vote}: define $k$-of-$n$ majority voting classifiers as defined in Eq.~\ref{eqn:FzU}.\; (2)-\textit{Output}: compute the final AUC value for the majority classifiers $\mathcal{E}\rightarrow AUC$\;} \KwOut{$\mathcal{E}_{1}\rightarrow AUC_{1}$ and $\mathcal{E}_{2}\rightarrow AUC_{2}$ } \caption{Multiple SVM Training and Testing} \label{alg:msvm} \end{algorithm} \begin{table}[h!b!p!] \caption{Structure of multiple outputs} \centering \begin{tabular}{c|c c c c c} \hline \\[-4pt] ~ & $f_1$ & $\cdots$ & $f_i$ & $\cdots$ & $f_n$ \\ \hline\vspace{4pt} $x_1$ & $y_{1,1}$ & $\cdots$ & $y_{1,i}$ & $\cdots$ & $y_{1,n}$ \\ $\vdots$ & $\vdots$ & ~ & $\vdots$ & ~ & $\vdots$ \\ $x_j$ & $y_{j,1}$ & $\dots$ & $y_{j,i}$ & $\dots$ & $y_{j,n}$ \\ $\vdots$ & $\vdots$ & ~ & $\vdots$ & ~ & $\vdots$ \\ $x_m$ & $y_{m,1}$ & $\dots$ & $y_{m,i}$ & $\dots$ & $y_{m,n}$ \\ \end{tabular} \end{table} \subsection{Multiple SVM Classifiers} Instead of training a single classifier, we train multiple SVMs with the purpose of further improving the overall BCI accuracy. We consider $n$ classifier functions $\{f_1,f_2,...,f_n\}$ and a data set $\{(x_i,y_i)^{m}_{i=1}\},x_i\in X,\; y\in Y$. Each classifier $f_i: x\rightarrow \{\pm 1\}$, $i=1,\ldots,n$, is trained independently. The outputs from all classifier functions are then collected as an $m$-dimensional binary vector $y = [y_{1,i},...,y_{m,i}]$, such that $y_{j,i} = 1$ if $f_i$ recognizes $x_j$ and 0 otherwise, for $i = 1,...,n$.
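A compact scikit-learn sketch of this ensemble construction is given below. The bootstrap resampling used to differentiate the six base learners is our own assumption (the paper trains them separately on the same tenfold-CV data), and the voting routine implements a count-based reading of the $k$-of-$n$ rule defined in Eq.~\ref{eqn:FzU} below.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_ensemble(X, y, n=6, Cs=(0.01, 0.1, 1.0, 10.0), seed=0):
    # X: (trials, features) array, y: labels in {-1, +1}.
    # Each base learner gets its own tenfold-CV search over C
    # (training phase of Algorithm 3); a bootstrap resample is
    # assumed here to make the n linear SVMs differ.
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n):
        idx = rng.choice(len(y), size=len(y), replace=True)
        search = GridSearchCV(SVC(kernel="linear"),
                              {"C": list(Cs)}, cv=10)
        search.fit(X[idx], y[idx])
        ensemble.append(search.best_estimator_)
    return ensemble

def majority_vote(ensemble, z, k):
    # k-of-n rule: +1 / -1 if at least k classifiers agree,
    # otherwise the undecided output U.
    votes = np.array([f.predict(z[None, :])[0] for f in ensemble])
    if (votes == +1).sum() >= k:
        return +1
    if (votes == -1).sum() >= k:
        return -1
    return "U"
\end{verbatim}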
Table 1 shows that the number of correct assignments is $N_1(f_i) = \sum_{j=1}^{m}y_{j,i}$ and the number of mistakes is $N_0(f_i) = m - \sum_{j=1}^{m}y_{j,i}$. In order to make the final decision from the set of functions $\{f_1,...,f_n\}$, we define the following majority voting rule: \begin{equation} \label{eqn:FzU} F(z) = \left\{ \begin{array}{l l} +1 & \quad \textrm{if\;\;\; $\sum_{i=1}^{n}f_{i}(z)\geq k$}\\ -1 & \quad \textrm{if\;\;\; $\sum_{i=1}^{n}f_{i}(z)\leq n-k$}\\ U & \quad \textrm{otherwise}\\ \end{array} \right. \end{equation} \noindent where $k<n$ classifiers make similar predictions, defining a $k$-of-$n$ majority classifier with $k\geq\frac{n}{2}$. In this case, $U$ represents the unknown output, i.e., failure to predict either class. Thus, we have three possible outcomes from all classifiers, $F:X\rightarrow \{+1,-1,U\}$. Algorithm~\ref{alg:msvm} gives the details of the proposed multiple SVM classifier training and validation. This consists of two main phases, namely, the training phase and the testing phase. In both phases, we train and test two different groups of multiple classifiers, $\mathcal{E}_1$ and $\mathcal{E}_2$. The group $\mathcal{E}_1$ is trained by taking the examples from the rightward task $\{t_{right}\}$ as positive and the examples from the rest task $\{t_{rest}\}$ as negative. Likewise, the group $\mathcal{E}_2$ is trained by taking the examples from the leftward task $\{t_{left}\}$ as positive and the examples from the rest task $\{t_{rest}\}$ as negative. Each group consists of six base SVM functions with linear kernels. In the training phase, each individual base SVM function is trained separately using the same input data from the tenfold CV (Algorithm~\ref{alg:msvm}, Lines 1-9). During the testing phase, unseen examples are applied to all base functions simultaneously in real time. Further, a collective decision is obtained on the basis of the majority voting scheme using Eq.~\ref{eqn:FzU} (Algorithm~\ref{alg:msvm}, Lines 12-16). In other words, once each of the six base classifiers has cast its vote, the majority voting strategy assigns the test patterns to the class with the largest number of votes, and this output is provided as the final prediction. Then, the final decision on which direction to move the haptic device with the output control command is based on the area under the receiver operating characteristic (ROC) curve, also referred to as the AUC. The AUC is a comparatively robust measure that is insensitive to class distributions and misclassification costs \cite{bradley2009}. For instance, AUC $=1$ indicates perfect classification, whereas AUC $=0.5$ indicates that the result from the classifier is no better than a random guess. In our case, an AUC $>0.70$ moves the haptic device in the desired direction. \begin{figure*}[htbp] \centering \includegraphics[width=0.9\textwidth]{f3.pdf} \caption{Offline classification results from seven subjects for random test patterns. Plots (A) and (B) represent decoding of motor imagery (MI) tasks: (A) for the $\{t_{right}\}$ task, and (B) for the $\{t_{left}\}$ task. Plots (C) and (D) represent decoding of action observation-motor imagery (AOMI) tasks: (C) for $\{t_{right}\}$, and (D) for $\{t_{left}\}$.
A value of AUC $>0.70$ is acceptable.} \label{fig:offlineresults} \end{figure*} \section{Offline Test Results and Analysis} \label{sec:resultsoffline} For each of the seven subjects, we trained subject-specific multiple SVM classifiers (Algorithm~\ref{alg:msvm}) with an input dataset consisting of 20 channels selected using the RCE algorithm. The relevant channel locations varied among subjects and sessions, and were updated every time a subject performed mental tasks. The search for an optimal penalty parameter was conducted to obtain the best CV performance. Then we gathered an offline dataset to test the performance of our resulting classifiers. The resulting AUC values for moving the haptic device in the rightward and leftward directions, for both MI and AOMI task commands, are shown in Fig.~\ref{fig:offlineresults}. We take AUC $>0.70$ to be acceptable, in which case the haptic device is moved in the desired direction. For the sake of discussion, we introduce the following three classifier performance regions: \begin{itemize} \item $\Theta_{best}\;\;\; := (0.80,1]$,\;\;\;\;\; if AUC $> 0.80$ , \item $\Theta_{accept} := (0.70,0.80]$, if $0.70 < \mbox{AUC} \leq 0.80$ , \item $\Theta_{worst\;} := (0.60,0.70]$, if $0.60 < \mbox{AUC} \leq 0.70$ .\\ \end{itemize} The haptic system is the PHANTOM Premium 1.0 haptic device (19.5 cm $\times$ 27 cm $\times$ 37.5 cm workspace, two active degrees of freedom). Real-time neural data were acquired through a LabVIEW - NIRS interface, wherein the proposed algorithms were implemented. Let us consider the decoding results of MI task commands in Figs.~\ref{fig:offlineresults}A (rightward movement) and \ref{fig:offlineresults}B (leftward movement). In particular, for the rightward movement two subjects showed superior results: $S4$ (AUC=0.9499) and $S2$ (AUC=0.8573). In addition, three more subjects showed satisfactory results: $S1$ (AUC=0.8098), $S5$ (AUC=0.8077), and $S7$ (AUC=0.8213). However, we noted inferior classifier performance for the remaining two subjects: $S3$ (AUC=0.6866) and $S6$ (AUC=0.6909). For the leftward movement, all subjects showed satisfactory results, with three subjects, $S4$, $S5$, and $S7$, showing superior results. The inconsistency of results in the offline mode can be attributed to BCI intersession variability \cite{gerven2009}. This problem of dramatic variability arises in neural signal measurements obtained during different recording sessions, even for the same subject. In addition, many other factors may affect the characteristics of neural signal measurements, resulting in variations. Such factors include the subject's condition, mood, fatigue, and drowsiness, or even the subject's level of attention to a particular mental task \cite{blankertz2008}. We subsequently investigated AOMI task commands. The plots of the decoded signal patterns are shown in Fig.~\ref{fig:offlineresults}C for the rightward task command, and Fig.~\ref{fig:offlineresults}D for the leftward task command. Significant improvements in AUC values with AOMI tasks were noticeable. For instance, superior results were observed for four subjects ($S1$, $S2$, $S4$, and $S7$) in the rightward command task, with only one subject ($S3$) consistently remaining with unsatisfactory results. In addition, all subjects showed improved performance in AOMI compared to MI, except $S4$, whose AUC degraded by 0.0084 but still remained superior at AUC=0.9415.
One subject, $S6$, who previously showed unsatisfactory results with MI, now performed much better with an AUC of 0.8216. For the leftward command task, all the subjects showed satisfactory results, with four subjects ($S2$, $S3$, $S4$, $S5$) showing superior results. Results degraded for subjects $S1$, $S6$, and $S7$, but remained within the satisfactory region. In summary, the offline results showed that our proposed classification method for reading brain signal commands to move the haptic device leftward and rightward achieved the desired motion in 25 out of 28 test cases, with only three unsatisfactory cases. Furthermore, we observed that signal patterns using AOMI task commands produced better classification results than those using pure MI task commands. This offline analysis may seem uninformative so far. However, a more important aspect of our research is the stable performance of the derived classification models during the real-time BCI experiments. In the next section we report the online results. \begin{table*}[ht] \caption{Experiment 1: The online performance of classifiers in decoding signals corresponding to pure motor imagery (MI) task commands of $\{t_{right}\}$, $\{t_{left}\}$, and $\{t_{rest}\}$.} \centering \begin{tabular}{l | c c c|c c c| c c c} \hline\hline\\[-5pt] Subjects &~ & Session 1 & ~ & ~ & Session 2 & ~ & ~ & Session 3 & ~\\ \\[-2ex] \rowcolor{LightCyan} ~ & $\{t_{right}\}$ & $\{t_{left}\}$ & $\{t_{rest}\}$ & $\{t_{right}\}$ & $\{t_{left}\}$ & $\{t_{rest}\}$ & $\{t_{right}\}$ & $\{t_{left}\}$ & $\{t_{rest}\}$ \\ \rowcolor{LightCyan} ~ & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} \\ \hline\\[-1ex] Subject 1 & 0.7781 & \textbf{0.8101} & 0.1824 & \textit{0.6987} & 0.7754 & 0.1624 & 0.7141 & \textit{0.6998} & 0.2811 \\ \hline\\[-1ex] \rowcolor{Gray} Subject 2 & \textbf{0.8115} & \textit{0.6755} & 0.3045 & 0.7375 & \textit{0.6589} & 0.1847 & 0.7157 & \textbf{0.8095} & 0.1450 \\ \hline\\[-1ex] Subject 3 & 0.7801 & 0.7787 & 0.2104 & 0.7584 & \textbf{0.8201} & 0.1279 & \textit{0.6590} & 0.7189 & 0.2515 \\ \hline\\[-1ex] \rowcolor{Gray} Subject 4 & 0.7124 & \textbf{0.8441} & 0.1980 & \textbf{0.8014} & 0.7352 & 0.1848 & 0.7684 & 0.7871 & 0.2801 \\ \hline\\[-1ex] Subject 5 & \textbf{0.8380} & 0.7600 & 0.1709 & 0.7412 & 0.7278 & 0.2812 & \textit{0.6971} & 0.7358 & 0.2103 \\ \hline\\[-1ex] \rowcolor{Gray} Subject 6 & 0.7312 & 0.7112 & 0.1782 & 0.7813 & 0.7177 & 0.2104 & 0.7300 & 0.7784 & 0.2081 \\ \hline\\[-1ex] Subject 7 & \textit{0.6819} & 0.7211 & 0.1541 & 0.7987 & \textbf{0.8380} & 0.1784 & 0.7630 & \textit{0.6798} & 0.2765 \\ \hline\hline\\[-1ex] \rowcolor{LightCyan} \textbf{Mean} & 0.7618 & 0.7572 & 0.1997 & 0.7596 & 0.7533 & 0.1899 & 0.7210 & 0.7441 & 0.2360 \\ \hline\\[-1ex] \textbf{S.D.} & 0.0557 & 0.0590 & 0.0496 & 0.0371 & 0.0622 & 0.0475 & 0.0378 & 0.0484 & 0.0509 \\ \hline\hline\\[-5pt] \end{tabular} \label{table:TABLE 1:Experiment 1} \end{table*} \section{Online Test Results and Analysis} \label{sec:resultsonline} In this experiment, NIRS reads input brain signals from the subjects to move the haptic device in real time. As in the offline case, we use both MI and AOMI task commands. Communication between NIRS and the haptic device is established through the user datagram protocol (UDP). The online experimental steps are listed from Line 12 onwards of Algorithm~\ref{alg:msvm}.
The input data defined in Line 12 correspond to streaming data given in real time to the optimized classifiers. The final output command is determined on the basis of the AUC value. We conducted both experiments in at least five sessions, not exceeding one session per day or eight sessions in total for a given subject. The sessions were organized by inserting AO into the data acquisition protocol, as shown in Fig.~\ref{fig:session}B. We set the timing of session blocks as shown in Fig.~\ref{fig:protocol}. The input consisted of test points 3-5~$sec$ long for $\mathcal{E}_1$ and $\mathcal{E}_2$, which were equivalent to 42 samples and 20 channels, that is, ${X}\in R^{42\times 20}$. The classifier performance is measured during MI task execution periods as shown in Fig.~\ref{fig:protocol}. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{f4.pdf} \caption{Online experimental protocol for data acquisition. (A) Pure motor imagery (MI) task command. (B) Action observation-motor imagery (AOMI) task command. In both experiments, the classifiers decode a user's intent at every MI execution.} \label{fig:protocol} \end{figure} \subsubsection{Experiment 1 (MI Task)} Table 2 lists classification results from three different sessions corresponding to the pure MI task command. In general, lower classification accuracies were obtained in the online experiment than in the offline experiments. A strong variability is observed in the performances of classifiers across different subjects, sessions, and tasks. The mental tasks were more recognizable in some subjects than in others, resulting in larger deviations in AUC values. Classifier performances in the $\Theta_{best}$ region are shown in boldface, while those in $\Theta_{worst}$ are italicized. The trained classifier was successful in 55 out of the total 63 cases, with eight unacceptable cases. On average, the classifier performances were ($AUC = 0.74\pm 0.2$), within the $\Theta_{accept}$ region. We obtained the highest accuracy ($AUC=0.8441$) in classifying $\{t_{left}\}$ data from Subject 4. We note that $\Theta_{best}$ performances were not consistent when classifying the same mental task by the same subject. This emphasizes the major BCI problem of inter-subject and inter-session variability, with large standard deviations as shown in the table. Even if a participant performed well in one session, the performance within the session may have varied greatly among the $\Theta_{best}$, $\Theta_{accept}$ and $\Theta_{worst}$ regions. Let us consider the rest task $\{t_{rest}\}$ in all three sessions. We note that during the first 3~$s$ of a task period the classifiers produced increased false positive rates by detecting task-relevant signals as baseline signals. This is because of the high inherent latency of the brain hemodynamic response, which occurs over the interval 4-8~$s$ after the task onset \cite{coffey2010},\cite{gratton2005}. Moreover, we observed the occurrence of the $U$ case in Eq.~\ref{eqn:FzU} when the multiple classifiers did not detect any mental activity.
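To make the online pipeline concrete, the sketch below assembles one decision cycle: score the streamed test windows with the ensemble, gate on the AUC threshold, and send the movement command over UDP. The address, the flattening of the $42\times 20$ window, and the mapping of undecided votes to a neutral score are illustrative assumptions of ours, and \texttt{majority\_vote} is the routine sketched earlier.
\begin{verbatim}
import socket
import numpy as np
from sklearn.metrics import roc_auc_score

HAPTIC_ADDR = ("192.168.0.10", 5005)  # assumed address of the haptic PC

def online_step(ensemble, windows, labels, direction, k=4, theta=0.70):
    # windows: streamed 42 x 20 test blocks; labels: their +/-1 targets.
    votes = [majority_vote(ensemble, w.ravel(), k) for w in windows]
    scores = [v if v != "U" else 0 for v in votes]  # U -> neutral score
    auc = roc_auc_score(labels, scores)
    if auc > theta:                     # AUC gate described above
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(str(direction).encode("ascii"), HAPTIC_ADDR)
    return auc
\end{verbatim}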
\begin{table*}[ht] \caption{Experiment 2: The online performance of classifiers in decoding signals corresponding to action observation-motor imagery (AOMI) task commands of $\{t_{right}\}$, $\{t_{left}\}$, and $\{t_{rest}\}$.} \centering \begin{tabular}{l| c c c| c c c| c c c} \hline\hline\\[-5pt] Subjects &~ & Session 1 & ~ & ~ & Session 2 & ~ & ~ & Session 3 & ~\\ \\[-2ex] \rowcolor{LightCyan} ~ & $\{t_{right}\}$ & $\{t_{left}\}$ & $\{t_{rest}\}$ & $\{t_{right}\}$ & $\{t_{left}\}$ & $\{t_{rest}\}$ & $\{t_{right}\}$ & $\{t_{left}\}$ & $\{t_{rest}\}$ \\ \rowcolor{LightCyan} ~ & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} & {\tiny (AUC)} \\ \hline\\[-1ex] Subject 1 & \textbf{0.8103} & 0.7811 & 0.1441 & 0.7217 & \textbf{0.8125} & 0.1341 & \textbf{0.8974} & 0.7489 & 0.1481 \\ \hline\\[-1ex] \rowcolor{Gray} Subject 2 & 0.7357 & \textit{0.6982} & 0.2105 & 0.7875 & 0.7510 & 0.1671 & \textbf{0.8300} & 0.7712 & 0.1901 \\ \hline\\[-1ex] Subject 3 & 0.7508 & 0.7517 & 0.2100 & 0.8780 & 0.7982 & 0.1569 & 0.7124 & 0.7680 & 0.2074 \\ \hline\\[-1ex] \rowcolor{Gray} Subject 4 & \textbf{0.8142} & 0.7802 & 0.1870 & 0.7984 & \textit{0.6815} & 0.1908 & \textbf{0.8670} & 0.7046 & 0.2011 \\ \hline\\[-1ex] Subject 5 & 0.7918 & \textit{0.6901} & 0.2650 & 0.7550 & 0.7201 & 0.2233 & 0.7919 & \textbf{0.9308} & 0.1030 \\ \hline\\[-1ex] \rowcolor{Gray} Subject 6 & \textbf{0.8301} & 0.7011 & 0.1982 & \textbf{0.8183} & 0.7870 & 0.1789 & 0.7710 & 0.7118 & 0.1900 \\ \hline\\[-1ex] Subject 7 & 0.7416 & 0.7809 & 0.1455 & 0.7004 & 0.7909 & 0.1399 & 0.7321 & 0.7808 & 0.1987\\ \hline\hline\\[-1ex] \rowcolor{LightCyan} \textbf{Mean} & 0.7820 & 0.7407 & 0.1923 & 0.7799 & 0.7630 & 0.1701 & 0.8002 & 0.7737 & 0.1769 \\ \hline\\[-1ex] \textbf{S.D.} & 0.0387 & 0.04255 & 0.0417 & 0.0603 & 0.0477 & 0.0309 & 0.0683 & 0.0752 & 0.0379 \\ \hline\hline\\[-5pt] \end{tabular} \label{table:TABLE 2:Experiment 2} \end{table*} \subsubsection{Experiment 2 (AOMI Task)} Table 3 lists the decoding results of signals corresponding to AOMI task commands. Compared to the MI task commands, AOMI task commands achieved superior accuracy. AOMI task commands were successful in 60 out of the total 63 cases, with only three cases of failure. Let us compare some specific results between the MI and AOMI experiments. For instance, consider the Table 2 entries for Subject 1 during Session 2 $\{t_{right}\}$, and Session 3 $\{t_{left}\}$, when the classifier accuracies were in the $\Theta_{worst}$ region. In contrast, the corresponding entries of Table 3 show much improvement in the AUC values from the AOMI experiments. By comparative analysis, we conclude the following. First, the average AUC values in the AOMI experiment were not significantly better than those of pure MI tasks. However, individual comparisons show improvements by subjects between sessions. Second, the standard deviations of AUC values were in a range similar to that of the previous experiment. Lastly, dominating inter-subject and inter-session variability was observed in the AUC values of both the MI and AOMI experiments. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{f5.pdf} \caption{Representative cortical mapping of oxy-Hb level changes related to MI and AOMI tasks. The data were obtained from Subject 1 over three sessions.
The color scale indicates concentration changes in oxy-Hb in terms of t-values.} \label{fig:mapping} \end{figure*} \subsubsection{Brain Mapping Analysis} We assumed that the classifier performances were affected by the variability in the task-relevant cortical activation areas. Because the exact locations of the task-relevant channels were not always the same, we further analyzed a topographic cortical mapping of task-relevant oxy-Hb level changes by using a general linear model (GLM) algorithm explained in \cite{schroeter2004}. The significance thresholds for the statistical parametric maps were set to $p<0.05$. We separated the topographic map into nine regions of interest according to the functional anatomy of the premotor and prefrontal regions, including the sensorimotor cortex (SMC), supplementary motor area (SMA), presupplementary motor area (preSMA), dorsal premotor cortex (PMC), and dorsolateral prefrontal cortex (PFC). The right lateral SMC was covered by Channels 1, 7, 8, and 9; the left lateral SMC by Channels 5, 6, 11, 12, and 13; the SMA by Channels 16, 17, 22, and 23; the preSMA by Channels 35, 36, 42, and 43; the right PMC by Channels 14, 15, 20, and 28; the left PMC by Channels 18, 19, 24, 25, and 26; the left PFC by Channels 27, 33, 34, 40, and 41; and the right PFC by Channels 31, 32, 37, 38, 39, 44, and 45. \begin{table*}[ht] \caption{Analysis methods used in previous NIRS-BCI studies. The asterisks denote online classification results} \centering \begin{tabular}{l| l| l| l| l} \hline\hline\\[-5pt] \\[-2ex] Author (Ref) & Brain region & Input features & Classifier & Performance \\ \hline\\[-1ex] \textbf{Our study} & Prefrontal, & OxyHb & Ensemble SVM &\textbf{76\%}-\textbf{93\%}$^{*}$ \\ ~ & Sensorimotor & PCA & Classifiers & ~ \\ ~ & cortex & & & ~ \\ \hline \\[-1ex] Abdelnour (\cite{abdelnour2009}) & Motor cortex & OxyHb after & Linear discriminant & 68\%-100\%$^{*}$ \\ ~ & ~ & Kalman filtering & analysis (LDA) & ~ \\ \hline\\[-1ex] Coyle (\cite{coyle2004b}) & Motor cortex & mean OxyHb & OxyHb amplitude & 80\%$^{*}$ \\ ~ & ~ & & threshold detector & ~ \\ \hline \\[-1ex] Utsugi (\cite{utsugi2007}) & Prefrontal & OxyHb & Artificial Neural & 70\%-90\%$^{*}$ \\ ~ & cortex & ~ & Networks & ~ \\ \hline\\[-1ex] Abibullaev (\cite{abibullaev2012}) & Prefrontal & OxyHb wavelet & Linear discriminant & LDA 81\%-95\%,\\ ~ & cortex & coefficients & analysis, Artificial & ANN 69\%-91\% \\ ~ & ~ & ~ & Neural Networks, SVM & SVM 94\%-97\% \\ \hline\\[-1ex] Coyle (\cite{coyle2004a}) & Motor cortex & mean OxyHb & Simple threshold & 75\% \\ ~ & ~ & of 20sec data & detector & ~ \\ \hline\\[-1ex] Cui (\cite{cui2010}) & Motor cortex & different features & Support Vector & 70\% - 90\% \\ ~ & ~ & OxyHb, deOxyHb & Machines & ~ \\ \hline\\[-1ex] Fazli (\cite{fazli2012}) & Prefrontal, & EEG\& fNIRS & Linear Discriminant & 78.6\%-92.9\% \\ ~ & Motor cortex & Hybrid features & Analysis & ~ \\ \hline\\[-1ex] Sassaroli (\cite{sassaroli2008}) & Prefrontal & OxyHb, deOxyHb & K-means & 55.6\%-72.2\% \\ ~ & cortex & raw features & algorithm & ~ \\ \hline\\[-1ex] Sitaram (\cite{sitaram2007}) & Frontal & OxyHb intensity & Hidden Markov & SVM 73\% \\ ~ & cortex & ~ & Models (HMM), SVM & HMM 89\% \\ \hline\\[-1ex] Tai (\cite{tai2009}) & Prefrontal & OxyHb intensity & LDA, SVM & 75\%-96\% \\ ~ & cortex & ~ & ~ & ~ \\ \hline\\[-1ex] Truong (\cite{truong2009}) & Prefrontal & OxyHb wavelet & Artificial Neural & 95\% \\ ~ & cortex & decomposition & Networks (ANN) & ~ \\ \hline \hline \end{tabular}
\label{table:TABLE 4} \end{table*} The location of the Cz reference point is represented by Channel 10 (see Fig. 2(A)). Fig. 5 shows a distinct cortical activation pattern reconstructed from data on Subject 1 across different sessions for both MI and AOMI tasks. Task-relevant increases of oxy-Hb were prominent in the prefrontal regions but were strongly dependent on the task type and the session type. For instance, with repetition of the session the increase of oxy-Hb appeared to intensify for the channels covering the right and left PMC for both MI and AOMI while performing the $\{t_{left}\}$ task. The oxy-Hb was augmented in the channels covering the SMA and remained unchanged in the channels covering the left SMC. For the MI-based $\{t_{right}\}$ task, we observed the reverse case; that is, with repetition of the session the oxy-Hb concentration levels were observed to decrease in the pre-SMA and PFC regions. By using Algorithm~\ref{alg:rce} each time, we select for a classifier only those task-relevant channels with higher activations. Therefore, each time the locations of task-relevant channels vary, the performance of a classifier is affected. In general, cortical activation in the pre-SMA and PFC remained relatively unchanged among most of the subjects within a session. We visually inspected all changes in the regional activation with respect to the subject, task, and session type. We briefly summarize our findings of the mapping analysis as follows: \begin{itemize} \item A session-dependent cortical activation was seen for both MI and AOMI tasks. \item In some subjects, the cortical activation levels increased with the number of sessions. \item The AOMI task produced higher cortical activation than the MI task. The major activation locations for the AOMI task included the PFC, PMC, SMA, and pre-SMC regions. \item The effect size calculated by using oxy-Hb levels showed no significant difference in either the $\{t_{right}\}$ task $(p = 0.203)$ or the $\{t_{left}\}$ task $(p = 0.535)$. In terms of the time course of oxy-Hb changes during the AOMI period, the two tasks showed comparable intervals between task onset and the peak of oxy-Hb. \item No strong correlation between the MI and AOMI tasks was observed. The effect of preparation on the increases in oxy-Hb level during the MI task and AOMI task was evaluated by calculating effect sizes. In terms of oxy-Hb levels, a one-way ANOVA showed a significant main effect for site during both the MI period $(p < 0.05)$ and the AOMI period $(p < 0.05)$. \end{itemize} \section{Discussion} \label{sec:discussion} We have shown that it is possible to command a haptic device to move in opposing directions by detecting oxy-Hb signals read from a NIRS-BCI system. This study proposes that such capability can be used for neurorehabilitation to induce brain plasticity in stroke patients, or possibly provide them with some degree of self-sufficiency. MI and AOMI task commands were implemented in both online and offline modes. Feature extraction and channel localization reduced noise in the input signals for classification by multiple SVMs. The online BCI classification of pure motor imagery tasks was 76\% accurate on average, and we observed a significant improvement in BCI accuracy of up to 93\% when using signals from the AOMI task compared to pure MI. Compared to other studies, ours has obtained improved classification rates, as shown in Table 4.
Note that only a few online classification results achieved performances in the range of 70\%-90\% \cite{coyle2007}, \cite{utsugi2007}. Except for the results of Abdelnour et al. \cite{abdelnour2009}, whose online classification rate ranges from 68.8\% to 100\%, our work showed better results. However, it is noted that \cite{abdelnour2009} used real finger tapping, which is more discriminable than a pure mental task. The methodology proposed in this paper differs from that in the other studies by virtue of the following attributes: \begin{itemize} \item We use 45-channel recordings which cover the most important regions of the brain cortex (SMC, SMA, and PFC). This is in contrast with other NIRS-BCI methods, which usually cover only limited locations of the brain cortex. We then perform an automated channel selection method which allows us to localize the 20 most task-relevant channels for subsequent classification. Using multiple channels can be seen as a disadvantage, in general. However, our motivation was to accurately localize task-relevant channels each time subjects perform a specific mental task. Further, because we conduct two different experiments with new tasks (imaginary directional movement tasks are not common in BCI research), there was a need to investigate multiple channel recordings. Moreover, before each online experiment we perform a channel localization within minutes, in each session and for each subject. Such localization helps to capture the session- or subject-specific neural activation within the few relevant channels. For instance, we have noticed that the classifier uses different channel combinations to discriminate between the imaginary rightward movement and the imaginary leftward movement. If a fixed set of channels over a specific brain region were used, the classification accuracy could be very unsatisfactory. This mechanism allowed us to automatically switch to the most task-relevant channels among the various mental tasks. \item We performed different BCI experiments that included directional hand movement tasks based on pure motor imagery (MI) tasks and combined action observation and motor imagery (AOMI) tasks. We found that AOMI tasks were more classifiable than the pure MI tasks. The directional movement tasks are designed with application in stroke rehabilitation physical therapy in mind, which is envisaged to combine BCI with therapeutic devices for upper limb exercises. We extracted the tasks after reviewing the important tasks used in stroke rehabilitation to improve the activities of daily living (ADL). Examples of other tasks include ``reaching'', ``pulling'', ``flexion'' and ``extension'' of the upper limb. Among them, in our initial phase we implemented ``leftward'' and ``rightward'' movements. The haptic device was just a test platform; however, it can easily be replaced with available upper limb physical therapy devices (e.g., MIT Manus). A limitation of our study is that we examined only healthy subjects. Moreover, the number of subjects was limited to seven, because our initial goal was to verify the feasibility of our BCI approach. Nonetheless, we measured data from an extensive number of sessions to support the potential of the present approach. Another important question is whether the AOMI task is effective for neurorehabilitation (e.g., improved cortical reorganization or neuroplasticity). We plan to study this question in future work.
Due to the possible applications of our study, we put a higher priority on detecting MI tasks in a direct (non-interpretive) way to provide natural BCI outputs. \item We presented a different classification approach which is robust against the major BCI classification problems. Because we optimized the classifiers in the offline setting using data from as many sessions as possible, they performed robustly throughout the online experiments. There were only a few exceptions with lower classification results. To date, most NIRS-BCI studies have used standard classifiers such as SVMs, LDA, HMMs, or ANNs, as seen in Table 4. This study presents another classifier which achieves higher accuracies by using a multiple learning strategy. However, it is not appropriate to compare and judge research results in terms of classification accuracy alone, because many factors influence the difficulty and the accuracy in a particular BCI study. For instance, such factors include the type of BCI paradigm and whether the NIRS signal characteristics used are raw, preprocessed, or transformed. In addition, the type of system used to acquire the NIRS signals is a factor. In our study, the signal sampling frequency was 14.28 Hz, whereas other NIRS-BCIs use signals with sampling rates from 2 Hz to 10 Hz. The lower the sampling frequency, the lower the signal quality and the harder the extraction of the true neural signals from background noise. \end{itemize} One known limitation of the present NIRS-BCI approach is the delay in operating a haptic device because of the intrinsic latency of the brain hemodynamics. With an EEG-BCI system, an operation can be performed within a few milliseconds. We plan to experiment with two possible ways of overcoming the slowness of the NIRS-BCI in the next research step. The first is based on exploring fast hemodynamic responses, as was done by Cui et al. \cite{cui2010}. The other is to develop a hybrid BCI paradigm that combines EEG and NIRS signals for rapid detection of mental state, as in Fazli et al. \cite{fazli2012}. In addition, we plan to extensively study the influence of various feedback types (visual, auditory, or haptic) and their effects on the improvement of the overall classification accuracies. In general, the NIRS-BCI may not be suitable for a fast translation of mental intent; however, we believe that it has potential for neurorehabilitation and motor learning of post-stroke patients, which involve slow operations.
\section{Introduction}\label{sec:intro} \ytableausetup{boxsize=2em} It has been well established that the existence of a UV complete quantum gravity theory puts strict constraints on the set of quantum field theories that can appear at low energies. Such consistency conditions are said to divide quantum field theories into those in the Landscape (consistent when coupled to gravity) and those that are inconsistent, which are said to belong to the Swampland. The Swampland program \cite{Vafa:2005ui} attempts to identify such conditions independently of the particular UV theory. Therefore, it is interesting to evaluate the validity of the String Lamppost Principle, i.e. the idea that all consistent quantum gravitational theories are part of the string theory landscape. An interesting first question one could ask is whether the string Swampland condition that there is an upper bound on the number of massless modes \cite{Vafa:2005ui} is a consequence of the consistency of arbitrary theories of quantum gravity. This conjecture is motivated by the fact that only a finite number of Calabi-Yau compactifications \cite{Yau:1991} are believed to exist for a given dimension and amount of supersymmetry, hence leading to a finite set of supersymmetric quantum gravitational theories with a bounded set of massless modes. To try to answer this question we can first look at the simplest examples of such theories. Our first stop is the maximally supersymmetric theories with 32 supercharges, whose massless modes are determined by supersymmetry considerations and lead to a finite massless spectrum. In fact, all these theories are realized in string theory and hence provide an example of the String Lamppost Principle (SLP). The next stop is supersymmetric theories with 16 supercharges \cite{Adams:2010zy,Green:1984sg,Kim:2019aa}, which were recently shown \cite{Kim_2020d} to enjoy an upper bound on their rank, $r\leq 26-d$, a bound also first suggested by string theory. This completes the finiteness argument for the number of massless modes in supergravity theories with 16 supercharges. To further check this finiteness hypothesis, it is natural to move to our next stop: theories with 8 supercharges. This amount of supersymmetry first appears in 6 dimensions, which is also the simplest dimension to study because of the constraints imposed by chiral anomaly cancellation. Over the past decades, much effort in analyzing the Landscape of such theories \cite{Kumar_2010,Taylor_2019, Kumar_2009, Kumar_2011, Morrison_2012, Morrison_2012b, Kumar_2010b, raghuram2020automatic, Kim:2019aa, Lee_2019, Park_2012} has led to a better, though not yet complete, understanding of the possible consistent theories. As mentioned above, it is crucial to ask whether at least the boundedness of the number of massless modes holds in such theories. In this paper we will first summarize the classes of potential theories for which this is known to be true and then extend the arguments to conclude that, at least for all the proposed classes, the number of massless modes is bounded. A second step towards understanding the SLP is to ask whether all theories in the Landscape can also be constructed in string theory. In particular, in the case of 32 supercharges one can show that the consistent theories are unique in each dimension (except in 10d, where there are IIA and IIB) and can be constructed in string theory.
In theories with 16 supercharges, progress has been made in showing that the ranks of gauge groups that can consistently appear \cite{Cveti__2020,Dierigl_2021} tend to match those coming from string theory. In particular, in 9 dimensions the only ranks that appear are $1,9$ and $17$, while in 8 dimensions the only ranks that appear are $2,10$ and $18$. In \cite{AlvarezGaume:1983ig} it is shown that indeed the rank should be odd in 9d and even in 8d; otherwise the theory would suffer from a global gravitational anomaly. More refined work was done in \cite{Montero_2021} to explain this exact pattern of numbers using the cobordism conjecture. The next question one could ask is which specific gauge groups can appear and whether they match those coming from string theory. In particular, in \cite{Garc_a_Etxebarria_2017} it is shown that the $f_4$ and $so(2N+1)$ gauge algebras, which have no string theory constructions, are in fact anomalous in 8d. More recently, \cite{hamada20218d} showed, by considering 3-brane probes, that the gauge algebra $g_2$, which also has no string theory construction, in fact belongs to the Swampland. A further analysis of possible gauge groups in 8d has been conducted in \cite{Cveti__2020,font2021exploring,Font_2020}. Moreover, it is natural to ask whether this is also the case for theories with 8 supercharges. We thus need to understand more conditions to further constrain the Landscape. In this work we add another consistency condition that these quantum field theories need to satisfy, coming from the completeness of spectrum hypothesis. In particular, we identify constraints, arising from unitarity of the 2d worldsheet theories of BPS strings, on the types of bulk matter representations that can appear. This is interesting because, apart from anomaly arguments, there are no previous analyses that constrain the type of matter that can appear. Of course, in the string landscape one sees strong restrictions on the possible matter content coming from F-theory \cite{Klevers_2017,Kumar_2010b, Morrison_2012f}. The organization of this paper is as follows: In section 2 we review known consistency conditions for 6d ${\cal N}=1$ supergravity theories. In section 3 we review known examples and construct new classes of potential anomaly-free 6d theories with 8 supercharges, with an eye towards families which are not bounded in the number of massless modes. We then show that all such theories are restricted to a finite range based on unitarity of BPS string probes. In section 4 we discuss novel restrictions coming from unitarity of BPS string probes on the matter representations and use them to rule out one theory which did not have any known string theory construction. In section 5 we discuss some directions for future research. Lastly, some technical aspects are presented in the Appendices. \section{Review of 6d $\mathcal{N}=1$ Supergravity }\label{sec:1} \ytableausetup{boxsize=0.7 em,aligntableaux = center} In this section we review various features of 6d $(1,0)$ supergravity theories. In addition, we provide a review of the conditions that have been conjectured to be necessary for the consistency of these theories. The set of all such conditions provides Swampland constraints that severely limit the possible quantum field theories that could arise in consistent quantum gravity theories.
\textbf{Anomaly Cancellation consideration:} A six-dimensional supergravity with 8 supercharges consists of four types of massless supermultiplets: a gravity multiplet, vector multiplets, tensor multiplets and hypermultiplets. The chiral fields of these multiplets contribute to the anomalies of the theory, which are characterized by an 8-form anomaly polynomial $I_8$. Such anomalies can be cancelled by the Green-Schwarz-Sagnotti mechanism \cite{Sagnotti:1992qw} if the anomaly polynomial $I_8$ factorizes as \begin{eqnarray} I_8(R,F)={1\over 2 }\Omega_{\alpha\beta}X^\alpha_4 X^\beta_4, \ \ X_4^\alpha={1\over 2 }a^\alpha trR^2 +\sum_ib_i^\alpha {2\over \lambda_i }trF_i^2 \end{eqnarray} where $a^\alpha, b_i^\alpha$ are vectors in $\mathbb{R}^{1,T}$, $\Omega_{\alpha \beta }$ is the metric on this space and $ \lambda_i $ are normalization factors of the gauge groups $G_i$. The anomaly factorization conditions for gravitational, gauge and mixed anomalies are summarized as follows: \begin{itemize} \item $\itemEq{\label{eqn:R4} R^4: \ \ H-V=273-29T}$ \item$ \itemEq{\label{eqn:F4}F^4: \ \ 0=B^i_{Adj}-\sum n_R^i B^i_R}$ \item $\itemEq{\label{eqn:R22}(R^2)^2: \ a\cdot a=a^\alpha\Omega_{\alpha \beta }a^\beta =9-T}$ \item $\itemEq{\label{eqn:F2R2} F^2R^2: \ a\cdot b_i=a^\alpha\Omega_{\alpha \beta }b_i^\beta ={1\over 6 }\lambda_i (A^i_{Adj}-\sum_Rn_R^iA^i_R)} $ \item $\itemEq{\label{eqn:F22}(F^2)^2: \ b_i \cdot b_i =b_i^\alpha\Omega_{\alpha \beta }b_i^\beta ={1\over 3}\lambda_i^2 (\sum_R n_R^iC^i_R-C^i_{Adj}) }$ \item $\itemEq{\label{eqn:F2F2}F^2_iF^2_j: \ b_i \cdot b_j=b_i^\alpha\Omega_{\alpha \beta }b_j^\beta = \sum_{R,S}\lambda_i \lambda_jn_{RS}^{ij}A^i_RA^j_S \ \ \ i\neq j }$ \end{itemize} where $H, V, T$ denote the numbers of hypermultiplets, vector multiplets and tensor multiplets in the theory, respectively. The number $n_R^i$ represents the number of hypermultiplets in the representation $\textbf{R}$ of the gauge group $G_i$, and $A_R^i,B_R^i,C_R^i$ are the following group theory coefficients: \begin{eqnarray} tr_{\text{R}}F^2=A_R trF^2, \quad tr_{\text{R}}F^4=B_R trF^4+C_R(trF^2)^2 \end{eqnarray} The values of these coefficients for various representations, together with the normalization factors $\lambda_i$, are summarized in \cite{Kumar_2009}. In addition, as shown in \cite{Kumar_2010}, the vectors $a^\alpha,b_i^\alpha \in \mathbb{R}^{1,T}$ are constrained to have integer inner products $ a\cdot a , a\cdot b_i , b_i \cdot b_j \in \mathbb{Z}$ with respect to the bilinear form $\Omega_{\alpha \beta}$; we call this the anomaly lattice. The anomaly lattice, as described in \cite{Seiberg_2011}, needs to be embedded in the full string lattice of the 6d supergravity. Moreover, it was shown in \cite{Monnier_2019} that the vector $a$ is a characteristic vector of the lattice $\Gamma $, meaning that for any $x\in \Gamma$ we have $a\cdot x+x^2\in 2\mathbb{Z}$. \textbf{Moduli space consideration:} The moduli space of the 6d $(1,0)$ supergravity locally takes the form $SO(1,T)/SO(T)$, parametrized by a vector $j^\alpha \in \mathbb{R}^{1,T}$ with positive norm $j \cdot j >0$, reflecting the positivity of the metric on the moduli space. As discussed in \cite{Kumar_2010,Sagnotti:1992qw}, consistency of the theory requires $j \cdot b_i >0, \ j\cdot a<0$. The first set of conditions is required for the positivity of the gauge kinetic terms, and the latter condition is associated with the positivity of the Gauss-Bonnet term in gravity \cite{Cheung_2017,Hamada_2019}, which has been conjectured to hold.
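As a cross-check of the factorization conditions above, the $F^4$, $F^2R^2$ and $(F^2)^2$ data can be evaluated symbolically. The sketch below uses the standard $SU(N)$ group theory coefficients with normalization $\lambda=1$ (as tabulated in \cite{Kumar_2009}); it is meant purely as an illustration of the formulas, not as part of any derivation in this paper.
\begin{verbatim}
from sympy import symbols, simplify, Rational

N = symbols('N', positive=True)

# SU(N) data (lambda = 1): (dim, A_R, B_R, C_R) with
# tr_R F^2 = A_R tr F^2 and tr_R F^4 = B_R tr F^4 + C_R (tr F^2)^2.
reps = {
    'fund':    (N,             1,     1,     0),
    'adj':     (N**2 - 1,      2*N,   2*N,   6),
    'antisym': (N*(N - 1)/2,   N - 2, N - 8, 3),
    'sym':     (N*(N + 1)/2,   N + 2, N + 8, 3),
}

def anomaly_data(matter):
    """matter: dict rep -> multiplicity n_R (may depend on N)."""
    dim_adj, A_adj, B_adj, C_adj = reps['adj']
    sA = sum(n * reps[r][1] for r, n in matter.items())
    sB = sum(n * reps[r][2] for r, n in matter.items())
    sC = sum(n * reps[r][3] for r, n in matter.items())
    H_ch = sum(n * reps[r][0] for r, n in matter.items())
    return {'F4 (must vanish)': simplify(B_adj - sB),
            'a.b':  simplify(Rational(1, 6) * (A_adj - sA)),
            'b.b':  simplify(Rational(1, 3) * (sC - C_adj)),
            'H_ch - V': simplify(H_ch - dim_adj)}

# SU(N) with (N+8) fundamentals and one antisymmetric tensor:
# F4 vanishes, a.b = -1, b.b = -1, H_ch - V = N**2/2 + 15*N/2 + 1.
print(anomaly_data({'fund': N + 8, 'antisym': 1}))
\end{verbatim}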
\textbf{BPS string consideration:} The 6D theory has gravity/gauge dyonic strings with charges $a,b_i$. These charges span the anomaly lattice, which is contained in the full string lattice of the 6d theory. Therefore, as discussed in \cite{Seiberg_2011}, the anomaly lattice is required to have a unimodular embedding into a self-dual lattice, and this fact provides a constraint on possible theories. Furthermore, the existence of the two-form fields $B_2^\alpha$ implies the existence of string sources, in accordance with the hypothesis that the spectrum of a gravitational theory needs to be complete \cite{Banks:2010zn,Polchinski:2003bq}. Therefore, according to \cite{Kim:2019aa}, a non-instantonic BPS string with charge $Q$ and non-negative tension provides the following constraints: \begin{eqnarray}\label{uni} \begin{matrix} &j\cdot Q\geq 0, Q\cdot Q\geq -1\\ & k_\ell =Q\cdot Q+Q\cdot a+2 \geq 0, \ k_i=Q\cdot b_i\geq 0 \\& \ \sum_i c_{G_i}\leq c_L=3Q\cdot Q-9 Q\cdot a+2\end{matrix} \end{eqnarray} where $k_i$ is the level of $G_i$ and $c_{G_i}$ the central charge associated with the current algebra of $G_i$. In addition, $k_\ell $ is the level of the current algebra associated with $SU(2)_\ell$, which arises from the normal bundle $SO(4)=SU(2)_R\times SU(2)_\ell$ of the transverse $\mathbb{R}^4$, where $SU(2)_R$ is the R-symmetry of the IR (0,4) SCFT and $SU(2)_\ell$ appears as a left current algebra. \textbf{Geometric conditions:} Lastly, we have conditions that arise from string theory considerations and do not seem to have an obvious independent origin. For example, in F-theory it is required that the vectors $a, b_i$ satisfy the Kodaira condition \cite{Kumar_2010}: \begin{eqnarray} j\cdot (-12 a -\sum_i \nu_i b_i)\geq 0 \end{eqnarray} where $\nu_i$ is the multiplicity of the respective singularity, or equivalently the number of 7-branes needed for the non-abelian gauge group $G_i$ (e.g. $\nu=N$ for $SU(N)$). Additional constraints are imposed from F-theory considerations regarding the irreducibility and effectiveness of divisors. Moreover, in F-theory $a$ is primitive for all odd lattices, and for $b_i^2\leq 0 $ also $b_i$ is primitive; in that case the former can also be brought to the form $a=(-3,1^T)$ \cite{Kumar_2010}. \section{Towards a finite Landscape } \label{sec:finite Landscape} \ytableausetup{boxsize=0.7 em,aligntableaux = center} In \cite{Kumar_2009,Kumar_2010} it was shown that a large subset of all possible distinct combinations of non-abelian gauge groups and matter representations that can appear in a 6d ${\cal N }=1$ supergravity is finite for $T<9$. However, their arguments in some cases do not generalize to $T\geq 9$. In particular, \cite{Schwarz_1996,Kumar_2009} provide 5 potentially infinite families with two simple gauge group factors that are not constrained to have an upper bound on the number of massless modes, and 3 with three simple gauge factors, for $T\geq 9 $. We will argue that these theories are in fact restricted to a finite subset, and we will extend the finiteness condition to more classes of non-abelian theories. In this section, we will be making some useful assumptions which we wish to start by justifying. Firstly, as discussed in section \ref{sec:1}, a 6d ${\cal N }=1$ supergravity contains two-form fields which could imply the existence of string sources.
In particular, the completeness of spectrum hypothesis requires that all charges compatible with the Dirac quantization condition appear in the theory \cite{Polchinski_2004} and form the string lattice $\Gamma$ of the 6d theory. This statement can be supported by studying black holes \cite{Banks_2011} or in the context of AdS/CFT \cite{harlow2019symmetries}. Moreover, we can generalize this statement and argue that the lattice of all states should be generated by BPS states, because any black hole in the theory could eventually decay to a collection of BPS/anti-BPS states and hence these charges should be in the lattice too. Even though this is a heuristic argument, it provides motivation for the assumption. Therefore we will assume that each charge in $\Gamma$ is a $\mathbb{Z}$-linear combination of the BPS charges, and hence that they generate the lattice. In fact, we believe this assumption is more general than the setup we are studying in this paper: namely, the lattice of allowed charges is generated by BPS generators in all cases. We are not aware of any counterexamples to this assumption in the string landscape. Therefore, the assumption we will be using can be summarized as follows: \begin{framed} The string charge lattice $\Gamma $ always has a basis of BPS charges that span the entire lattice. \end{framed} Secondly, another assumption we will be making is: \begin{framed} There are only finitely many inequivalent theories with a given gauge group $G$ and matter $M$. \end{framed} Although we do not have a proof of this statement, it constitutes a reasonable physical assumption. It would be rather strange for a fixed low-energy matter content to be represented by infinitely many inequivalent theories. Lastly, apart from the above assumptions, we will also be using the fact that if a particular theory has enough matter to be Higgsed, then the string lattice $\Gamma$ of that supergravity is not affected by the process. This is because the Higgsing process only involves the hypermultiplets and vector multiplets of the theory and does not affect the tensor multiplets; hence the dyonic string charge lattice $\Gamma$ should remain unaffected. In addition, one should note that the vectors $a,b_i$ provide the coupling of the two-form $B$ fields to the spacetime curvature, $B\cdot a\, trR^2$, and the coupling to the field strengths, $B\cdot b_i\, trF_i^2$. Therefore, since the 6d theory contains strings of charges $b_i$ associated to gauge instantons \cite{Duff_1996}, we know that $b_i$ should belong to $\Gamma$. Similarly, it has been argued that the vector $a$, corresponding to a gravitational instanton string, should also belong to the lattice and hence should also be unaffected by the Higgsing process. We can now move on to constructing the potentially infinite families which we wish to exclude. Let us recall that in order to construct infinite families of unbounded size one can start by identifying gauge groups that satisfy the $trF^4$ anomaly conditions at arbitrarily large rank. The simple gauge groups that can have unbounded dimension are $SU(N),SO(N), Sp(N/2)$. For example, a theory with an $SU(N)$ factor should satisfy: \begin{eqnarray} B_{adj}=2N=\sum_R n_R B_R \end{eqnarray} As discussed in \cite{Kumar_2009,Kumar_2010}, for large $N$ the only representations that can appear have $B_R$ at most linear in $N$. These are the fundamental, adjoint, two-index antisymmetric and symmetric representations.
The set of possible such theories, including the groups $SO(N),Sp(N/2)$, is summarized in Table \ref{table:1}. \begin{table}[h!] \begin{tabular}{|l|l|l|} \hline Group &Matter & $H-V$\\ \hline $SU(N)$ & \begin{tabular}[c]{@{}l@{}} $1\ \text{Adj}$ \\ $1\ \ydiagram{ 2}+ 1 \ \ydiagram{ 1,1}$ \\ $2N\ \ydiagram{ 1}$ \\ $(N+8) \ \ydiagram{ 1}+1 \ $ { \ydiagram{ 1,1}}\\$ (N-8)\ \ydiagram{ 1}+1 \ \ydiagram{ 2}$\\ $16\ \ydiagram{ 1}+ 2 \ \ydiagram{ 1,1}$\\\end{tabular} & \begin{tabular}[c]{@{}l@{}} $ 0 $ \\ $1$ \\ $N^2+1$ \\ ${1\over 2}N^2+{15\over 2}N+1 $ \\ ${1\over 2}N^2-{15\over 2}N+1 $\\ $15N+1$\\\end{tabular} \\ \hline $SO(N)$ & $(N-8)\ \ydiagram{ 1}$ & $ {1\over 2} N^2-{7\over 2}N $ \\ & \ $1\ \ydiagram{ 1,1}$ & $ 0 $ \\ \hline $Sp(N/2)$ & \begin{tabular}[c]{@{}l@{}}$(N+8)\ \ydiagram{ 1} $\\ $16 \ \ydiagram{ 1}+1 \ \ydiagram{ 1,1}$\end{tabular} & \begin{tabular}[c]{@{}l@{}}$ {1\over 2} N^2+{7\over 2}N $ \\ $15N-1$\end{tabular} \\ & \ $1\ \ydiagram{ 2}$ & $ 0 $ \\ \hline \end{tabular} \caption{Most theories have $H-V\to \infty $ as $N\to \infty $, except those with $H-V=0, 1 $, for which $T\leq 9 $ and there is no obstruction to having an infinitely large gauge group from anomalies alone.} \label{table:1} \end{table} In particular, we note from Table \ref{table:1} that the only theories that satisfy the gravitational anomaly for arbitrary $N$ are $SU(N) \text{ with }\text{Adj} / 1\ydiagram{2}+1\ydiagram{1,1}$ and $SO(N)/Sp(N/2)$ with $\ydiagram{1,1}/\ydiagram{2} $, with $T\leq 9$. As discussed in \cite{Kumar_2010}, the cases with $T<9$ are excluded since there is no solution for the vectors $a,b$ satisfying $a^2>0,b^2=0, a\cdot b =0$. However, for $T=9$ both $a,b$ are null vectors with $a\cdot b =0$ and hence parallel, i.e. $b=\lambda a $ with $\lambda <0$ (such that $j\cdot a<0 \text{ and }j\cdot b>0$). Specifically, in this case it is simple to find solutions $a,b$, and in fact such an example is constructed later in this section. Therefore, for $T=9$ this theory constitutes a potentially infinite family with unbounded size. \begin{table}[h!] \begin{tabular}{|l|l|} \hline $SU(N)\times SU(N)$ &2 (\ydiagram{1},${\ydiagram{1}}$) \\ \hline $SO(2N+8)\times Sp(N)$ & $(\ydiagram{ 1},\ydiagram{1})$ \\ \hline $SU(N)\times SO(N+8)$ & $(\ydiagram{ 1},\ydiagram{1})$ $+(\ydiagram{ 1,1},1)$ \\ \hline $SU(N)\times SU(N+8)$ & $(\ydiagram{ 1},\ydiagram{1})$ $+(\ydiagram{ 1,1},1)$ $+(1,\ydiagram{ 2})$ \\ \hline $Sp(N)\times SU(2N+8)$ & $(\ydiagram{ 1},\ydiagram{1})$ $+(1,\ydiagram{ 2})$ \\ \hline \end{tabular} \caption{Potentially infinite families with two simple gauge group factors. } \label{table:2} \end{table} Next we consider theories with gauge groups of the form $G_1\times G_2$ with $G_i$ drawn from Table \ref{table:1}. In \cite{kumar2009string,Kumar_2009,Schwarz_1996}, 5 potentially infinite families with arbitrarily large dimension are identified, given in Table \ref{table:2} and composed of two simple gauge factors from Table \ref{table:1}. The expectation is that even though each individual factor may not satisfy the gravitational anomaly, we can arrange for matter charged under both gauge groups to reduce $H-V$ enough to make it possible. Furthermore, it is important to note that the only matter charged under two gauge groups is bifundamental matter.
This can be justified by considering the fact that for 6d ${\cal N}=1$ gauge theories all theories are Higgsable until one reaches the Non-Higgsable Clusters (NHCs) \cite{Morrison_2012} or the gauge group gets completely Higgsed away; hence we expect that any family of theories should be Higgsable to some minimal gauge group. As discussed earlier, Higgsing does not affect the string lattice and consequently the vectors $b_i$ of the instantonic strings of the gauge theory, which implies that their inner products, defined through the anomaly cancellation condition (\ref{eqn:F2F2}), should be independent of the size $N$ of the gauge group which gets reduced by the process. In particular, for two vectors $b_1,b_2$ the inner product is given by $b_1\cdot b_2=\sum_{R,S}\lambda_i \lambda_j n^{ij}_{RS}A^i_RA^j_S$, which as noted in \cite{Kumar_2009, Kumar_2010} can only be independent of $N$ if both $R$ and $S$ are the fundamental representations. More specifically for the theories of Table \ref{table:1}, one can see that no theory has enough matter to gauge any of the $Adj, \ydiagram{2},\ydiagram{1,1}$, because for example a theory of the form $SU(N)\times G_2(N)$ with $(Adj,R )$ matter would require $SU(N)$ to have $\dim(R)$ copies of the $Adj$ representation, but any such theory has at most one. Therefore, no $G_i$ factor can be $SU(N) \text{ with }\text{Adj} / 1\ydiagram{2}+1\ydiagram{1,1}$ or $SO(N)/Sp(N/2)$ with $\ydiagram{1,1}/\ydiagram{2} $. One could consider $k$ gauge groups from Table \ref{table:1} with matter charged under only one factor and constant $H-V$. But the gravitational anomaly would then become $(H_{ch}-V)k\leq 273-29 T$ with $(H_{ch}-V)\geq 0 $, hence restricting the number of factors. Therefore, we only need to focus on excluding the theories of Table \ref{table:2}\footnote{We note that we do not present the full set of theories, since exchanging matter with its conjugate when gauging it provides distinct theories, but this does not affect our calculations.}. The first theory is valid for $T\leq 9 $ and the rest for $T\leq 10$. For $T<9$ it was shown in \cite{Kumar_2010,Kumar_2009} that for none of these theories does a solution exist for $a,b_i$ satisfying all the consistency conditions studied earlier, and in particular having all $b_i$'s associated with positive kinetic terms. Similarly, for all theories except the first one, there is also no solution for the vectors $a,b_i$ when $T=9$. This is easy to verify, for example, for $SO(2N+8)\times Sp(N)$, which has vectors $a,b_i\in \mathbb{R}^{1,T}$ satisfying: \begin{eqnarray}\label{ex1} a\cdot b_1=2,\ a\cdot b_2=-1, \ b_1^2=-4, \ b_2^2=-1, \ b_1\cdot b_2=2 \end{eqnarray} There are two null vectors $a, (b_1+2b_2)$ that satisfy $a\cdot (b_1+2b_2)=0$ and hence need to be parallel, i.e. $b_1+2b_2=\lambda a \Longrightarrow b_1=\lambda a -2b_2$ for some $\lambda \in \mathbb{R}$. However, since $b_1\cdot b_2=\lambda\, a\cdot b_2-2b_2^2=-\lambda+2$ must equal $2$, we get $\lambda=0$, implying that $ j\cdot b_1=-2\, j\cdot b_2$; hence we cannot find a vector $j$ ensuring the positivity of both kinetic terms. In an identical fashion one can show the same result by considering the null vectors $a$ and $2b_1+b_2$ (for the third) or $b_1+b_2$ (for the fourth and fifth). \begin{table}[h!]
\begin{tabular}{|l|l|} \hline $SU(N-8)\times SU(N)\times SU(N+8)$ & $(\ydiagram{ 1}\otimes \ydiagram{ 1}\otimes 1)$ +$(1\otimes \ydiagram{ 1}\otimes \ydiagram{ 1})$\\ & $+(\ydiagram{ 1,1}\otimes1\otimes 1)+(1\otimes 1\otimes \ydiagram{ 2}) $\\ \hline $Sp((N-8)/2)\times SU(N)\times SO(N+8)$ & $(\ydiagram{ 1}\otimes \ydiagram{ 1}\otimes 1)$ +$(1\otimes \ydiagram{ 1}\otimes \ydiagram{ 1})$ \\ \hline $SU(N-8)\times SU(N)\times SO(N+8)$ & $(\ydiagram{ 1}\otimes \ydiagram{ 1}\otimes 1)$ +$(1\otimes \ydiagram{ 1}\otimes \ydiagram{ 1})+(\ydiagram{1,1}\otimes1\otimes1)$ \\ \hline $Sp((N-8)/2)\times SU(N)\times SU(N+8)$ & $(\ydiagram{ 1}\otimes \ydiagram{ 1}\otimes 1)$ +$(1\otimes \ydiagram{ 1}\otimes \ydiagram{ 1})+(1\otimes1\otimes\ydiagram{2})$ \\ \hline \end{tabular} \caption{Potential infinite families with three simple gauge factors. } \label{table:3} \end{table} Furthermore, as described in the Appendix, anomalies permit classes of infinite families with more than two simple gauge factors. For example, the gauge group theories described in Table \ref{table:3} were first introduced in \cite{Kumar_2010}. More generally, one can construct theories which satisfy all the anomalies with an arbitrarily large number of gauge factors. Specifically, linear chains of such theories are presented in Table \ref{table:k}. In addition, in Appendix \ref{appnon} we discuss theories that have gauge groups connected in a non-linear fashion. For example, one can construct theories where one gauge group is connected to multiple others. Specifically, in \ref{appnon} we find that a large class of these theories have inner products $b_i\cdot b_j$ corresponding to the affine ADE algebras, where each $b_i$ represents a node on the Dynkin diagram. Each $b_i$ is associated with a gauge group $SU(a^{\vee}_i N )$, where $a^\vee_i$ is the dual Coxeter label, and the matter consists of bifundamentals according to the links of the affine Dynkin diagram. However, an interesting observation is that even though anomalies permit the $A,D$ type inner products to have arbitrarily many factors, a more careful analysis shows that this is not possible. This is because the vectors $a, b_i$ form the anomaly lattice, which needs to be embedded in the string lattice $\Gamma$. The anomaly lattice is generated by at most $k+1$ vectors $a, b_i$, and the full lattice $\Gamma$ is of signature $(1,(-1)^T)$ and generated by $T+1$ vectors. This implies that $k\leq T$, and since the gravitational anomaly cancellation requires $T\leq 9 $, then $k\leq 9$. Moreover, one can show that $V=\sum_{i=1}^k b_i$ is a null vector which satisfies $a\cdot V=0$. Hence for $T<9$ we have $a^2>0$, which implies that $V=0$; but then not all $j\cdot b_i>0$ can be satisfied simultaneously. Therefore, the only case for which solutions could be found is $T=9$. Similarly, for the theories of Table \ref{table:k} with $k$ factors the anomaly lattice has signature $(1,(-1)^{k-1})$ for $T<8+k$, which means that $k\leq T+1$. Therefore, using the last column of Table \ref{table:k} one can show that all the theories have a finite number of gauge factors, and more specifically the second theory has $T\leq 138$ while the rest have $T\leq 137$. Furthermore, we note that the theories of Table \ref{table:k} with $T< 8+k$ have no consistent solutions. In particular, the first and last theories have the same anomaly lattice and hence can be considered together; the same holds for the second and third.
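For the affine $A$-type case, the nullity of $V=\sum_{i=1}^k b_i$ can be checked directly from the inner products; the sketch below (illustrative, assuming the cyclic $\hat{A}_{k-1}$ Gram data $b_i^2=-2$, $b_i\cdot b_{i+1}=1$) confirms it numerically:
\begin{verbatim}
# Sketch: for the affine A-type inner products (b_i^2 = -2 and
# b_i . b_{i+1} = 1 cyclically), V = b_1 + ... + b_k is null.
import numpy as np

k = 7                                  # any number of factors >= 3
G = -2.0*np.eye(k)                     # Gram matrix of the b_i
for i in range(k):
    G[i, (i + 1) % k] = G[(i + 1) % k, i] = 1.0
V = np.ones(k)                         # coefficients of V in the b_i basis
print(V @ G @ V)                       # -> 0.0, i.e. V^2 = 0
\end{verbatim}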
Returning to the theories of Table \ref{table:k}: for the first and last theory one can consider $T<9$ and note that $a^2>0$ and $(b_1+\cdots + b_k)^2=0$ with $a\cdot (b_1+ \cdots+ b_k)=0$, from which it follows that $b_1+ \cdots+ b_k=0$, and hence one cannot satisfy $j\cdot b_i>0$ simultaneously for all $i$. We can extend this to $8+k> T\geq 9$ by considering the following vectors: \begin{eqnarray} V_1=a+\sum _{i=k-T+9}^{k-1} b_i (i-k+T-8)+(T-9) b_k, \ V_2=\sum_{i=1}^kb_i \end{eqnarray} It is simple to verify that $V_1^2=V_2^2=V_1\cdot V_2=0 $, from which it follows that $ V_2= \lambda V_1$. Now consider the product $\underbrace{b_k\cdot V_2}_{=0}=\underbrace{b_k\cdot \lambda V_1}_{=\lambda}\Longrightarrow \lambda =0$. Therefore, $ V_2=0$ and hence not all $j\cdot b_i >0$ conditions can be satisfied. This method though does not constrain the theories that have $T=8+k$, which arise when $k\leq 6 $ for the first theory and when $k\leq 7 $ for the last, and solutions can be found as we will see later. Similarly, one can note that also for the second and third cases there are no solutions for $T< 8+k$. \begin{table}[h!] \hskip-1.0cm \scalebox{0.75}{ \begin{tabular}{|l|l|l|} \hline Gauge group& Matter & Tensors\\\hline $SU(N-8)\times SU(N)\times SU(N+8)\times \cdots \times SU(N+8(k-2))$ & $\ydiagram{ 1,1}\otimes1\cdots \otimes 1+1\otimes1 \cdots \otimes \ydiagram{ 2}$&$T\leq \frac{27 k}{29}+\frac{245}{29}$ \\ \hline $Sp((N-8)/2)\times SU(N)\times SU(N+8)\times \cdots \times SO(N+8(k-2))$& &$T\leq \frac{27 k}{29}+\frac{247}{29}$\\ \hline $SU(N-8)\times SU(N)\times SU(N+8)\times \cdots \times SO(N+8(k-2))$& $\ydiagram{ 1,1}\otimes1\cdots \otimes 1$ &$T\leq \frac{27 k}{29}+\frac{246}{29} $\\ \hline $Sp((N-8)/2)\times SU(N)\times SU(N+8)\times \cdots \times SU(N+8(k-2))$& $1\otimes1 \cdots \otimes \ydiagram{ 2}$&$T\leq \frac{27 k}{29}+\frac{246}{29}$\\ \hline \end{tabular}} \caption{Each theory has bifundamental matter between any adjacent groups and the matter indicated in the table is matter charged under only one gauge group. The last column indicates the upper bound on $T$ that the gravitational anomaly imposes.} \label{table:k} \end{table} We will now provide a general argument that restricts all the theories presented above to a finite set. The argument is based on \cite{Kim:2019aa}, where completeness of spectrum is used as evidence for the existence of BPS strings with some charge $Q=(q_1,\cdots, q_{10})$, $q_i\in \mathbb{Z}$, satisfying the consistency conditions (\ref{uni}). Those consistency conditions will then provide us with an upper bound on the size $N$ of the gauge group. All the theories above have a gauge group with a finite number of non-abelian simple gauge groups, and their size is controlled by the parameter $N$, which is not bounded by the arguments already presented. However, one can notice that each family of theories labelled by $N$ is connected through Higgsing. For example, $SU(N)+1\,\text{Adj} $ can be Higgsed to $SU(N-1)+1\,\text{Adj} $ by making $2N-1 $ full hypermultiplets massive. However, as discussed earlier the Higgsing process does not affect the string lattice, which implies that any vector in the lattice is independent of the size $N$. Specifically, considering $\{Q_i\}$ as the BPS string states that generate $\Gamma$ and satisfy the conditions (\ref{uni}), one has that these charges are also independent of $N$. This therefore implies that there should exist a minimal choice of BPS charge $Q\in \{Q_i\}$ that is also independent of $N$.
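The identities $V_1^2=V_2^2=V_1\cdot V_2=0$ claimed above are straightforward to confirm numerically from the Gram matrix of $(a,b_1,\dots,b_k)$ for the first family (displayed explicitly below); the following sketch is purely illustrative:
\begin{verbatim}
# Sketch: check V1^2 = V2^2 = V1.V2 = 0 for the first family of Table 4,
# using the Gram matrix of (a, b_1, ..., b_k) given in the text.
import numpy as np

k, T = 5, 11                          # any 9 <= T < 8 + k
G = np.zeros((k + 1, k + 1))          # index 0 is a; 1..k are the b_i
G[0, 0] = 9 - T
G[0, 1] = G[1, 0] = -1                # a.b_1
G[0, k] = G[k, 0] = 1                 # a.b_k
for i in range(1, k + 1):
    G[i, i] = -2
G[1, 1] = G[k, k] = -1                # b_1^2 = b_k^2 = -1
for i in range(1, k):
    G[i, i + 1] = G[i + 1, i] = 1     # b_i.b_{i+1} = 1
c1 = np.zeros(k + 1); c1[0] = 1       # coefficients of V1
for i in range(k - T + 9, k):
    c1[i] = i - k + T - 8
c1[k] = T - 9
c2 = np.r_[0.0, np.ones(k)]           # coefficients of V2 = b_1 + ... + b_k
print(c1 @ G @ c1, c2 @ G @ c2, c1 @ G @ c2)   # -> 0.0 0.0 0.0
\end{verbatim}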
Returning to the argument: for an infinite family drawn from the examples above, and using that $Q^2+Q\cdot a \geq -2$, the unitarity bound becomes (for at least one non-zero $k_j$): \begin{eqnarray} & k_j{\dim G_j\over k_j+h^\vee_j}\leq c_\ell+\sum_i k_i{\dim G_i\over k_i+h^\vee_i}\leq c_L=3Q^2 -9Q\cdot a+2\leq 12Q^2+20 \end{eqnarray} where $k_i$ is the level of the $G_i$ current algebra and $c_\ell ={3k_{\ell}\over 2+k_{\ell} }$ the central charge of $SU(2)_{\ell}$. We note that if $k_i\neq 0 $, then for $G_i=SU(N_i) $ we have that $N_i-1={\dim G_i\over 1+h^\vee_i}\leq k_i{\dim G_i\over k_i+h^\vee_i}$; for $G_i=Sp(N_i) $ we have that $2N_i-3\leq {\dim G_i\over 1+h^\vee_i}\leq k_i{\dim G_i\over k_i+h^\vee_i}$; for $G_i=SO(N_i) $ we have that ${N_i\over 2}={\dim G_i\over 1+h^\vee_i}\leq k_i{\dim G_i\over k_i+h^\vee_i}$; and all $N_i$ are a linear function of $N$. This implies that the left-hand side of the inequality always grows linearly in $N$. Moreover, since $Q^2$ is independent of $N$, this provides a finite upper bound for the size $N$. This is clear if there is one chain of theories related by Higgsing for arbitrarily large $N$. However, there is a slight loophole in this argument: it may be that there is no such infinite chain, but that there are infinitely many finite Higgs chains, each of which starts from a maximal $N_{max}$. Then by Higgsing them down to a given $N$ we see that for a fixed $N$ we would have infinitely many inequivalent theories with the same massless matter content, which we assumed can never happen. In the above argument we assumed that at least one of the levels $k_i$ can be chosen to be non-zero. We now argue that some $Q$ can always be chosen to have at least one non-zero $k_i$. Let us assume that there is no charge $Q$ such that $b_i\cdot Q=k_i>0$. In this case $b_i\cdot Q=0$ for any of the $Q$'s. But for any $b_i$ there exists a vector in the lattice which has a non-vanishing inner product with it, by the requirement of the self-duality of the charge lattice. However, since the $Q$'s generate the lattice this leads to a contradiction. And so there are some BPS states $Q$ with non-vanishing $k_i$. Even though our argument above does restrict the infinite families to only a finite consistent set under reasonable assumptions, it does not provide us with a concrete upper bound on the size of the gauge groups for each theory. We will therefore devote the remainder of the section to going through some of the theories presented above and finding particular solutions for $a,b_i$, such that we can illustrate using unitarity the exact upper bound for the size $N$ in those cases. Let us begin by considering the single gauge group infinite families: $SU(N)+1\text{Adj} \text{ or } 1\ydiagram{2}+1\ydiagram{1,1}$ with $T= 9$. In order to ensure that the theory is unitary the following inequality needs to hold: \begin{eqnarray} c_\ell+{k(N^2-1)\over k+N}\leq c_L \end{eqnarray} where $k$ is the level of the $SU(N)$ current algebra and $c_\ell ={3k_{\ell}\over 2+k_{\ell} }$ is the central charge of $SU(2)_{\ell}$. The inequality is strongest for a minimal choice of $c_L$, which depends on the choice of charge $Q$. Let us consider a representation of the $a,b$ vectors. In particular, in the integral basis let us take $ \Omega =\text{diag}(1,(-1)^{9})$ and $ a=(-3, 1^{9})$, and we choose a string with minimal charge $Q=(1,-1,-1,0\dots ,0)$, which gives $c_L=8$ and $Q^2=-1, k=-\lambda, k_\ell=0$.
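The enumeration of allowed pairs $(k,N)$ stated next can be reproduced with a short script (illustrative only; note that for $N\leq 3$ the bound holds for every $k$, since $k(N^2-1)/(k+N)<N^2-1\leq 8$):
\begin{verbatim}
# Illustrative enumeration of (k, N) satisfying k (N^2 - 1)/(k + N) <= 8
# (the unitarity bound with k_ell = 0 and c_L = 8).
from fractions import Fraction

for N in range(4, 11):
    ks = [k for k in range(1, 11) if Fraction(k*(N**2 - 1), k + N) <= 8]
    print('N =', N, ' allowed k:', ks)
# N = 4: [1, 2, 3, 4]; N = 5: [1, 2]; N = 6..9: [1]; N = 10: []
\end{verbatim}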
Therefore with this realization, one can easily check that the only possible $k,N$ that satisfy the unitarity bound are: \begin{align} &(k\geq 1, N=0,1,2,3),(4\geq k\geq 1,N=4)\\&(2\geq k\geq 1,N=5),(k=1,N=6,7,8,9) \end{align} Therefore, the size of the gauge group for this theory is bounded by $N\leq 9$, at least for this realization of vectors. In the next section we will show that the theories with $k=1$ belong to the Swampland and hence the size is bounded by $N\leq 5$. To show that these are general Swampland bounds we need to show that these results hold independently of possible inequivalent realizations of the $(a,b)$ vectors in the lattice. One potential issue is that for $N\leq 3$ there are infinitely many potential solutions for the vector $b$, but the number of massless modes is still finite. However, in this work, as was discussed earlier, we will assume that there are only finitely many theories with a fixed gauge group and matter, and therefore such issues are avoided. Furthermore, according to \cite{Kumar_2010} both vectors $a,b$ need to be primitive in F-theory, and hence theories with $\lambda>1$ cannot have an F-theory construction. Combining this with our conjecture of the next section, we expect that no F-theory construction should be possible also for $\lambda=1$. A more general worry is that the above result was deduced with the assumption that $\Omega =\text{diag}(1,(-1)^{9}),a=(-3,1^9)$, while one could imagine other inequivalent choices for these. In fact, since $T\equiv1 \pmod 8$ in this case, we could either have the lattice be odd and isomorphic to $\mathbb{Z}^{T+1}$, or it can be an even lattice isomorphic to $U\oplus E_8(-1)$ with $U=\begin{pmatrix} 0& 1\\1&0 \end{pmatrix}$. One needs to ensure that these, as well as other choices for $a$, provide finite size too. Therefore, our previous general argument ensures the finiteness of these theories independently of the type of lattice or particular solution. Next we move to theories with two simple gauge group factors, summarized in Table \ref{table:2}. \begin{itemize} \item $SU(N)\times SU(N)$ \end{itemize} For $T=9$ it was shown in \cite{Kim:2019aa} that for a particular choice of $\Omega, a, b_i$ all theories with $N>9$ belong to the Swampland because they contain non-unitary strings. More general solutions can be found by noticing that $a,b_1+b_2 $ are null with $a\cdot (b_1+b_2 )=0$ and hence satisfy $-a=m (b_1+b_2)$ with $m>0$. In this case the general argument translates into the bound $N\leq 12Q^2+20$ with $Q^2$ some constant. \begin{itemize} \item $SO(2N+8)\times Sp(N)$ \end{itemize} The anomaly cancellation conditions dictate the following inner products between the vectors $a,b_i\in \mathbb{R}^{1,T}$: \begin{eqnarray}\label{ex} a\cdot b_1=2,\ a\cdot b_2=-1, \ b_1^2=-4, \ b_2^2=-1, \ b_1\cdot b_2=2 \end{eqnarray} For $T=10 $ solutions $a,b_i$ exist, but we show that unitarity ensures the finiteness of the theory. We may choose a presentation of these such that the bilinear form $\Omega$ and the vectors $a,b_1,b_2$ are given as follows: \begin{eqnarray} \begin{matrix} \Omega =\text{diag}(1,(-1)^{10}), & a=(-3, 1^{10})\\ b_1=-2 a, &b_2=(1,-1,-1,0^{8}) \end{matrix} \end{eqnarray} In this presentation one can choose $j=(1,0^{10})$, which satisfies $j\cdot a <0$ and $j \cdot b_i>0$ as desired.
Considering a BPS string with charge $Q=(q_1,\cdots, q_{11})$ satisfying the conditions (\ref{uni}), unitarity of the string worldsheet requires that: \begin{eqnarray} {k_1((2N+8)(2N+7)/2)\over k_1+2N+6} + {k_2(2N(2N+1)/2)\over k_2+(N+1)} \leq c_L \end{eqnarray} One can easily check that a minimal string charge solution can be $Q=(1,-1,0^8,-1)$, which has levels $k_1=2,k_2=0$ and central charge $c_L=8$. The unitarity bound for this string configuration reduces to: \[ {2((2N+8)(2N+7)/2)\over 2+2N+6} \leq 8\Longrightarrow N\leq 1/2 \] This seems to be reassuring because it does not rule out the theories at $ N=0$ with a single $SO(8)$, which do have known string theory realizations \cite{Morrison_2012b, martini20156d,Taylor_2012h}. As for the case of $N=1/2$, one has a single $SO(9)$ with 1 fundamental hypermultiplet, which is the unHiggsed version of the $SO(8)$ theory and, if it exists, could have the same base. \begin{itemize} \item $SU(N)\times SO(N+8)$ \end{itemize} We note that the gravitational anomaly restricts $T\leq 10$ and hence we need to ensure finiteness of the theory for $T=10$ as before. In this case the charge lattice is given by \begin{eqnarray}\label{ex4} a\cdot b_1=-1,\ a\cdot b_2=2,\ b_1^2=-1, \ b_2^2=-4, \ b_1\cdot b_2=2 \end{eqnarray} The anomaly charge lattice is identical to the previous one up to exchanging $b_1\leftrightarrow b_2$, and hence we can use those results. In other words, for $T=10$ the vectors are identical to those of the previous example but with $b_1\leftrightarrow b_2$. Therefore this string configuration with $k_2=2, k_1=0$ implies that \begin{eqnarray} {k_2((N+8)(N+7)/2)\over k_2+N+6} \leq 8\Longrightarrow N\leq 1 \end{eqnarray} Therefore, as expected this bound does not rule out the single $SO(8)\text{ or } SO(9)$ theories as discussed above. \begin{itemize} \item $SU(N)\times SU(N+8)$ \end{itemize} This family has charge lattice vectors satisfying: \begin{eqnarray}\label{ex2} a\cdot b_1=-1,\ a\cdot b_2=1,\ b_1^2=-1, \ b_2^2=-1, \ b_1\cdot b_2=1 \end{eqnarray} Similarly to the previous example, for $T=10$ such vectors exist but there are finitely many consistent unitary solutions. One such representation is given by the choice \begin{eqnarray}\label{vecs1} \begin{matrix} \Omega =\text{diag}(1,(-1)^{10}), & a=(-3, 1^{10})\\ b_1=(1,-1,-1,0^8), &b_2=-a \end{matrix} \end{eqnarray} One can easily check that $Q=(1,-1,0^8,-1)$ is a minimal string charge which satisfies eq.(\ref{uni}) with levels $k_1=0,k_2=1$ and $c_L=8$. For the string configuration to be unitary we need to satisfy: \begin{eqnarray} {((N+8)^2-1)\over 1+(N+8)} \leq 8\Longrightarrow N\leq 1 \end{eqnarray} This bound potentially allows for $N=0,1$, corresponding to $SU(8)+\ydiagram{2}$ and $SU(9)+\ydiagram{1}+\ydiagram{2}$. However, such string theory realizations are not known, and as we will argue in the next section these theories belong to the Swampland. \begin{itemize} \item $Sp(N)\times SU(2N+8)$ \end{itemize}This theory has the same anomaly lattice as (\ref{ex2}) and hence we can reuse those results. For $T=10$ vectors $a,b_i$ can be found as in (\ref{vecs1}). Therefore, for $k_1=0,\ k_2=1$ we see that unitarity implies \begin{eqnarray} {k_2((2N+8)^2-1)\over k_2+2N+8} \leq 8\Longrightarrow N\leq 1/2 \end{eqnarray} This bound allows for $N=0,1/2$, corresponding to $SU(8)+\ydiagram{2}$ and $SU(9)+\ydiagram{1}+\ydiagram{2}$, but as was discussed these theories will be ruled out in the next section.
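The level and central-charge assignments quoted for these two-factor examples are straightforward to confirm; the following sketch (illustrative) does so for the $SO(2N+8)\times Sp(N)$ presentation above:
\begin{verbatim}
# Sketch: verify that Q = (1,-1,0^8,-1) has (k1, k2) = (2, 0) and c_L = 8
# for the SO(2N+8) x Sp(N) presentation with T = 10.
import numpy as np

T = 10
eta = np.diag([1.0] + [-1.0]*T)             # bilinear form Omega
ip = lambda u, v: u @ eta @ v
a  = np.array([-3.0] + [1.0]*T)
b1 = -2*a
b2 = np.array([1.0, -1.0, -1.0] + [0.0]*8)
Q  = np.array([1.0, -1.0] + [0.0]*8 + [-1.0])
print(ip(a,b1), ip(a,b2), ip(b1,b1), ip(b2,b2), ip(b1,b2))  # 2 -1 -4 -1 2
print(ip(Q, b1), ip(Q, b2))                 # levels k1 = 2.0, k2 = 0.0
print(3*ip(Q, Q) - 9*ip(Q, a) + 2)          # c_L = 8.0
\end{verbatim}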
More generally, since for the last two examples $(a+b_2),(b_1+b_2)$ are null and orthogonal, the most general vectors needed are given by the family of solutions $a=\lambda \ b_1+(\lambda-1)b_2$ with $\lambda\leq 0$, in order to ensure positivity of the kinetic terms (for the first two examples one can replace $ b_1\to 2b_1$). Similarly to the first example, since the above theories can be Higgsed from $N$ to $N-1$, unitarity implies the finiteness of each family of theories as was discussed earlier. Next we move on to theories with three simple gauge factors. For example, the set of theories from Appendix \ref{appnon} have $b_i$'s whose inner products follow the affine ADE algebras. For example, the $\hat{A}_2$ type theory with $SU(N)^3$ and $T=9$ has the anomaly lattice: \begin{eqnarray}\label{anlat} \Lambda=\left( \begin{array}{cccc} a^2& -a\cdot b_1& -a\cdot b_2&-a\cdot b_3 \\ -a\cdot b_1&b_1^2 & b_1\cdot b_2 & b_1\cdot b_3 \\ -a\cdot b_2&b_1\cdot b_2 & b_2^2 & b_2\cdot b_3\\ -a\cdot b_3&b_1\cdot b_3 & b_2\cdot b_3 & b_3^2 \\ \end{array} \right)= \left( \begin{array}{cccc} 0& 0& 0&0 \\ 0&-2 & 1 & 1 \\ 0&1 & -2 & 1 \\ 0&1 & 1& -2 \\ \end{array} \right) \end{eqnarray} These inner products can be solved for vectors satisfying the linear relation $a=\lambda(b_1+b_2+b_3)$ for $\lambda<0$. For example, a solution to the anomaly lattice (\ref{anlat}) is given by: \begin{eqnarray} \begin{matrix} \Omega =\text{diag}(1,(-1)^{9}), & a=(-3, 1^{9})\\ b_1=(1,-1,-1,-1,0^6), &\quad b_2=(1,0^3,-1,-1,-1,0^3),& \quad b_3=(1,0^6,-1,-1,-1) \end{matrix} \end{eqnarray} One can choose $j=(1,0^9)$ and charge $Q=(1,-1,0,0,-1,0^5)$, which gives $k_1=k_2=0$, $k_3=1$ and $c_L=8$. Therefore, worldsheet unitarity implies: \begin{eqnarray}\label{sun3} {(N^2-1)\over 1+N}\leq 8 \Longrightarrow N\leq 9 \end{eqnarray} Therefore, for the particular choice of anomaly vectors this theory is finite, and theories with $N>9$ belong to the Swampland. For more general possible representations of the vectors, the argument works exactly as discussed earlier. Furthermore, other types of theories with three gauge groups can be found in Table \ref{table:3}, but they are all particular cases of those in Table \ref{table:k} for $k=3$ and hence can be handled together. \begin{itemize} \item $SU(N-8)\times SU(N)\times SU(N+8)\times \cdots \times SU(N+8(k-2))$ \end{itemize} For this theory the maximal number of tensor multiplets is $T_{max} = 8+k$, which arises for $k\leq6$, and this constitutes the only case we need to consider, as the other values of $T$ were ruled out earlier.
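Before turning to the unitarity analysis of the linear chains, the $\hat{A}_2$ data above can be verified directly (an illustrative sketch):
\begin{verbatim}
# Sketch: the A2-hat SU(N)^3 vectors reproduce the anomaly lattice, and
# Q = (1,-1,0,0,-1,0^5) gives levels (0, 0, 1) with c_L = 8.
import numpy as np

eta = np.diag([1.0] + [-1.0]*9)
ip = lambda u, v: u @ eta @ v
a  = np.array([-3.0] + [1.0]*9)
b1 = np.array([1, -1, -1, -1, 0, 0, 0, 0, 0, 0], dtype=float)
b2 = np.array([1, 0, 0, 0, -1, -1, -1, 0, 0, 0], dtype=float)
b3 = np.array([1, 0, 0, 0, 0, 0, 0, -1, -1, -1], dtype=float)
Q  = np.array([1, -1, 0, 0, -1, 0, 0, 0, 0, 0], dtype=float)
print([ip(a, v) for v in (a, b1, b2, b3)])           # all 0
print([[ip(u, v) for v in (b1, b2, b3)] for u in (b1, b2, b3)])
#   -> [[-2, 1, 1], [1, -2, 1], [1, 1, -2]]
print([ip(Q, v) for v in (b1, b2, b3)])              # levels [0, 0, 1]
print(3*ip(Q, Q) - 9*ip(Q, a) + 2)                   # c_L = 8.0
\end{verbatim}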
The anomaly charge lattice of the strings of this chain theory, determined by the type of gauge group and matter, is given by: \begin{align}\Lambda=\left( \begin{matrix} a^2 & -a\cdot b_1 & -a\cdot b_2& -a\cdot b_3 &\cdots & -a\cdot b_k \\ -a\cdot b_1 & b_1^2 & b_1\cdot b_2 & b_1\cdot b_3 &\dots & b_1\cdot b_k \\ -a\cdot b_2& b_1\cdot b_2 & b_2^2& b_2\cdot b_3 &\dots & b_2\cdot b_k \\ \vdots &\vdots & \vdots &\vdots & \ddots &\vdots \\ -a\cdot b_k& b_1\cdot b_k & b_2\cdot b_k & b_3\cdot b_k &\dots & b_k\cdot b_k \end{matrix} \right)= \left( \begin{matrix} 9-T & 1 & 0 & 0 & \cdots & -1 \\ 1 & -1 & 1 & 0 & \cdots & 0 \\ 0 & 1 & -2 & 1 & \dots & 0 \\ 0 & 0 & 1 & -2 & \cdots & 0 \\ \vdots & \vdots & \vdots &\vdots &\ddots & \vdots\\ -1 & 0 & 0 & 0 & 1 & -1 \\ \end{matrix} \right) \end{align} We may consider a particular solution for the vectors $a,b_i$ given by: \begin{eqnarray} \begin{matrix} a=(-3,1,1,1\cdots, 1),&b_1=(1,-1,-1,0\cdots, 0),&b_2=(0,0,1,-1\cdots, 0)\\ b_3=(0,0,0, 1,-1\cdots, 0),&b_{i}=(0\cdots, 1,-1,\cdots, 0),& b_k =-a+\sum _{i=1}^{k-2} b_i (-i+k-1)\\ \end{matrix} \end{eqnarray} Moreover, we also need to identify a consistent K\"ahler form $j$, and we would like to make a minimal choice of string charge $Q$ for each $k$. For $k=3 $ we may choose: $j=(2,0,0,1,0^8), Q=(1,-1,0^9,-1)$, for which $k_1=0,k_2=0,k_3=1,c_L=8$. Therefore, string unitarity can be expressed as: \begin{eqnarray} {k_3((N+8)^2-1)\over k_3+N+8} \leq 8\Longrightarrow N\leq 1 \end{eqnarray} For $k=4 $ we may choose: $j=( 3,0,0,1,2,0^8 ),Q=(1,-1,0^{10},-1)$ with $k_1=0,k_2=0,k_3=0,k_4=1,c_L=8$ \begin{eqnarray} {k_4((N+8(4-2))^2-1)\over k_4+N+8(4-2)} \leq 8\Longrightarrow N\leq -7 \end{eqnarray} For $k=5 $ we may choose: $j=(4,0,0,1,2,3,0^8),Q=(1,-1,0^{11},-1)$ with $k_1=0,k_2=0,k_3=0,k_4=0,k_5=1,c_L=8$ \begin{eqnarray} {k_5((N+8(5-2))^2-1)\over k_5+N+8(5-2)} \leq 8\Longrightarrow N\leq -15 \end{eqnarray} For $k=6 $ we may choose: $j=(6,0,0,1,2,3,4,0^8),Q=(1, -1, 0^{12}, -1)$ with $k_1=0,k_2=0,k_3=0,k_4=0,k_5=0,k_6=1,c_L=8$. \begin{eqnarray} {k_6((N+8(6-2))^2-1)\over k_6+N+8(6-2)} \leq 8\Longrightarrow N\leq -23 \end{eqnarray} We therefore conclude that the above inequalities suggest that only $SU(9)+1\ydiagram{1}+1\ydiagram{2}$ and $SU(8)+1\ydiagram{2}$ are allowed, which as was discussed earlier will be ruled out in the next section. \begin{itemize} \item $ Sp((N-8)/2)\times SU(N)\times SU(N+8)\times \cdots \times SO(N+8(k-2)) $ \end{itemize} The anomaly lattice is the same as in the previous theory except with $b_k\to2 b_k$, and the maximum $T_{max}=k+8$ is attained for $k\leq 7$.
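As a cross-check of the first chain family's data for $k=3$ (an illustrative sketch, using the closure relation $b_k=-a+\sum_i(k-1-i)b_i$ above):
\begin{verbatim}
# Sketch: the k = 3, T = 11 solution of the first chain family above.
# b_3 = -a + b_1 closes the lattice, and Q has levels (0, 0, 1), c_L = 8.
import numpy as np

T = 11
eta = np.diag([1.0] + [-1.0]*T)
ip = lambda u, v: u @ eta @ v
a  = np.array([-3.0] + [1.0]*T)
b1 = np.array([1, -1, -1] + [0]*9, dtype=float)
b2 = np.array([0, 0, 1, -1] + [0]*8, dtype=float)
b3 = -a + b1                       # b_k = -a + sum_{i<=k-2} (k-1-i) b_i
Q  = np.array([1, -1] + [0]*9 + [-1], dtype=float)
print(ip(b3, b3), ip(a, b3), ip(b1, b3), ip(b2, b3))  # -1.0 1.0 0.0 1.0
print([ip(Q, v) for v in (b1, b2, b3)])               # [0.0, 0.0, 1.0]
print(3*ip(Q, Q) - 9*ip(Q, a) + 2)                    # c_L = 8.0
\end{verbatim}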
For the $Sp((N-8)/2)\times SU(N)\times \cdots \times SO(N+8(k-2))$ family, for $k=3$ we have $k_1=0,k_2=0,k_3=2,c_L=8$, giving us: \begin{eqnarray} {2((N+8)(N+7)/2)\over 2+N+6} \leq 8\Longrightarrow N\leq 1 \end{eqnarray} For $k=4$ we have $k_1=0,k_2=0,k_3=0,k_4=2,c_L=8$, giving us: \begin{eqnarray} {2((N+8(4-2))(N+8(4-2)-1)/2)\over 2+N+8(4-2)-2} \leq 8\Longrightarrow N\leq -7 \end{eqnarray} For $k=5$ we have $k_1=0,k_2=0,k_3=0,k_4=0,k_5=2,c_L=8$, giving us: \begin{eqnarray} {2((N+8(5-2))(N+8(5-2)-1)/2)\over 2+N+8(5-2)-2} \leq 8\Longrightarrow N\leq -15 \end{eqnarray} For $k=6$ we have $k_1=0,k_2=0,k_3=0,k_4=0,k_5=0,k_6=2,c_L=8$, giving us: \begin{eqnarray} {2((N+8(6-2))(N+8(6-2)-1)/2)\over 2+N+8(6-2)-2} \leq 8\Longrightarrow N\leq -23 \end{eqnarray} For $k=7$ we may choose $j=(8,0,0,1,2,3,4,5,0^8),Q=(1, -1, 0^{13}, -1)$ with \begin{eqnarray} k_1=0,k_2=0,k_3=0,k_4=0,k_5=0,k_6=0,k_7=2,c_L=8 \end{eqnarray} \begin{eqnarray} {2((N+8(7-2))(N+8(7-2)-1)/2)\over 2+N+8(7-2)-2} \leq 8\Longrightarrow N\leq -31 \end{eqnarray} Therefore, unitarity implies that the only theories that survive are $SO(9)+1\ydiagram{1}$ or $SO(8)$, which have been discussed earlier in this section. Finally, there are two more infinite families of this type that can be found by replacing $Sp\to SU$ or $SO\to SU$, giving us identical results to those above. \begin{itemize} \item $SU(N)^k$ \end{itemize} Earlier in this section the cases $k=2,3$ were shown to be finite and hence we need to focus on $k>3$ for $T=9$. The anomaly lattice of this theory is determined by the inner products: \begin{eqnarray} a^2=0, a\cdot b_i=0, b_i \cdot b_{i+1}=1,b_i^2=-2, b_1\cdot b_k=1 \end{eqnarray} Taking the quadratic form to be $\Omega=\text{diag}(1,(-1)^9)$, a solution to the anomaly lattice for $k\leq 9$ (the upper bound was determined by requiring the anomaly lattice to embed into $\Gamma$) is: \begin{eqnarray} \begin{matrix} a=(-3,1,1,1\cdots, 1),&b_1=(0,1,-1,0\cdots, 0),&b_2=(0,0,1,-1\cdots, 0)\\ b_3=(0,0,0, 1,-1\cdots, 0),&b_{i}=(0\cdots, 1,-1,\cdots, 0),& b_k =-a-\sum _{i=1}^{k-1} b_i \\ \end{matrix} \end{eqnarray} A compatible K\"ahler form can be found, for example $j=(4,1,2,\cdots ,9)$. For $k<9$ a minimal choice of BPS string charge is $Q=(1,-1,0\cdots, 0 ,-1)$, which has $Q^2=-1$ and $Q\cdot a=-1$ and $k_1=1,k_{i}=0, k_k =0$, and hence \begin{eqnarray} {(N^2-1)\over 1+N }\leq c_L=8\Longrightarrow N\leq 9 \end{eqnarray} which is the same result we found previously for $k=3$. For $k=9$ a minimal choice of BPS string charge is $Q=(1,-1,0\cdots, 0 ,0)$, which has $Q^2=0$ and $Q\cdot a=-2$ and $k_1=1,k_{i}=0, k_k =1$, and hence \begin{eqnarray} {2(N^2-1)\over 1+N }\leq c_L=20\Longrightarrow N\leq 11 \end{eqnarray} As presented earlier in the section and in the Appendix, one can see that there are more theories that we could analyze, but the methods are parallel to those already discussed. Therefore, the general argument at the beginning of the section applies to those infinite families too, and similar choices of solutions to those already made would reveal potential upper bounds for the sizes of the gauge groups. We note that our general argument restricted the dimension of each gauge group to be finite. Additionally, we were able to show that a number of theories with $SU(N),SO(N),Sp(N/2)$ type gauge groups may only have finitely many simple gauge groups by studying the embedding of the anomaly lattice into the full 6d string lattice. However, more theories can be constructed with bounded dimension and an unbounded number of tensor multiplets allowed by anomalies.
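The cyclic $SU(N)^k$ solution above can also be checked numerically (illustrative sketch, here with $k=5$):
\begin{verbatim}
# Sketch: the cyclic SU(N)^k solution (k = 5, T = 9) reproduces the
# lattice a.b_i = 0, b_i^2 = -2, b_i.b_{i+1} = 1 (cyclically), and
# Q = (1,-1,0,...,0,-1) has k_1 = 1, all other levels 0, c_L = 8.
import numpy as np

k, T = 5, 9
eta = np.diag([1.0] + [-1.0]*T)
ip = lambda u, v: u @ eta @ v
a = np.array([-3.0] + [1.0]*T)
b = [np.zeros(T + 1) for _ in range(k)]
for i in range(k - 1):
    b[i][i + 1], b[i][i + 2] = 1.0, -1.0
b[k - 1] = -a - sum(b[:k - 1])        # b_k = -a - (b_1 + ... + b_{k-1})
Q = np.array([1.0, -1.0] + [0.0]*(T - 2) + [-1.0])
print([round(ip(a, bi)) for bi in b])                       # all 0
print([round(ip(bi, bi)) for bi in b])                      # all -2
print([round(ip(b[i], b[(i + 1) % k])) for i in range(k)])  # all 1
print([round(ip(Q, bi)) for bi in b])                       # [1, 0, 0, 0, 0]
print(3*ip(Q, Q) - 9*ip(Q, a) + 2)                          # c_L = 8.0
\end{verbatim}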
Recall that the gravitational anomaly bound is given by \begin{eqnarray}\label{gr2} H_{ch}-V\leq 273-29 T \end{eqnarray} As we have seen before, constructing theories with arbitrarily many gauge factors not restricted by anomalies requires $H-V<0$, so that eq. (\ref{gr2}) is always satisfied. Therefore, if one could choose more theories of finite dimension and minimal matter that satisfy the anomaly conditions but have negative $H-V$, then it could be possible to have an unbounded number of those. Additionally, assuming that $H_{ch}-V<0$ for a given simple gauge group, we can rearrange eq. (\ref{gr2}) to write it as \begin{eqnarray} T\leq {273\over 29}-{(H_{ch}-V)k\over 29} \end{eqnarray} where $k$ is the number of simple gauge factors. However, as was discussed earlier in this section, one needs to be able to embed the anomaly lattice in the full string lattice of the 6d theory and hence satisfy $k\leq T$. This is possible for unbounded $k$ only if $(H_{ch}-V)\leq -29$. Examples of theories with minimal matter include the NHC's and more can be found in \cite{Morrison_2012, Heckman_2019}. For example, pure $SO(8)$ has $H_{ch}-V=-28$ and $SO(9)+1\ydiagram{1}$ has $H_{ch}-V=-27$. However, neither satisfies $H_{ch}-V\leq -29$, and hence one cannot have an infinite number of those. Likewise, also $SU(3)^k$ is bounded because $H_{ch}-V=-8$ per factor and hence $k(29-8)\leq 273$, i.e. $k\leq 13$. Also, for $(g_2\times SU(2))^k$ one has $H_{ch}-V=-9$ and for $(SU(2)\times SO(7)\times SU(2))^k$ one has $H_{ch}-V=-11$. Therefore, from the NHC's the following are compatible with $(H_{ch}-V)\leq -29$: \begin{itemize} \item $f_4$ with $b_i\cdot b_i=-5$. The gravitational anomaly determines that $T\leq\frac{52 k}{29}+\frac{273}{29} $. For example when $T=k+9$ we can find solutions of the form: \begin{align} \hskip -1 cm & a=(-3,1^T)\\ & b_{1}=(-1,-1,-1,2,0^{T - 3 }) \\ & b_{2}=(0,0,-2,-1,0^{T - 3 }) \\ & \vdots \\ & b_{i}=(-1,-1,0^{2(i-1)},-1,2,0^{T - 1- 2 i }) \\ & b_{i+1}=(0^{2{ i}},-2,-1,0^{T - 1- 2 { i }}) \\ & \vdots \\ & b_{k-1}=(-1,-1,0^{2(k/2-1)},-1,2,0^{T - 1- 2 k/2}) \\ & b_{k}=(0^{2 k/2},-2,-1,0^{T - 1- 2 { k/2 }}) \end{align} If $k$ is odd, just replace $k/2\to \lfloor k/2\rfloor$ and $k\to k-1$ and add as the last vector: $ b^{odd}_{k}=(-1,-1,0^{2(\lfloor{k/2}\rfloor)},-1,2,0^{T - 3- 2 \lfloor{k/2}\rfloor }) $. As for the K\"ahler class we can choose: \begin{eqnarray} j=(-j_0,1^T) \text{ for } {T\over 3}\geq j_0>\sqrt{T }, \ \text{e.g.} \ j_0 =\lfloor{k/3}\rfloor -1 \text{ and } k\geq 21 \end{eqnarray} where the upper bound is chosen such that $-j\cdot a >0$ and the lower bound to ensure $j^2>0$. Moreover, it is also simple to check that $j\cdot b_i>0.$ One could find more solutions for small $k$, but in this work we are only interested in large $k$ and hence we will not attempt to enumerate those. This choice of vectors shows that anomalies permit unboundedly many such gauge factors. However, one could consider a string with minimal charge $Q=(-q,0^T)$. This choice of charge has: $k_{i=odd}=q,k_{i=even}=0$, $k_\ell=q^2+3q+2\geq 0, c_R=3q^2-9q\geq 0$, true for $q \geq 3 $. However, imposing worldsheet unitarity \begin{eqnarray} {3(q^2+3q+2)\over 2+(q^2+3q+2) }+ \lceil{k\over 2}\rceil{52\over q+9}\leq 3q(q-9)+2 \end{eqnarray} one can note that the inequality cannot be satisfied when $3\leq q\leq 9$ for any $k$ (with lower bound as discussed above). In particular, more generally these solutions are valid for any $k+2\leq T\leq\frac{52 k}{29}+\frac{273}{29} $ and hence similarly restrict $k$, just as we saw above.
\item $e_6$ with $b_i\cdot b_i=-6$. The gravitational anomaly imposes that $T\leq \frac{78 k}{29}+\frac{273}{29}$. For example a solution can be found when $T=2k+9$: \begin{align} \hskip -1 cm & a=(-3,1^T)\\ & b_{1}=(-1,-1,1,-2,1,0^{T - 4 }) \\ & b_{2}=(0^3,-1,-2,-1,0^{T-5 })\\ & \vdots \\ & b_{i}=(-1,-1,0^{4(i-1)},1,-2,1,0^{T - 4 i }) \\ & b_{i+1}=(0^3,0^{4(i-1)},-1,-2,-1,0^{T-4 i-1 })\\ & \vdots \\ & b_{k-1}=(-1,-1,0^{4(k/2-1)},1,-2,1,0^{T - 4 k/2 }) \\ & b_{k}=(0^3,0^{4(k/2-1)},-1,-2,-1,0^{T-4 k/2-1 }) \end{align} If $k$ is odd, as before we can replace $k/2\to \lfloor k/2\rfloor$ and $k\to k-1$ and add as the last vector: $ b^{odd}_{k}=(-1,-1,0^{4(\lfloor k/2\rfloor)},-1,-2,-1,0^{T - 4 (\lfloor k/2\rfloor+1) }) $. As for the K\"ahler class we can choose: $ j=(-j_0,1^T) \text{ for } {T\over 3}\geq j_0\geq \sqrt{T } $. However, just as we saw above, strings with charge $Q=(-q,0^T)$ and $3\leq q<10$ satisfy $c_R\geq 0, k_\ell\geq 0 $ but are non-unitary because the unitarity relation cannot be satisfied: \begin{eqnarray} {3(q^2+3q+2)\over 2+(q^2+3q+2) }+ \lceil{k\over 2}\rceil{78\over q+12}\leq 3q(q-9)+2 \end{eqnarray} More generally, these solutions can be adjusted and used for any $ 2k+2\leq T$. Apart from the NHC, one can note that also $e_6 $ with $1$ fundamental hypermultiplet is possible \cite{Heckman_2019}. This theory has $b_i^2=-5, a\cdot b_i=3$ and $-51 k \leq 273-29T$. The analysis of this is very similar to $f_4$ above, so we will not repeat it. \item $e_7$ with $b_i\cdot b_i=-7$ and ${1\over 2}56$ matter. The gravitational anomaly imposes that $T\leq \frac{105 k}{29}+\frac{273}{29}$. For example a solution can be found when $T=3k+9$: \begin{align} \hskip -1 cm & a=(-3,1^T)\\ & b_{1}=((-1)^2,-1,-2,(1)^2,0^{T-5}), \\& b_{2}=(0^2,(-1)^2,-2,-1,0^{T-5 }) \\& \vdots \\ & b_{i}=((-1)^2,0^{5(i-1)},-1,-2,(1)^2,0^{T-5 i }), \\& b_{i+1}=(0^2,0^{5(i-1)},(-1)^2,-2,-1,0^{T-5 i }) \\& \vdots \\ & b_{k-1}=((-1)^2,0^{5(k/2-1)},-1,-2,(1)^2,0^{T-5 k/2 }), \\& b_{k}=(0^2,0^{5(k/2-1)},(-1)^2,-2,-1,0^{T-5 k/2 }) \end{align} Similarly, as above, strings with charge $Q=(-q,0^T)$ where $3\leq q<10$ are non-unitary because they do not satisfy the unitarity relation: \begin{eqnarray} {3(q^2+3q+2)\over 2+(q^2+3q+2) }+ \lceil{k\over 2}\rceil{133\over q+18}\leq 3q(q-9)+2 \end{eqnarray} These solutions can be used for any $T$ such that $3k+1\leq T$, giving the same result. \item $e_7$ with $b_i\cdot b_i=-8$. The gravitational anomaly imposes that $T\leq \frac{133 k}{29}+\frac{273}{29}$. For example the following solutions can be found when $T=4k+9$: \begin{align} \hskip -1 cm & a=(-3,1^T)\\ & b_{1}=((-1)^2,(-1)^2,-2,(1)^2,0^{T - 6 }) \\ & b_{2}=(0^3,(-1)^3,-2,-1,0^{T-7 })\\ &\vdots \\ & b_{i}=((-1)^2,0^{6(i-1)},(-1)^2,-2,(1)^2,0^{T - 6 i }) \\ & b_{i+1}=(0^3,0^{6(i-1)},(-1)^3,-2,-1,0^{T-6 i-1 }) \\ & \vdots\\ & b_{k-1}=((-1)^2,0^{6(k/2-1)},(-1)^2,-2,(1)^2,0^{T - 6 k/2 }) \\ & b_{k}=(0^3,0^{6(k/2-1)},(-1)^3,-2,-1,0^{T-6 k/2-1 }) \end{align} For the strings of charge $Q=(-q,0^T)$ one has $k_{i=odd}=q,k_{i=even}=0$, $k_\ell \geq 0, c_R\geq 0$, true for $q \geq 3 $, but the unitarity bound: \begin{eqnarray} {3(q^2+3q+2)\over 2+(q^2+3q+2) }+ \lceil{k\over 2}\rceil{133\over q+18}\leq 3q(q-9)+2 \end{eqnarray} shows that strings with $3\leq q\leq 10$ are non-unitary. Generically, we can find such solutions for all $3k+3\leq T$. Apart from the two NHCs we studied, one can note that also $e_7 $ with $1$ or ${3\over 2}$ fundamental hypermultiplets is possible.
These theories have $b_i^2=-6/-5, a\cdot b_i=4/3$ and $-77 k /-49 k \leq 273-29T$ respectively. The analysis of these is very similar to $f_4,e_6$ above and hence we will not repeat it. \item $e_8$ with $b_i\cdot b_i=-12$. A specific solution for this theory for large $T$ is discussed in \cite{Kumar_2010, Kim:2019aa}, where in the latter work it is shown that $k$ cannot be arbitrarily large for that solution. \end{itemize} Even though in the last four cases we do not have a more general way to show that there can only be finitely many factors, the solutions above seem to suggest so. To sum up, in this section we have shown that certain theories which could potentially be allowed to have arbitrarily large size or arbitrarily many gauge factors have in fact an upper bound, or a more careful analysis reveals that they do not exist. This gives a positive answer to the assumption of the Lamppost principle that there should be an upper bound on the number of massless modes in a theory of quantum gravity, at least for the majority of the proposed infinite families of anomaly-free matter content. \section{A bound on the matter representations} \ytableausetup{boxsize=0.7 em,aligntableaux = center} In this section we will propose further consistency conditions that need to be imposed for a consistent 6d supergravity theory. In the previous section we summarized how the existence of BPS strings strongly constrains the bulk theory. In particular, the bulk gauge groups emerge as current algebras on the 2d worldsheet and, together with unitarity on the worldsheet, one can impose constraints on the rank of the gauge groups. Such techniques were used in \cite{Kim_2020d} to put an upper bound on the rank of gauge groups for all supergravity theories with 16 supercharges. Moreover, BPS strings imposed constraints on theories with 8 supercharges in 5d \cite{Katz_2020} and 6d \cite{Lee_2019, Kim:2019aa}, for abelian and non-abelian theories respectively. Furthermore, we extensively used such techniques in the previous section. Here we will introduce another constraint that the 6D theory needs to satisfy, associated with consistency of the 2d worldsheet with a specific type of bulk matter. In particular, we now argue that massless matter hypermultiplets in the bulk correspond to relevant/marginal vertex operators on the string. Evidence to support this claim comes from the following: when giving a vev to a charged massless hypermultiplet that can Higgs the bulk gauge group, the worldsheet theory of the BPS string, which carries a flavor current associated to that group, should get deformed. This is because the gauge symmetry in the bulk induces the flavor symmetry on the BPS string, and consequently the Higgsing process also reduces the flavor symmetry on the BPS string. This means that there must exist a relevant/marginal deformation of the BPS worldsheet associated to a primary field in representation $\textbf{R}$ of the matter field on the worldsheet (note that non-primary fields, except for the current itself, always have dimension bigger than 1). Since the current is on the left-moving sector of the string, which is non-supersymmetric, this means that there is an operator of left-moving dimension less than or equal to 1 associated to a primary field of representation $\textbf{R}$.
This argument can be extended to all massless representations regardless of whether they can Higgs the gauge group: having massless fields in the representation $\textbf{R}$ of a gauge group should lead to at most marginally irrelevant deformations. In other words, giving a vev to them is obstructed by more than quadratic terms in the bulk theory. So at the quadratic/leading level they behave as if they are Higgsing the bulk theory and so should be at most marginally irrelevant, i.e. of dimension no more than 1. A simple example of this condition is realized in the heterotic string on $K3$, where the massless charged fields are represented by primary fields with (left,right) dimension $(1,1/2)$ of the $(0,4)$ supersymmetric theory on the worldsheet. To summarize, we have argued that the hypermultiplets transforming in a particular representation $\textbf{R}$ need to satisfy the following conditions: \begin{framed} \begin{enumerate} \item The vertex operator of the massless modes with representation $\textbf{R}$ of $G$ with conformal weight $\Delta_ \textbf{R}= {C_2(\textbf{R})\over 2 (k+h^\vee)}$, where $C_2(\textbf{R})$ is the second Casimir of $\textbf{R}$, must obey: \begin{eqnarray} \Delta_ \textbf{R}\leq 1 \end{eqnarray} \item The representation $\textbf{R}$ of a primary with highest weight $\mathbf{\Lambda}=(\Lambda_1,\cdots,\Lambda_{r})$, where $r$ is the rank of the Lie algebra, must satisfy: \begin{eqnarray}\label{condition1} \sum_{i=1}^r \Lambda_i \leq k \end{eqnarray} where $k$ is the level of the current algebra of $G$ on the worldsheet. \end{enumerate} \end{framed} The first condition, as discussed above, requires the hypermultiplet states of the spacetime theory to appear as vertex operators in the WZW model, and in particular they need to be relevant/marginal primary fields. Therefore, the conformal dimension associated to the hypermultiplets can be at most $1$. The second condition is a standard result for highest-weight representations of Kac-Moody algebras \cite{DiFrancesco:1997nk}. In addition, these inequalities are independent of the dimension of spacetime and can also be extended to BPS strings in 5d and 4d. For example, in 5d $N=1$ we have monopole strings which need to satisfy the above consistency conditions in the presence of bulk matter, hence constraining the possible representations that can appear. For example, consider the 5d $N=1$, $SU(2)\times U(1) $ theory constructed in \cite{Katz_2020}, with the geometry being the singular quintic with an $A_1$ singularity along a curve of degree $d$ and genus $g$. Assuming that $H$ is the proper transform of the hyperplane class of the quintic and $E$ the exceptional divisor of the blowup, the following relations are true: \begin{eqnarray} H^3=5,H^2 E=0,HE^2=-2d,E^3=4-4g-5d \end{eqnarray} In this case the 't Hooft anomaly of the non-abelian gauge symmetry is given by: \begin{eqnarray} {-1\over 4 }k_i \,\mathrm{tr}F_i^2 \end{eqnarray} with $k_i=-h_{i,a}q^a$, where $h_{i,a}$ are the coefficients in the gauge coupling $h_i $ for $G_i$ in the bulk effective action and $q^a$ the string charges. Therefore, the levels of $U(1)$, $SU(2)$ with divisors $H,E$ respectively and $q=(1,0)$ are: \begin{eqnarray} k_0 =C_{000}=H^3, k_1=-{3\over 6 } C_{011}={-3\over 6}HE^2=d \end{eqnarray} which implies that condition (\ref{condition1}) is given by: \begin{eqnarray} \sum_i \Lambda_i\leq d \end{eqnarray} Therefore, for a degree $d=1$ curve we can only have fundamental matter in the $\textbf{2}$ of $SU(2)$.
This is in accordance with the fact that \begin{eqnarray} E^3=4-4g-5d=-1 \text{\ for\ }d=1,g=0 \end{eqnarray} was interpreted as having $N=9$ fundamental hypers rather than $1 $ adjoint and $1$ fundamental, since the genus was zero. Geometrically, this is the fact that there is no degree 1, genus 1 curve. However, if $d=2$ we have \begin{eqnarray} \sum_i \Lambda_i\leq 2 \end{eqnarray} and say $E^3=-6$ could be either $N=14$ fundamental hypers or $N=6$ fundamental hypers and 1 adjoint. In other words, our inequality does not restrict which case it is. From geometry we know that the first case is correct in this example, because $E^3=4-4g-5d=-10$ for a genus 1 and degree 2 curve. Returning to 6d, we are interested in seeing how these inequalities can help us as Swampland conditions. Let us start by considering the 6d supergravity theory coupled to $SU(N)$ with $ (N-8)\ \ydiagram{ 1}+1 \ \ydiagram{ 2}$. The gravitational anomaly restricts these theories to exist up to $T=10$, and the gauge/gravitational anomalies are cancelled for $a\cdot b =1, b \cdot b =-1$. We can choose a basis such that the bilinear form and the vectors $a, b$ are given by: \begin{eqnarray} \Omega =\text{diag}(1,(-1)^T),\ a=(-3,1^{T}), \ b=(0^{T},-1) \ \end{eqnarray} In this particular basis we can choose the K\"ahler form to be $J=(n ,0^{T-1},1)$, which satisfies $J^2\geq 1$ for $n\geq 2$ and $J\cdot a < 0 , J \cdot b > 0 $. Now we consider a BPS string with charge $Q=(q_0,\cdots, q_T )$ which must satisfy eq.(\ref{uni}): \begin{eqnarray} q_0^2-\sum_{i=1}^Tq_i^2\geq -1, \ q_0^2-\sum_{i=1}^Tq_i^2-3 q_0 -\sum_{i=1}^Tq_i\geq -2, k =Q\cdot b \geq 0 \end{eqnarray} A string charge consistent with these inequalities is $Q=(3,0^{T-1},1)$, which gives level $k=1$ for any $T$. We can now use eq.(\ref{condition1}), which states that every representation should satisfy: \begin{eqnarray} \sum_i \Lambda_i \leq k =1 \end{eqnarray} However, the symmetric representation has highest weight $\Lambda=(2,0^{N-2})$ and therefore does not satisfy this inequality. We conclude that this theory belongs to the Swampland. This is consistent with the observation in \cite{Kumar_2010} that for $T=1$ this theory has no F-theory realization. Another example, which was also discussed in the previous section, is $SU(N)+1\textbf{Adj} \text{ or } 1\ydiagram{2}+1\ydiagram{1,1}$ with $T= 9$. We found that the following choices of $(k,N)$ are consistent by using unitarity considerations: \begin{align} &(k\geq 1, N=0,1,2,3),(4\geq k\geq 1,N=4)\\&(2\geq k\geq 1,N=5),(k=1,N=6,7,8,9) \end{align} However, if we apply condition (\ref{condition1}) we see that $k=1$ is not a consistent choice, because both the adjoint and symmetric representations have $\sum_i\Lambda_i=2$. Therefore, in particular, all theories with $N>5$ belong to the Swampland. \ytableausetup{boxsize=0.3 em,aligntableaux = center} Consequently, the second condition has helped us rule out theories that do not have string theory realizations but that methods such as the unitarity bounds of the previous section did not exclude. However, the first condition, even though non-trivial, we did not find useful in these examples. The issue is that for the simple representations we consider here it is automatically satisfied (for example $\Delta_{\ydiagram{1}}={(N^2-1)\over 2N(k+N)}$, $\Delta_{\ydiagram{1,1}}=\frac{(N-2) (N+1)}{N (k+N)}$, $\Delta_{\textbf{Adj}}={N\over N+k}\leq 1$). Therefore, this condition could have a chance to be useful for higher-index symmetric and antisymmetric representations and exotic ones.
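The two conditions are easy to tabulate for the common $SU(N)$ representations; the sketch below (illustrative, using the normalization in which $\Delta_{\textbf{Adj}}=N/(k+N)$, i.e. $C_2(\ydiagram{1})=(N^2-1)/N$) reproduces the weights quoted above:
\begin{verbatim}
# Sketch: conformal weights Delta_R = C2(R)/(2(k + h_vee)) and weight sums
# for SU(N) representations, in the normalization with C2(Adj) = 2N.
import sympy as sp

N, k = sp.symbols('N k', positive=True)
C2 = {'fund':    (N**2 - 1)/N,
      'antisym': 2*(N - 2)*(N + 1)/N,
      'sym':     2*(N + 2)*(N - 1)/N,
      'Adj':     2*N}
sum_Lambda = {'fund': 1, 'antisym': 1, 'sym': 2, 'Adj': 2}
for R in C2:
    print(R, sp.simplify(C2[R]/(2*(k + N))),
          '| sum Lambda_i:', sum_Lambda[R])
# Condition 2 with k = 1 already excludes 'sym' and 'Adj'.
\end{verbatim}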
However, most such examples constructed are for $T=0$ \cite{Kumar_2011}, but those theories tend to have a very large level $k$ since $a,b$ are scalars. Therefore, we would expect this condition to be more useful once a full 6d supergravity classification is considered and more exotic representations appear for large $T$. \section{Future directions} In this work we have argued, using a combination of anomaly conditions and unitarity on BPS strings, that at least all the proposed 6d supergravity theories have an upper bound on the number of massless modes. Furthermore, completeness of spectrum led us to a new Swampland constraint that helps restrict the types of representations that can appear in a consistent theory of gravity. Those constraints helped us exclude theories that have no string theory realization, hence strengthening the validity of the SLP. It would be interesting to generalize the finiteness argument for representation types as well. In particular, for abelian theories, the charged fields, even though finite in number, are known to have an infinite family of allowed charges \cite{Taylor_2018,raghuram2020automatic}. We expect these, with the exception of a finite number of them, to belong to the Swampland. Therefore, it is important to develop further techniques to rule these out. Finally, another interesting direction would be to try and enumerate all the gauge groups that actually do appear in the string landscape and provide an explanation of their appearance using only Swampland principles, without specifying the particular UV completion. \section{Acknowledgments} We would like to thank Washington Taylor, Sheldon Katz, Hee-Cheol Kim, Guglielmo Lockhart and Noam D. Elkies for valuable discussions. The research of HCT and CV is supported in part by the NSF grant NSF PHY-2013858 and by a grant from the Simons Foundation (602883, CV).
\section{Introduction} Rough Differential Equations (RDE) are natural extensions of Ordinary Differential Equations (ODE) to equations driven by rough signals~\cite{lyons98a,lyons02b,friz,friz14a}. More precisely, RDE are equations of type \begin{equation} \label{eq:rde:intro} y_{t,s}=a+\int_s^t f(y_{r,s})\,\mathrm{d} \mathbf{x}_{r},\ t\in[s,T], \end{equation} where $\mathbf{x}$ is a $p$-\emph{rough path} lying above a continuous path $x$ of finite $p$-variation living in a Banach space~$\mathrm{U}$. The order $\floor{p}$ determines the tensor space in which $\mathbf{x}$ lives and the iterated integrals of $x$ to use. The minimal regularity of the vector field $f$ also depends on $p$. The solution~$y$ is itself of finite $p$-variation, living in a finite- or infinite-dimensional Banach space~$\mathrm{V}$. One of the main features of the theory of rough paths is the continuity of the \emph{Itô map} $\mathbf{x}\mapsto y$. When $x$ is differentiable, \eqref{eq:rde:intro} is understood as the ODE $y_t=a+\int_0^t f(y_s)\dot{x}_s\,\mathrm{d} s$. As for ODE, we recover Cauchy-Peano and Cauchy-Lipschitz (or Picard-Lindelöf) type results, where existence follows from the Schauder fixed point theorem, or from the Picard fixed point theorem under stronger regularity conditions on the vector field $f$. The latter case implies uniqueness of solutions as well as extra properties. Existence of solutions to \eqref{eq:rde:intro} was first proved by T.~Lyons using a fixed point theorem~\cite{lyons98a}. In \cite{davie05a}, A.M. Davie proposed an alternative approach based on discrete approximations, so that solutions are constructed as limits of numerical schemes based on Taylor expansions. P.~Friz and N.~Victoir \cite{friz2008,friz14a} have proposed another approximation based on sub-Riemannian geodesics, yielding again the convergence of numerical schemes. More recently, I.~Bailleul has developed a framework in which the central tools are flows associated to \eqref{eq:rde:intro} and their approximations \cite{bailleul12a,bailleul13b,bailleul17a}. By flows, we mean the family of solutions $a\in\mathrm{V}\mapsto y_{t,s}(a)$ when the latter satisfies $y_{t,s}\circ y_{s,r}=y_{t,r}$ for any $r\leq s\leq t$. The approximations of the flow proposed by I.~Bailleul, A.~M. Davie and P.~Friz-N.~Victoir are all different, although they give rise to the same flow. In \cite{brault1,brault2}, we have proposed an \textquote{agnostic} framework for dealing directly with flows without referring to a particular approximation. Only a broad condition is given on the approximations of the flows, called \emph{almost flows}, to obtain a \emph{non-linear sewing lemma}, a natural extension of the additive and multiplicative sewing lemmas~\cite{lyons98a,feyel}. When the underlying space $\mathrm{V}$ is finite-dimensional, a measurable flow may exist even when several solutions to \eqref{eq:rde:intro} are known to exist \cite{brault1}. When the flow is Lipschitz, it is uniquely associated to any almost flow in the same quotient class, called a \emph{galaxy}, a notion which reflects the \textquote{closeness} between the two objects. In \cite{brault2}, we have studied the properties of \emph{stable almost flows}, a condition ensuring that compositions of the almost flows over small times remain Lipschitz, uniformly in the choice of the composition. We have also studied the relationship between stable almost flows and solutions to~\eqref{eq:rde:intro}, which are unique in this case.
The goal of this article is threefold: \begin{itemize}[leftmargin=1em] \item We extend the notion of almost flow. We also continue our study of \emph{D-solutions}, that is, paths $z$ solutions to \eqref{eq:rde:intro} satisfying \begin{equation} \label{eq:intro:1} \abs{z_t-\phi_{t,s}(z_s)}\leq C\abs{t-s}^\theta,\ \forall s\leq t\text{ with } \theta>1, \end{equation} for an almost flow $\phi$. This notion of solution was introduced by A.~M.~Davie in \cite{davie05a}. Here, we focus on continuity and approximations of D-solutions when $\phi$ is a stable almost flow. Besides, we construct a functional $\Phi$ such that any D-solution solves the fixed point problem $z=\Phi(z)$. From this, we develop in our context the classical notions of \emph{consistency} and \emph{stability} \cite{lax,chartres}, which we relate to convergence. Unlike the classical setting for fixed points, $\Phi$ is defined \emph{only} on D-solutions. For a partition $\pi$, we also define a functional $\Phi^\pi$ such that any solution to $z^\pi=\Phi^\pi(z^\pi)$ is a discrete D-solution, that is, $z^\pi$ solves \eqref{eq:intro:1} for times $s,t$ in the partition. Such discrete D-solutions are constructed explicitly through the numerical scheme $z^\pi_{t_{k+1}}=\phi_{t_{k+1},t_k}(z^\pi_{t_k})$ when $\pi=\Set{t_k}_{k=0}^n$. By \emph{consistency}, we mean that any D-solution solves $z=\Phi^\pi(z)+\epsilon^\pi$ for a perturbative term $\epsilon^\pi$ that converges to $0$ when the mesh of the partition converges to $0$. By \emph{stability}, we mean, roughly, that $(\mathrm{Id}-\Phi^\pi)$ is invertible with an inverse uniformly bounded with respect to $\pi$. Seen as a principle \cite{chartres}, the Lax equivalence theorem \cite{lax} is valid in many situations, including ours. It provides a simple way to assert convergence through the study of consistency and stability. We then show that the notion of \emph{stable almost flow}, introduced in \cite{brault2}, leads to the stability of $\Phi^\pi$. The various estimates obtained in this part are the keys to fulfilling our second objective. \item We prove \emph{generic properties} associated to RDEs. When solved in an infinite-dimensional space, solutions to ODE are not necessarily unique \cite{dieudonne}, nor does the Euler scheme converge. Nevertheless, following some results due to W.~Orlicz \cite{orlicz} and developed later by several authors, the set of vector fields and starting points for which non-uniqueness or non-convergence of the Euler scheme holds is of Baire first category. The key point is that discrete approximations are uniformly approximated by discrete approximations in which the vector field is Lipschitz continuous. We develop a similar approach for solutions to Young equations (when the driving path is of finite $p$-variation with $p<2$) and rough equations (when the driving path is a rough path of finite $p$-variation with $2\leq p<3$). Such results exploit properties developed in the first part of this article regarding stable almost flows. \item We apply these results to Brownian flows to pursue the study of~\cite{davie05a}, mixing it with considerations from H.~Kunita~\cite{kunita_saint_flour}.
In particular, we show that for any vector field $\sigma\in\mathcal{C}_{\mathrm{b}}^{1+\gamma}$, $\gamma>0$, the solution to the Itô SDE $X_t=a+\int_0^t \sigma(X_s)\,\mathrm{d} B_s$ is also the unique D-solution to the corresponding RDE and is then associated to a Lipschitz flow. The notable points are that $\sigma$ is assumed to be less regular than for proving uniqueness through a Banach fixed point theorem, and that properties of stable almost flows are not used here. Besides, A.M. Davie proved that for almost any choice of a Brownian rough path, with suitable conditions on the underlying space, there exists a vector field for which several D-solutions exist. To summarize, there exist Lipschitz flows which are not related to stable almost flows. This question was left open in \cite{brault2}. \end{itemize} \noindent\textbf{Outline. } In Section~\ref{sec:def}, we introduce objects and notations that we use throughout the article. In Section~\ref{sec:D-sol}, we define D-solutions, and show that they are solutions to a fixed point problem involving suitable functionals whose consistency, stability and convergence are studied. Generic properties are studied in Section~\ref{sec:generic}. In Section~\ref{sec:br-flow}, we study Brownian flows and show that they fit into our framework. We end with an appendix with general considerations on unbounded flows, boundedness of solutions, as well as uniqueness of D-solutions. \section{Definitions and notations} \label{sec:def} We introduce some notations and global hypotheses (in force throughout the whole article) which (partly) follow those of \cite{brault1,brault2}. \begin{notation}[Simplex] For $C$ an interval of $\mathbb{R}$, we set $C^2_+\mathbin{\vcentcolon=}\Set{(s,t)\in C^2\given s\leq t}$ and $C^3_+\mathbin{\vcentcolon=}\Set{(r,s,t)\in C^3\given r\leq s\leq t}$. \end{notation} \begin{notation} We use \begin{itemize}[leftmargin=1em] \item Two non-decreasing functions $\delta$ and $\varpi$ from $\mathbb{R}_+$ to $\mathbb{R}_+$ with $\delta(0)=\varpi(0)=0$. We write indifferently $\delta_t$ or $\delta(t)$, $t\geq 0$, whenever it is convenient. \item A \emph{time horizon} $T>0$ and $\mathbb{T}:=[0,T]$. \item A map $\omega:\mathbb{T}_+^2\to\mathbb{R}_+$ (a \emph{control}) which is super-additive ($\omega_{r,s}+\omega_{s,t}\leq \omega_{r,t}$ for any $(r,s,t)\in\mathbb{T}_+^3$), continuous close to its diagonal, and such that $\omega_{s,s}=0$ for all $s\in\mathbb{T}$. \end{itemize} \end{notation} \begin{ghypothesis}[Controls over growth and remainder] \label{hyp:4} For some $\varkappa\in(0,1)$, $2\varpi(x/2)\leq \varkappa\varpi(x)$ for any $x\geq 0$. \end{ghypothesis} \begin{remark} \label{rem:3} Since $\varkappa<1$, $\varpi(x)/x$ converges to $0$ as $x$ converges to $0$. \end{remark} \begin{ghypothesis}[Time horizon] \label{hyp:time} The time horizon $T$ satisfies \begin{equation} \label{eq:time} \varkappa+2\delta_T<1. \end{equation} \end{ghypothesis} Let $\mathrm{V}$ be a Banach space with the norm $\abs{\cdot}$ and $\mathfrak{i}$ be the identity map from $\mathrm{V}$ to~$\mathrm{V}$.
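A simple instance of Global Hypothesis~\ref{hyp:4}, consistent with Example~\ref{ex:2} below: for $\varpi(x)=x^\theta$ with $\theta>1$, one has $2\varpi(x/2)=2^{1-\theta}x^\theta$, so that one may take $\varkappa=2^{1-\theta}<1$; a typical choice of control is then $\omega_{s,t}=t-s$.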
\begin{notation}[Modulus of continuity, Lipschitz and Hölder norms] The \emph{modulus of continuity} of a function $f:\mathrm{V}\to\mathrm{V}$ is \begin{equation*} \osc(f,\delta)\mathbin{\vcentcolon=} \sup_{\substack{a,b\in\mathrm{V}\\\abs{a-b}\leq \delta}}\abs{f(a)-f(b)} \text{ for any $\delta>0$.} \end{equation*} Its $\alpha$-Hölder semi-norm ($0<\alpha\leq 1$) and its Lipschitz semi-norm are defined as \begin{equation*} \normhold{\alpha}{f}\mathbin{\vcentcolon=} \sup_{a\neq b}\frac{\abs{f(a)-f(b)}}{\abs{a-b}^\alpha} \text{ and } \normlip{f}\mathbin{\vcentcolon=} \sup_{a\neq b}\frac{\abs{f(a)-f(b)}}{\abs{a-b}} \end{equation*} when these quantities are finite. Moreover, if $f$ is bounded, we denote $\normsup{f}:=\sup_{a}\abs{f(a)}$. \end{notation} We consider several families $\chi$ of objects indexed by $\mathbb{T}_+^2$ (almost flows, controls, ...). When these objects are functions from $\mathrm{V}$ to $\mathrm{V}$, we write the pair $(r,t)\in\mathbb{T}_+^2$ in reverse order, that is $\chi_{t,r}$, as the composition of functions is usually written from right to left. Other objects are written with indices in order, that is $\chi_{r,t}$. \begin{definition}[Functions of class $\mathcal{O}$] \label{not:6} A function $\chi$ from $\mathbb{T}_+^2$ to $\mathcal{C}(\mathrm{V},\mathrm{V})$ is said to be \emph{of class $\mathcal{O}$} if there exists a constant $C\geq 0$ such that \begin{equation} \label{eq:defO} \osc(\chi_{t,s},L\varpi(\omega_{r,s}))\leq C\delta_T (1+L)\varpi(\omega_{r,t}), \ \forall (r,s,t)\in\mathbb{T}_+^3,\ \forall L\geq 0. \end{equation} The smallest constant $C$ such that \eqref{eq:defO} holds is denoted by $\normO{\chi}$. \end{definition} \begin{definition}[Semi-norm on functions of class $\mathcal{O}$] We define \begin{equation*} \mathcal{O}(\mathrm{V},\mathrm{V})\mathbin{\vcentcolon=} \Set{\chi:\mathbb{T}_+^2\to\mathcal{C}(\mathrm{V},\mathrm{V})\given \chi\text{ is of class }\mathcal{O} }, \end{equation*} which is a vector space with a semi-norm $\normO{\cdot}$. \end{definition} \begin{example} \label{ex:1} Let $\chi_{t,r}$ be Lipschitz with $\normlip{\chi_{t,r}}\leq K$ for any $(r,t)\in\mathbb{T}_+^2$. Then $\chi\in\mathcal{O}(\mathrm{V},\mathrm{V})$ with $\normO{\chi}=K/\delta_T$. \end{example} \begin{example} \label{ex:2} Let $x:\mathbb{T}\to\mathrm{U}$ be $\alpha$-Hölder continuous and $f:\mathrm{V}\to L(\mathrm{U},\mathrm{V})$ be $\gamma$-Hölder continuous with $\theta\mathbin{\vcentcolon=} \alpha(1+\gamma)>1$. Let $\varpi(x)\mathbin{\vcentcolon=} x^\theta$ and $\omega_{s,t}=t-s$. For $a\in\mathrm{V}$ and $(s,t)\in\mathbb{T}_+^2$, set $\chi_{t,s}(a)\mathbin{\vcentcolon=} f(a)x_{s,t}$, where $x_{s,t}\mathbin{\vcentcolon=} x_t-x_s$. Then \begin{equation} \label{eq:51} \abs{\chi_{t,s}(a)-\chi_{t,s}(b)} \leq \normhold{\gamma}{f}\cdot\normhold{\alpha}{x} \abs{a-b}^\gamma(t-s)^\alpha,\ \forall (s,t)\in\mathbb{T}_+^2. \end{equation} With $\delta_T\mathbin{\vcentcolon=} \normhold{\gamma}{f}\cdot\normhold{\alpha}{x}T^{\alpha\gamma^2}$, it follows from \eqref{eq:51} that $\chi$ is of class $\mathcal{O}$ with $\normO{\chi}\leq\gamma^\gamma/(1-\gamma)^{1-\gamma}$.
\end{example} \begin{notation} \label{not:1} Let $\mathcal{F}[\delta]$ be the class of families $\phi\mathbin{\vcentcolon=}\Set{\phi_{t,s}}_{(s,t)\in\mathbb{T}_+^2}$ of functions from~$\mathrm{V}$ to~$\mathrm{V}$ which satisfy \begin{gather} \label{eq:def:1} \phi_{t,s}=\mathfrak{i}+\widehat{\phi}_{t,s}\text{ with }\widehat{\phi}\in\mathcal{O}(\mathrm{V},\mathrm{V}) \text{ and }\normO{\widehat{\phi}}\leq 1,\\ \label{eq:def:3} \normsup{\widehat{\phi}_{t,s}}\leq \delta_{t-s}, \ \forall (s,t)\in\mathbb{T}_+^2. \end{gather} The set $\mathcal{F}\mathbin{\vcentcolon=}\bigcup_{\delta}\mathcal{F}[\delta]$, union over all the functions $\delta$ as in Global Hypothesis~\ref{hyp:4} (which is stable under addition), is equipped with the distance \begin{equation} \label{eq:def:5} d_\infty(\phi,\psi)\mathbin{\vcentcolon=}\sup_{(s,t)\in\mathbb{T}_+^2} \sup_{a\in\mathrm{V}} \abs{\phi_{t,s}(a)-\psi_{t,s}(a)}. \end{equation} \end{notation} \begin{remark} In Appendix~\ref{sec:aflinear}, we justify that, unlike in~\cite{brault1}, assuming that $\widehat{\phi}$ is bounded can be done without loss of generality. \end{remark} \begin{definition}[Galaxy] \label{def:gal:1} Let $\phi,\psi\in\mathcal{F}$. We say that $\phi$ and $\psi$ are in the same \emph{galaxy} if there exists $K\geq 0$ such that \begin{equation} \label{eq:gal:1} \normsup{\phi_{t,s}-\psi_{t,s}}\leq K\varpi(\omega_{s,t}),\ \forall(s,t)\in\mathbb{T}_+^2. \end{equation} \end{definition} \begin{definition}[Almost flow] \label{def:almost-flow} We fix $M\geq 0$. Let $\mathcal{A}[\delta,M]$ be the set of $\phi\in\mathcal{F}[\delta]$ such that \begin{gather} \label{eq:def:2} \normsup{\mathfrak{d}\phi_{t,s,r}}\leq M\varpi(\omega_{r,t}), \ \forall (r,s,t)\in\mathbb{T}^3_+\\ \text{ with } \label{eq:phitsr} \mathfrak{d}\phi_{t,s,r}\mathbin{\vcentcolon=} \phi_{t,s}\circ\phi_{s,r}-\phi_{t,r}. \end{gather} We write $\mathcal{A}\mathbin{\vcentcolon=}\bigcup_{\delta,M}\mathcal{A}[\delta,M]$. An element of $\mathcal{A}$ is called an \emph{almost flow}. \end{definition} \begin{remark} Combining Examples~\ref{ex:1} and \ref{ex:2}, it is easily seen that this definition generalizes the one of \cite{brault1}. \end{remark} \begin{definition}[Flow] A flow is a family $\psi:\mathbb{T}_+^2\times\mathrm{V}\to\mathrm{V}$ which satisfies $\mathfrak{d}\psi_{t,s,r}(a)=0$ for any $a\in\mathrm{V}$ and any $(r,s,t)\in\mathbb{T}_+^3$. \end{definition} \begin{remark} For $i=1,2,3$, let $\mathfrak{M}_i$ be the set of maps from $\mathbb{T}_+^i\times \mathrm{V}$ to $\mathrm{V}$. The operator $\mathfrak{d}$ transforms maps in $\mathfrak{M}_2$ to maps in $\mathfrak{M}_3$. It is a non-linear generalization of the sewing operator introduced by M.~Gubinelli in \cite{gub04}. We use it as a shorthand. Yet it has also the following meaning. For a family of invertible maps $\alpha$ in $\mathfrak{M}_1$, we set $\mathfrak{d}\alpha_{t,s}=\alpha_t\circ\alpha_s^{-1}$, $(s,t)\in\mathbb{T}_+^2$ so that $\mathfrak{d}\alpha\in\mathfrak{M}_2$. Conversely, for an invertible flow $\psi\in\mathfrak{M}_2$, we set $\alpha_t\mathbin{\vcentcolon=}\psi_{t,0}$, $t\in\mathbb{T}$ so that $\mathfrak{d}\alpha=\psi$. Hence, invertible flows belong both to the range of $\mathfrak{d}:\mathfrak{M}_1\to\mathfrak{M}_2$ and the kernel of $\mathfrak{d}:\mathfrak{M}_2\to\mathfrak{M}_3$. When $\mathfrak{d}\phi$ is \textquote{close} to $0$ for an almost flow $\phi$, a \emph{non-linear sewing map} projects $\phi$ to a flow $\psi$, which thus satisfies $\mathfrak{d}\psi=0$.
\end{remark} \begin{notation} \label{not:partition} The elements of a partition $\pi=\Set{t_i}_{i=0}^{n}$ of $\mathbb{T}$ are written either as the points $t_i$ or as the closed intervals $[t_i,t_{i+1}]$ of successive points. For a family $\Set{y_t}_{t\in\mathbb{T}}$, we write $y_i\mathbin{\vcentcolon=} y_{t_i}$ when no ambiguity arises. We use the same convention for functions over $\mathbb{T}_+^2$ or $\mathbb{T}_+^3$. For a family $\Set{f_{s,t}}_{(s,t)\in\mathbb{T}_+^2}$, we write either $\sum_{i=0}^{n-1} f_{i,i+1}$ or $\sum_{[u,v]\in\pi} f_{u,v}$ instead of $\sum_{i=0}^{n-1} f_{t_i,t_{i+1}}$ when there is no ambiguity. \end{notation} \begin{definition}[Solution in the sense of Davie, or D-solution] Let $n\geq 1$. For an almost flow $\phi\in\mathcal{A}$, a partition $\pi=\Set{t_k}_{k=0}^n$ of $\mathbb{T}$ and $K\geq 0$, we denote by $\mathcal{P}_\pi[\phi,a,K]$ the set of $\mathrm{V}$-valued families $\Set{y_{t_k}}_{k=0,\dotsc,n}$ such that $y_0=a$ and \begin{equation} \label{eq:1disc} \abs{y_j-\phi_{j,i}(y_i)}\leq K \varpi(\omega_{i,j}),\ \forall 0\leq i\leq j\leq n. \end{equation} We also set $\mathcal{P}_\pi[\phi,a]\mathbin{\vcentcolon=}\bigcup_{K\geq 0}\mathcal{P}_\pi[\phi,a,K]$. Similarly, we denote by $\mathcal{P}[\phi,a]$ the set of paths $y\in\mathcal{C}(\mathbb{T},\mathrm{V})$ with $y_0=a$ and \begin{equation} \label{eq:1} \abs{y_t-\phi_{t,s}(y_s)}\leq K\varpi(\omega_{s,t}),\ \forall (s,t)\in\mathbb{T}_+^2, \end{equation} for some constant $K\geq 0$. The elements of $\mathcal{P}_\pi[\phi,a]$ and $\mathcal{P}[\phi,a]$ are called \emph{solutions in the sense of Davie}, which we shorten by \emph{D-solutions}. \end{definition} \begin{definition}[Numerical scheme] \label{def:numerical-scheme} Given a partition $\pi=\Set{t_i}_{i=0}^n$ of $\mathbb{T}$, the \emph{numerical scheme} of an almost flow $\phi\in\mathcal{A}$ is the sequence $\Set{y_{t_k}}_{k=0,\dotsc,n}$ constructed iteratively by \begin{equation*} y_0=a\text{ and }y_{t_{k+1}}=\phi_{t_{k+1},t_{k}}(y_{t_k}),\ k=0,\dotsc,n-1. \end{equation*} \end{definition} We now define the notion of convergence of partitions. \begin{definition}[Mesh and convergence] \label{def:mesh} For a partition $\pi=\Set{t_i}_{i=0}^n$ of $\mathbb{T}$, we define its \emph{mesh} by $\mesh\pi\mathbin{\vcentcolon=}\max_{i=0,\dotsc,n-1}\Set{t_{i+1}-t_i}$. This defines an order on partitions: $\sigma\leq \pi$ if $\mesh\sigma\leq \mesh\pi$. A family $\Set{a_\pi}_{\pi}$ with values in a metric space $(\mathrm{V},d)$ is said to \emph{converge} to $a\in\mathrm{V}$ whenever for any $\epsilon>0$ there exists a partition $\pi$ such that for any $\sigma\leq \pi$, $d(a_\sigma,a)\leq \epsilon$. \end{definition} \begin{remark} Inclusion defines another partial order on partitions \cite{mcshane52a}. We do not use it, except as a tool in some proofs. \end{remark} \section{Stability results on D-solutions} \label{sec:D-sol} \subsection{Space of D-solutions} We start by giving some precisions on the discrete and continuous spaces of D-solutions. \begin{lemma}[{The spaces $\mathcal{P}_{\pi}[\phi,a]$ are not empty}] \label{lem:2} For any almost flow $\phi\in\mathcal{A}[\delta,M]$, for any partition $\pi$ of $\mathbb{T}$ and any $a\in\mathrm{V}$, the numerical scheme~$y^\pi$ associated to $\phi$ with $y^\pi_0=a$ belongs to $\mathcal{P}_\pi[\phi,a,L]$ with \begin{equation} \label{eq:L} L\mathbin{\vcentcolon=} \frac{2(\delta_T+M)}{1-\varkappa-2\delta_T}. \end{equation} Moreover, if $\psi$ is in the same galaxy as $\phi$, then $\mathcal{P}_\pi[\psi,a]=\mathcal{P}_\pi[\phi,a]$.
\end{lemma} The proof of this result is a variant of the one of the Davie lemma given in \cite{brault1,brault2}. \begin{proof} We set $U_{i,j}\mathbin{\vcentcolon=} \abs{y_j-\phi_{j,i}(y_i)}$ for $i\leq j$. Following \cite{davie05a,brault1}, we proceed by induction on $j-i$. First, we remark that $U_{i,i}=U_{i,i+1}=0$. Second, for $i\leq j\leq k$ with $i<k$, \begin{multline} \label{eq:39} y_k-\phi_{k,i}(y_i) \\ =y_k-\phi_{k,j}(y_j) +\phi_{k,j}(y_j)-\phi_{k,j}(\phi_{j,i}(y_i)) +\phi_{k,j}(\phi_{j,i}(y_i))-\phi_{k,i}(y_i) \\ = y_k-\phi_{k,j}(y_j) +y_j-\phi_{j,i}(y_i) +\widehat{\phi}_{k,j}(y_j)-\widehat{\phi}_{k,j}(\phi_{j,i}(y_i))\\ +\phi_{k,j}(\phi_{j,i}(y_i))-\phi_{k,i}(y_i). \end{multline} Our induction hypothesis is that $U_{i,j}\leq L\varpi(\omega_{i,j})$ when $\abs{j-i}\leq m$ for some level $m$, where $L$ is defined in \eqref{eq:L}. This is true for $m=0,1$. Assume that the induction hypothesis is true whenever $j-i\leq m$ for a level $m\geq 1$. We fix $i<k$ such that $\abs{k-i}\leq m+1$. We are going to show that $U_{i,k}\leq L\varpi(\omega_{i,k})$. If $\omega_{i,k}=0$, it follows by super-additivity of the control $\omega$ that $\omega_{i,k-1}=\omega_{k-1,k}=0$. According to the induction hypothesis, this implies that $U_{i,k-1}=U_{k-1,k}=0$. Then, using \eqref{eq:39} with $(i,j,k)=(i,k-1,k)$ and \eqref{eq:defO}, we get \begin{equation} \label{eq:5} U_{i,k}\leq \delta_T(1+L)\varpi(\omega_{i,k})+M\varpi(\omega_{i,k}). \end{equation} Since $\varpi(\omega_{i,k})=\varpi(0)=0$, it follows that $U_{i,k}=0$, so that $U_{i,k}\leq L\varpi(\omega_{i,k})$ holds. If $\omega_{i,k}>0$, let us define $j^*\mathbin{\vcentcolon=} \inf\left\{j\in \Set{i+1,\dotsc,k}\textrm{ such that } \omega_{i,j}>\frac{1}{2}\omega_{i,k}\right\}$. It follows from the definition of $j^*$ and from the super-additivity of $\omega$ that $\omega_{j^*,k}\leq \frac{1}{2}\omega_{i,k}$ and $\omega_{i,j^*-1}\leq \frac{1}{2}\omega_{i,k}$. We consider two cases: either $j^*<k$ or $j^*=k$. For the first case, using the fact that $\phi$ is an almost flow, \eqref{eq:defO} for $(r,s,t)=(i,j^*,k)$ together with $\normO{\widehat{\phi}}\leq 1$, and the equality~\eqref{eq:39} when $j=j^*$, \begin{align} \label{eq:2} U_{i,k}&\leq U_{i,j^*}+U_{j^*,k}+\osc\left(\widehat{\phi}_{k,j^*},U_{i,j^*}\right)+M\varpi(\omega_{i,k})\\ &\leq U_{i,j^*}+U_{j^*,k}+\delta_T (1+L)\varpi(\omega_{i,k})+M\varpi(\omega_{i,k}). \end{align} Then, we control $U_{i,j^*}$ in \eqref{eq:2} using \eqref{eq:39} with $(i,j,k)=(i,j^*-1,j^*)$, \begin{equation} \label{eq:Uj*} U_{i,k}\leq U_{i,j^*-1}+U_{j^*,k}+\delta_T(1+L)(\varpi(\omega_{i,j^*})+\varpi(\omega_{i,k}))+M\varpi(\omega_{i,j^*})+M\varpi(\omega_{i,k}). \end{equation} We now apply the induction hypothesis to $U_{i,j^*-1}$ and $U_{j^*,k}$ in \eqref{eq:Uj*}, and we use Global Hypothesis~\ref{hyp:4} to get \begin{equation} \label{eq:case1} U_{i,k}\leq \varkappa L\varpi(\omega_{i,k})+2\delta_T(1+L)\varpi(\omega_{i,k})+2M\varpi(\omega_{i,k}). \end{equation} Thus, with $L$ given by \eqref{eq:L}, $U_{i,k}\leq L\varpi(\omega_{i,k})$. In the second case, when $j^*=k$, we use \eqref{eq:39} with $j=k-1$ and \eqref{eq:defO} to get \begin{equation} \label{eq:3} U_{i,k}\leq U_{i,k-1}+\delta_T(1+L)\varpi(\omega_{i,k})+M\varpi(\omega_{i,k}). \end{equation} Thus, applying the induction hypothesis in \eqref{eq:3} to $U_{i,k-1}$, \begin{equation} \label{eq:4} U_{i,k}\leq \frac{\varkappa L}{2}\varpi(\omega_{i,k})+\delta_T(1+L)\varpi(\omega_{i,k})+M\varpi(\omega_{i,k}). \end{equation} The right-hand side of \eqref{eq:4} is smaller than the one of \eqref{eq:case1}. It follows from the first case that $U_{i,k}\leq L\varpi(\omega_{i,k})$ with the same constant $L$.
This concludes the induction. Therefore, the numerical scheme associated to $\phi$ belongs to~$\mathcal{P}_\pi[\phi,a,L]$. That $\mathcal{P}_\pi[\phi,a]=\mathcal{P}_\pi[\psi,a]$ is immediate from \eqref{eq:gal:1}. \end{proof} The next result is a direct consequence of the continuous time Davie lemma \cite[Lemma~10]{brault2}. \begin{lemma}[Uniform control on D-solutions] \label{lem:1} Consider $\phi\in\mathcal{A}[\delta,M]$. Assume that for some $A>0$, $y\in\mathcal{P}[\phi,a,A]$. Then $y\in\mathcal{P}[\phi,a,L]$ with $L$ given by \eqref{eq:L}. Therefore, $\mathcal{P}[\phi,a]=\bigcup_{A\leq L}\mathcal{P}[\phi,a,A]$. \end{lemma} \begin{notation}[Projection and interpolation] \label{not:5} Let $\pi$ and $\sigma$ be two partitions of $\mathbb{T}$ with $\pi\subset \sigma$. Any path $y$ in $\mathcal{P}_\sigma[\phi,a]$ or in $\mathcal{P}[\phi,a]$ is naturally projected onto $\Set{y_{t_i}}_{i=0}^n$ in $\mathcal{P}_\pi[\phi,a]$. Conversely, any element $y\in\mathcal{P}_\pi[\phi,a]$ is extended through a linear interpolation as an element of $\mathcal{C}([0,T],\mathrm{V})$. Again, we still denote this element by~$y$. \end{notation} Using the above convention on projection and extension, we endow $\mathcal{P}_\pi[\phi,a]$ with the uniform norm $\normsup{\cdot}$. The proofs of the next lemmas are then immediate. \begin{lemma}[Convergence] \label{lem:convergence} Let $K\geq 0$. Let $\Set{y^\pi}_{\pi}$ be a sequence of paths in $\mathcal{P}_\pi[\phi,a,K]$ and $y\in\mathcal{C}([0,T],\mathrm{V})$ such that $y^\pi$ converges in $\normsup{\cdot}$ to $y$. Then $y\in\mathcal{P}[\phi,a,K]$. \end{lemma} \begin{lemma}[Convergence II] \label{lem:convergence2} Let us consider $K,M\geq 0$. For each $n\in\mathbb{N}$, let us consider $\phi^n\in\mathcal{A}[\delta,M]$, $a^n\in\mathrm{V}$ and $y^n\in\mathcal{P}[\phi^n,a^n,K]$. Let $\phi\in\mathcal{A}[\delta,M]$ and $a\in\mathrm{V}$. Assume that for some path $y\in\mathcal{C}(\mathbb{T},\mathrm{V})$, \begin{equation*} d_\infty(\phi^n,\phi)+\abs{a^n-a}+\normsup{y^n-y}\xrightarrow[n\to\infty]{}0. \end{equation*} Then $y\in\mathcal{P}[\phi,a,K]$. \end{lemma} \subsection{From discrete to continuous functionals on D-solutions} In this section, we construct functionals on $\mathcal{P}_\pi[\phi,a]$ and thus on $\mathcal{P}[\phi,a]$ using a limit argument. These functionals are to be seen as integrals that are defined only on D-solutions, unlike Young or rough integrals. \begin{proposition} \label{prop:2} Let $\phi\in\mathcal{A}[\delta,M]$ and $\pi$ be a partition of $\mathbb{T}$. Recall that $\widehat{\phi}$ is defined by~\eqref{eq:def:1}. Let us set, for $(i,j)\in\pi^2_+$, \begin{equation*} \Phi^\pi_{i,j}(y)\mathbin{\vcentcolon=}\sum_{k=i}^{j-1} \widehat{\phi}_{k+1,k}(y_{k})\text{ for } y=\Set{y_{i}}_{i=0,\dotsc,n}\in\mathrm{V}^{n+1}. \end{equation*} For $y\in\mathcal{P}_\pi[\phi,a,K]$, \begin{gather} \label{eq:37} \abs{\Phi^\pi_{i,j}(y)-\widehat{\phi}_{j,i}(y_i)}\leq A \varpi(\omega_{i,j})\text{ for any }(i,j)\in\pi_+^2 \\ \label{eq:38} \text{ with }A\mathbin{\vcentcolon=} \frac{2(\delta_T(1+K)+M)}{1-\varkappa}. \end{gather} \end{proposition} \begin{remark} \label{rem:5} We saw in Lemma~\ref{lem:2} that the numerical scheme $y^\pi$ associated to $\phi$ with $y^\pi_0=a$ belongs to $\mathcal{P}_\pi[\phi,a,L]$ with $L$ given by \eqref{eq:L}. Therefore, from the very construction of $y^\pi$: $y^\pi_{j}=a+\Phi_{0,j}^\pi(y^\pi)$.
\end{remark} \begin{proof} From the very definition of $\Phi^\pi$, \begin{equation} \label{eq:20} \Phi_{i,j}^\pi(y)+\Phi_{j,k}^\pi(y)=\Phi_{i,k}^\pi(y) \text{ for }(i,j,k)\in\pi_+^3, \end{equation} meaning that $\Phi^\pi$ is additive on the partition $\pi$. For any $(r,s,t)\in\mathbb{T}_+^3$ and $a\in\mathrm{V}$, \begin{equation*} \mathfrak{d}\phi_{t,s,r}(a)= \phi_{t,s}(\phi_{s,r}(a))-\phi_{t,r}(a) = \widehat{\phi}_{s,r}(a) + \widehat{\phi}_{t,s}(a+\widehat{\phi}_{s,r}(a)) -\widehat{\phi}_{t,r}(a). \end{equation*} Thus, for $(i,j,k)\in\pi_+^3$, \begin{equation} \label{eq:20bis} \widehat{\phi}_{k,j}(y_j) +\widehat{\phi}_{j,i}(y_i) -\widehat{\phi}_{k,i}(y_i) = \widehat{\phi}_{k,j}(y_j) -\widehat{\phi}_{k,j}(y_i+\widehat{\phi}_{j,i}(y_i)) +\mathfrak{d}\phi_{k,j,i}(y_i). \end{equation} Note that $y_j-y_i-\widehat{\phi}_{j,i}(y_i)=y_j-\phi_{j,i}(y_i)$. Since $\phi\in\mathcal{A}[\delta,M]$ and $y\in\mathcal{P}_\pi[\phi,a,K]$, \eqref{eq:defO}, \eqref{eq:def:1} and \eqref{eq:def:2} yield \begin{multline} \label{eq:19} \abs{\widehat{\phi}_{k,j}(y_j) +\widehat{\phi}_{j,i}(y_i) -\widehat{\phi}_{k,i}(y_i) } \leq \delta_T\normO{\widehat{\phi}}\Paren*{1+K}\varpi(\omega_{i,k}) +M\varpi(\omega_{i,k}) \\ \leq (\delta_T(1+K)+M)\varpi(\omega_{i,k}), \end{multline} because $\normO{\widehat{\phi}}\leq 1$. Combining \eqref{eq:20} with \eqref{eq:19} implies that $V^\pi_{i,j}\mathbin{\vcentcolon=}\abs{\Phi_{i,j}^\pi(y)-\widehat{\phi}_{j,i}(y_i)}$ satisfies \begin{equation*} V_{i,k}^\pi\leq V_{i,j}^\pi+V_{j,k}^\pi+(\delta_T(1+K)+M)\varpi(\omega_{i,k}). \end{equation*} Hence, \eqref{eq:37} stems from the Davie lemma \cite[Lemma~9]{brault2}. \end{proof} \begin{notation} \label{not:3} For a partition $\pi=\Set{t_i}_{i=0,\dotsc,n}$ of $\mathbb{T}$, we set \begin{equation} \label{eq:mu} \mu_{s,t}(\pi)\mathbin{\vcentcolon=} \sup_{[t_i,t_{i+1}]\in \pi\cap [s,t]} \frac{\varpi(\omega_{t_i,t_{i+1}})}{\omega_{t_i,t_{i+1}}}. \end{equation} \end{notation} \begin{remark} \label{rem:3:bis} With Remark~\ref{rem:3}, $\mu_{s,t}(\pi)\to 0$ when $\mesh{\pi}\to 0$. \end{remark} Let us consider a partition $\pi=\Set{t_i}_{i=0}^n$ of $\mathbb{T}$. Using a linear interpolation, $\Phi^\pi_{s,t}(y)$ is naturally extended from $\pi_+^2$ to $\mathbb{T}_+^2$. Therefore, we extend to $\mathbb{T}_+^2$ the family $\Phi^\pi$ as functionals on $\mathcal{P}[\phi,a]$ or on $\mathcal{P}_\sigma[\phi,a]$ with $\pi\subset\sigma$. \begin{corollary}[Consistency] \label{cor:3} Assuming Hypothesis~\ref{hyp:time} and $\phi\in\mathcal{A}[\delta,M]$, there exists $\Phi:\mathcal{P}[\phi,a]\to\mathcal{C}([0,T],\mathrm{V})$ such that for any partition $\pi$ of $\mathbb{T}$, any $K\geq 0$ and any $y\in\mathcal{P}[\phi,a,K]$, \begin{gather} \label{eq:34} \abs{\Phi_{s,t}(y)-\widehat{\phi}_{t,s}(y_s)}\leq A\varpi(\omega_{s,t}), \ \forall (s,t)\in\mathbb{T}_+^2,\\ \label{eq:35} \abs{\Phi_{s,t}(y)-\Phi^\pi_{s,t}(y)}\leq A \mu_{s,t}(\pi)\omega_{s,t}, \ \forall (s,t)\in\mathbb{T}_+^2,\\ \label{eq:chasles} \text{and } \Phi_{r,s}(y)+\Phi_{s,t}(y)=\Phi_{r,t}(y)\text{ for }(r,s,t)\in\mathbb{T}_+^3, \end{gather} with $A$ given by \eqref{eq:38}. Condition \eqref{eq:35} means that $\Phi^\pi$ is \emph{consistent}. \end{corollary} \begin{remark} This result does not claim that $\mathcal{P}[\phi,a]\neq\emptyset$. When $\mathrm{V}$ is finite dimensional, the Ascoli-Arzelà theorem and thus Lemma~\ref{lem:convergence} apply: the equi-continuity and boundedness of $\Set{y^\pi}_\pi$ with $y^\pi\in\mathcal{P}_\pi[\phi,a,L]$ are a direct consequence of~\eqref{eq:def:3} and~\eqref{eq:1disc}.
When $\mathrm{V}$ is infinite dimensional, we discuss this point in Section~\ref{sec:generic}. \end{remark} \begin{proof} Let $\sigma$ and $\pi$ be two partitions such that $\pi\subset\sigma$. For $(s,t)\in\pi_+^2$ and $y\in\mathcal{P}[\phi,a,L]\subset\mathcal{P}_\sigma[\phi,a,L]\subset\mathcal{P}_\pi[\phi,a,L]$ (using the identification of Notation~\ref{not:5}), \begin{multline*} \abs{\Phi_{s,t}^\sigma(y)-\Phi^\pi_{s,t}(y)} = \abs*{\sum_{[u,v]\in\pi} \left[\sum_{[u',v']\in\sigma\cap[u,v]} \widehat{\phi}_{v',u'}(y_{u'}) -\widehat{\phi}_{v,u}(y_{u})\right]} \\ = \abs*{\sum_{[u,v]\in\pi}\Paren{\Phi^{\sigma\cap[u,v]}_{u,v}(y) -\widehat{\phi}_{v,u}(y_{u})}} \\ \leq \sum_{[u,v]\in\pi} A\varpi(\omega_{u,v}) \leq A\mu_{s,t}(\pi) \omega_{s,t} \xrightarrow[\mesh{\pi}\to 0]{}0 \end{multline*} for $A$ given by \eqref{eq:38}. From this, it is easily deduced that, for any $(s,t)\in\mathbb{T}_+^2$, $\Set{\Phi^\pi_{s,t}(y)}_{\pi}$ is a Cauchy sequence with respect to nested sequences of partitions. We set $\Phi_{s,t}(y)\mathbin{\vcentcolon=} \lim_{\mesh{\pi}\to 0} \Phi_{s,t}^\pi(y)$. For any partition $\pi$, \eqref{eq:35} is satisfied and so is \eqref{eq:34} by taking $\pi=\Set{0,s,t,T}$. We then set $\Phi_t(y)\mathbin{\vcentcolon=}\Phi_{0,t}(y)$. The Chasles relation \eqref{eq:chasles} is satisfied because $\Phi^\pi$ satisfies the discrete Chasles relation \eqref{eq:20}. Combining \eqref{eq:chasles} and \eqref{eq:34}, $\Phi_{s,t}(y)$ is uniquely defined thanks to the Additive Sewing Lemma~(see \textit{e.g.} \cite{lyons98a,gub04} or~\cite[Theorem~1, p.~25]{feyel} or~\cite[Lemma~4.2 p.~51]{friz14a}). \end{proof} \begin{proposition} \label{prop:3} We assume Hypothesis~\ref{hyp:time} and $\phi$ an almost flow in $\mathcal{A}$. A path $y\in\mathcal{C}(\mathbb{T},\mathrm{V})$ satisfies $y_{t}=a+\Phi_{0,t}(y)$, $t\in\mathbb{T}$, if and only if $y\in\mathcal{P}[\phi,a]$. \end{proposition} \begin{proof} If $y\in\mathcal{P}[\phi,a]$, both $\Set{y_{s,t}\mathbin{\vcentcolon=} y_t-y_s}_{(s,t)\in\mathbb{T}_+^2}$ and $\Set{\Phi_{s,t}(y)}_{(s,t)\in\mathbb{T}_+^2}$ are additive functionals $z$ satisfying $\abs{z_{s,t}-\widehat{\phi}_{t,s}(y_s)}\leq C\varpi(\omega_{s,t})$ for $(s,t)\in\mathbb{T}_+^2$ and some constant $C\geq 0$. From the Additive Sewing Lemma (see \textit{e.g.} \cite{lyons98a,gub04} or \cite[Theorem~1, p.~25]{feyel} or \cite[Lemma~4.2 p.~51]{friz14a}), they are equal, so that $y_t=a+\Phi_{0,t}(y)$ for $t\in\mathbb{T}$. Conversely, if $y_t=a+\Phi_{0,t}(y)$ for $t\in\mathbb{T}$, then $y_{s,t}=\Phi_{s,t}(y)$ by \eqref{eq:chasles} and, with~\eqref{eq:34}, $\abs{y_{s,t}-\widehat{\phi}_{t,s}(y_s)} \leq A\varpi(\omega_{s,t})$, meaning that $y\in\mathcal{P}[\phi,a,A]$. \end{proof} \subsection{Stability and convergence of discrete approximations} We recover the general principle that consistency and stability yield convergence, as well as existence and uniqueness. For this, we need a stronger hypothesis on $\Phi^\pi$. We will show in Section~\ref{sec:stable-almost-flows} that this hypothesis is satisfied in the presence of \emph{stable almost flows}, as defined in \cite{brault2}. \begin{hypothesis}[Stability] \label{hyp:stability} Let $\phi\in\mathcal{A}[\delta,M]$. Let $\Phi$ and $\Set{\Phi^\pi}_{\pi}$ be the associated functionals given in Corollary~\ref{cor:3} and Proposition~\ref{prop:2}. Assume that for each partition $\pi$ of $\mathbb{T}$, $\Phi^\pi$ is Lipschitz continuous on $\mathcal{P}_\pi[\phi,a,L]$ with a constant $\ell<1$ which is uniform in $\pi$. \end{hypothesis} Thanks to the Lipschitz inverse function theorem, Hypothesis~\ref{hyp:stability} implies that $\mathrm{Id}-\Phi^\pi$ is invertible with a bounded inverse which is uniform in $\pi$. This is \emph{stability}.
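To fix ideas, here is a minimal numerical sketch (in Python, and not part of the formal development) of the scheme of Definition~\ref{def:numerical-scheme} for an almost flow of the Young type $\phi_{t,s}(a)=a+f(a)(x_t-x_s)$, as in Example~\ref{ex:2}; the choices of $f$ and of the driver $x$ below are purely illustrative.
\begin{verbatim}
import numpy as np

def f(a):
    return np.cos(a)        # illustrative bounded Lipschitz vector field

def x(t):
    return t                # illustrative (smooth, hence Holder) driver

def phi(t, s, a):
    """Almost-flow step phi_{t,s}(a) = a + f(a) (x_t - x_s)."""
    return a + f(a) * (x(t) - x(s))

def numerical_scheme(a, partition):
    """Iterate y_{k+1} = phi_{t_{k+1}, t_k}(y_k) along the partition."""
    y = [a]
    for s, t in zip(partition[:-1], partition[1:]):
        y.append(phi(t, s, y[-1]))
    return np.array(y)

T, a0 = 1.0, 0.0
for n in (10, 100, 1000):
    pi = np.linspace(0.0, T, n + 1)
    print(n, numerical_scheme(a0, pi)[-1])
\end{verbatim}
As the mesh decreases, the computed terminal values stabilize: this is the convergence granted by consistency (Corollary~\ref{cor:3}) combined with stability (Hypothesis~\ref{hyp:stability}), quantified in Proposition~\ref{prop:convergence} below.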
In Corollary~\ref{cor:stability} below, we use such a property on perturbations. We now give the rate of convergence of numerical schemes. Applied to YDE and RDE (see \cite{brault1,brault2}), we recover the already established rates of convergence: \begin{itemize} \item In \cite{davie05a}, $\varpi(x)=x^{\gamma/p}$ for a vector field in $\mathcal{C}^\gamma$ and $x$ of finite $p$-variation, $2\leq p<3$ and $1+\gamma>p$, see Remarks~1 and~3. Our estimate is an upper bound for the right-hand side of (9), namely a rate of $\gamma/p-1$. \item In \cite[Theorem~10.3.3]{friz}, a high-order expansion of order $n$ for a rough path of finite $p$-variation, $2\leq p<3$, is given with $\varpi(x)=x^{(n+1)/p}$ for a vector field of class $\mathcal{C}^{\gamma}$, $\gamma>p$ and $n=\lfloor \gamma\rfloor\geq\lfloor p\rfloor$. The rate of convergence is $(n+1)/p-1$. \item In \cite[Sect.~5, p.1789]{lejay10a}, for YDE ($1\leq p<2$) with a vector field of class $\mathcal{C}^{\gamma}$, $1+\gamma>p$, the rate of convergence is $2/p-1$ with $\varpi(x)=x^{2/p}$. \end{itemize} \begin{proposition}[Rate of convergence of approximations] \label{prop:convergence} Assume Hypothesis~\ref{hyp:stability} on stability. For each partition $\pi$ of $\mathbb{T}$, let $y^\pi$ be the numerical scheme associated to~$\phi$ with respect to~$\pi$ (see Definition~\ref{def:numerical-scheme}). Then, for any $y\in\mathcal{P}[\phi,a,L]$, \begin{equation} \label{eq:36b} \normsup{y^\pi-y}\leq \frac{A}{1-\ell}\mu_{0,T}(\pi)\omega_{0,T}, \end{equation} where $\mu$ is defined in \eqref{eq:mu} and $A$ defined by \eqref{eq:38} with $K=L$. Besides, $\Set{y^\pi}_{\pi}$ is a Cauchy sequence with respect to $\normsup{\cdot}$ as $\mesh{\pi}\to 0$ with \begin{equation} \label{eq:36} \normsup{y^\sigma-y^\pi}\leq \frac{2A}{1-\ell}\max\Set{\mu_{0,T}(\pi),\mu_{0,T}(\sigma)}\omega_{0,T} \end{equation} for any two partitions $\sigma$ and $\pi$ of $\mathbb{T}$. In consequence, $\mathcal{P}[\phi,a]=\Set{y}$ with $y=\lim_\pi y^\pi$. \end{proposition} \begin{proof} From Proposition~\ref{prop:3}, Definition~\ref{def:numerical-scheme} and Remark~\ref{rem:5}, $y\in\mathcal{P}[\phi,a,L]$ and $y^\pi\in\mathcal{P}_{\pi}[\phi,a,L]$ are respectively fixed point solutions to \begin{equation*} y_{s,t}=\Phi_{s,t}(y),\ \forall (s,t)\in\mathbb{T}_+^2 \text{ and }y^\pi_{s,t}=\Phi_{s,t}^\pi(y^\pi),\ \forall (s,t)\in\pi_+^2, \end{equation*} where $\Phi$ is given by Corollary~\ref{cor:3}. For $(s,t)\in\pi_+^2$, as $\mathcal{P}[\phi,a,L]\subset \mathcal{P}_\pi[\phi,a,L]$ (recall Notation~\ref{not:5}), \begin{equation*} y^\pi_{s,t}-y_{s,t}=\Phi^\pi_{s,t}(y^\pi)-\Phi^\pi_{s,t}(y) +\epsilon^\pi_{s,t} \text{ with }\epsilon^\pi_{s,t}\mathbin{\vcentcolon=} \Phi^\pi_{s,t}(y)-\Phi_{s,t}(y). \end{equation*} Hence, from Hypothesis~\ref{hyp:stability}, \begin{equation*} \abs{y^\pi_{s,t}-y_{s,t}}\leq \ell\normsup{y^\pi-y}+\abs{\epsilon^\pi_{s,t}}. \end{equation*} With \eqref{eq:35} in Corollary~\ref{cor:3} and since $y^\pi_0=y_0$, \begin{equation*} \normsup{y^\pi-y}\leq \ell\normsup{y^\pi-y}+A\mu_{0,T}(\pi)\omega_{0,T}. \end{equation*} As the uniform Lipschitz constant of $\Phi^\pi$ satisfies $\ell<1$ from Hypothesis~\ref{hyp:stability}, this proves \eqref{eq:36b}. If $z,y\in\mathcal{P}[\phi,a,K]$, then for any partition $\pi$ of $\mathbb{T}$, $\normsup{y-z}\leq \normsup{y-y^\pi}+\normsup{z-y^\pi}$, so that $y=z$. This proves uniqueness since $\mu_{0,T}(\pi)$ defined by \eqref{eq:mu} decreases to $0$ with the mesh of $\pi$.
To prove \eqref{eq:36}, we consider first two nested partitions $\pi$ and $\sigma$ with $\pi\subset\sigma$. We proceed as above with $y$ replaced by $y^\sigma$. When $\sigma$ and $\pi$ are arbitrary partitions, there exists a partition $\tau$ which refines both of them, that is $\sigma\subset\tau$ and $\pi\subset\tau$. The triangle inequality then yields~\eqref{eq:36}. That $\mathcal{P}[\phi,a]$ contains only the limit of $\Set{y^\pi}_{\pi}$ follows from Lemma~\ref{lem:convergence}, since $y^\pi\in\mathcal{P}_\pi[\phi,a,L]$ from Lemma~\ref{lem:2}. \end{proof} \subsection{Stable almost flows and continuity} \label{sec:stable-almost-flows} We now give a sufficient condition to ensure Hypothesis~\ref{hyp:stability}. The notion of stable almost flow was introduced in \cite{brault2}. \begin{notation}[Ratio bound] For $\generalnorm{\star}{\cdot}$ being either $\normsup{\cdot}$ or $\normlip{\cdot}$, we define for \mbox{$\phi:\mathbb{T}_+^2\to\mathcal{C}(\mathrm{V},\mathrm{V})$}, \begin{equation*} \generalnorm{\star\div\varpi}{\phi}\mathbin{\vcentcolon=} \sup_{\substack{(r,t)\in\mathbb{T}_+^2\\r\neq t}} \frac{\generalnorm{\star}{\phi_{t,r}}}{\varpi(\omega_{r,t})} \text{ and } \generalnorm{\star\div\varpi}{\mathfrak{d}\phi}\mathbin{\vcentcolon=} \sup_{\substack{(r,s,t)\in\mathbb{T}_+^3\\r\neq t}} \frac{\generalnorm{\star}{\mathfrak{d}\phi_{t,s,r}}}{\varpi(\omega_{r,t})}. \end{equation*} \end{notation} \begin{definition}[Stable almost flow] A \emph{stable almost flow} is an almost flow $\phi\in\mathcal{A}[\delta,M]$ with $\normlip{\phi_{t,s}-\mathfrak{i}}\leq \delta_T$ which satisfies \begin{gather} \label{eq:saf:1} \normlipvarpi{\mathfrak{d} \phi}<+\infty, \end{gather} as well as the \emph{4-points control} \begin{multline} \label{eq:4pc} \abs{\phi_{t,s}(a)-\phi_{t,s}(b)-\phi_{t,s}(c)+\phi_{t,s}(d)} \\ \leq \widecheck{\phi}_{t,s}\Paren[\big]{\abs{a-b}\vee\abs{c-d}} \times \Paren[\big]{\abs{a-c}\vee\abs{b-d}}+(1+\delta_T)\abs{a-b-c+d}, \end{multline} where for any $\alpha\geq 0$, \begin{equation*} \widecheck{\phi}_{t,s}(\alpha\varpi(\omega_{r,s}))\leq \phi^{\circledast}(\alpha)\varpi(\omega_{r,t}), \ \forall (r,s,t)\in\mathbb{T}_+^3, \end{equation*} for some $\phi^{\circledast}(\alpha)\geq 0$ that depends on $\alpha$ and $\omega_{0,T}$. Let us denote by $\mathcal{SA}$ the subset of $\mathcal{A}$ of stable almost flows. \end{definition} \begin{proposition} \label{prop:stability} Let $\phi\in\mathcal{SA}$ be a stable almost flow. Then the corresponding functional $\Phi^\pi$ given by Corollary~\ref{cor:3} satisfies \begin{multline*} \sup_{(t_i,t_j)\in\pi_+^2} \abs{\Phi^\pi_{i,j}(y)-\Phi^\pi_{i,j}(z)} \leq \ell_T\normsup{y-z} \\ \text{ with } \ell_T \mathbin{\vcentcolon=} \delta_T+\frac{2B}{1-\varkappa}\varpi(\omega_{0,T}), \end{multline*} where $B$ is given by \eqref{eq:B} below, for any $y,z\in\mathcal{P}_\pi[\phi,a,K]$. In particular, $\ell_T\xrightarrow[T\to0]{}0$. \end{proposition} \begin{proof} Consider $y,z\in\mathcal{P}_\pi[\phi,a,K]$. Let us set \begin{equation} \label{eq:24} V_{i,j} \mathbin{\vcentcolon=} \Phi_{i,j}^\pi(y)-\widehat{\phi}_{j,i}(y_i) -\Phi_{i,j}^\pi(z)+\widehat{\phi}_{j,i}(z_i). \end{equation} As $\phi_{j,i}=\mathfrak{i}+\widehat{\phi}_{j,i}$, we rewrite \eqref{eq:24} as \begin{equation*} V_{i,j}= \Phi_{i,j}^\pi(y)-\Phi_{i,j}^\pi(z) -\phi_{j,i}(y_i) +\phi_{j,i}(z_i)+y_i-z_i.
\end{equation*} Using \eqref{eq:20} in the proof of Proposition~\ref{prop:2}, \begin{multline*} V_{i,j}+V_{j,k}-V_{i,k} = y_j-z_j -\phi_{j,i}(y_i) +\phi_{j,i}(z_i) -\phi_{k,j}(y_j) +\phi_{k,j}(z_j) +\phi_{k,i}(y_i) -\phi_{k,i}(z_i) \\ = y_j-z_j -\phi_{j,i}(y_i) +\phi_{j,i}(z_i) -\mathfrak{d}\phi_{k,j,i}(y_i) +\mathfrak{d}\phi_{k,j,i}(z_i) \\ -\phi_{k,j}(y_j)+\phi_{k,j}(\phi_{j,i}(y_i)) +\phi_{k,j}(z_j)-\phi_{k,j}(\phi_{j,i}(z_i)). \end{multline*} Since $\phi$ is a stable almost flow, the 4-points control \eqref{eq:4pc} on $\phi$ yields \begin{multline*} \abs{V_{i,j}+V_{j,k}-V_{i,k}} \\ \leq \widecheck{\phi}_{k,j}\Paren[\big]{ \abs{y_{j}-\phi_{j,i}(y_i)}\vee\abs{z_{j}-\phi_{j,i}(z_i)} } \times \Paren[\big]{ \abs{y_j-z_j}\vee \abs{\phi_{j,i}(y_i)-\phi_{j,i}(z_i)} } \\ +(2+\delta_T)\abs{y_j-z_j-\phi_{j,i}(y_i)+\phi_{j,i}(z_i)} +\normlipvarpi{\mathfrak{d} \phi}\abs{y_i-z_i}\varpi(\omega_{i,k}) \\ \leq B\normsup{y-z}\varpi(\omega_{i,k}) \end{multline*} for $(i,j,k)\in\pi_+^3$, where \begin{equation} \label{eq:B} B\mathbin{\vcentcolon=}\normlipvarpi{\mathfrak{d} \phi}+(1+\delta_T)(2+\delta_T)+(1\vee\delta_T) \phi^\circledast(K). \end{equation} Moreover, $V_{i,i}=V_{i,i+1}=0$. From the Davie lemma (Lemma~9 in~\cite{brault2}) with $U_{i,j}\mathbin{\vcentcolon=}\abs{V_{i,j}}$, \begin{equation*} \abs{V_{i,j}}\leq \frac{2B}{1-\varkappa}\normsup{y-z}\varpi(\omega_{i,j}),\ \forall (i,j)\in\pi_+^2. \end{equation*} Hence, \begin{multline*} \abs{\Phi^\pi_{i,j}(y)-\Phi^\pi_{i,j}(z)} \leq \normlip{\widehat{\phi}_{j,i}}\normsup{y-z}+\frac{2B}{1-\varkappa}\normsup{y-z}\varpi(\omega_{i,j}) \\ \leq \Paren*{\delta_T+\frac{2B}{1-\varkappa}\varpi(\omega_{0,T})} \normsup{y-z}. \end{multline*} This proves the result. \end{proof} \begin{corollary} \label{cor:5} Let $\phi\in\mathcal{SA}$ be a stable almost flow. Then for $T$ small enough, Hypothesis~\ref{hyp:stability} is satisfied and thus~\eqref{eq:36} holds true. \end{corollary} \subsection{Continuity results for stable almost flows} \label{sec:continuity} The next proposition is a discrete version of \cite[Proposition~10]{brault2} on the distance between two numerical schemes, one of them associated to a stable almost flow. The proof is close to the one of Proposition~\ref{prop:stability}. This result is the key to proving the generic properties. \begin{notation}[Distance on almost flows] \label{not:4} For $\phi,\psi\in\mathcal{A}[\delta,M]$, we define using~\eqref{eq:def:1}, \eqref{eq:def:5} and \eqref{eq:phitsr}, \begin{equation*} \label{eq:def:4} d_{\mathcal{A}}(\phi,\psi)\mathbin{\vcentcolon=} \max\Set*{d_\infty(\phi,\psi),\normO{\phi-\psi},\normsupvarpi{\mathfrak{d}\phi-\mathfrak{d}\psi}}. \end{equation*} \end{notation} \begin{proposition} \label{prop:stability:schemes} Let $\phi\in\mathcal{SA}\cap\mathcal{A}[\delta,M]$ be a stable almost flow and $\psi\in\mathcal{A}[\delta,M]$ be an almost flow. Consider a partition $\pi=\Set{t_i}_{i=0}^n$ of $\mathbb{T}$. Let $y^\pi$ and $z^\pi$ be the numerical schemes associated to $\phi$ and $\psi$ with $y^\pi_0=a$ and $z^\pi_0=b$. Then, for $T$ small enough, there exist constants $C$ and $C'$ that depend only on $L$ given by \eqref{eq:L}, $\phi^{\circledast}(L)$, $\delta$, $\varkappa$ and $\normlipvarpi{\mathfrak{d}\phi}$ such that \begin{gather*} \abs{y^\pi_j-\phi_{j,i}(y^\pi_i)-z^\pi_j+\psi_{j,i}(z^\pi_i)} \leq Cd_\mathcal{A}(\phi,\psi)\varpi(\omega_{i,j}),\ \forall (i,j)\in\pi_+^2, \\ \normsup{y^\pi-z^\pi}\leq Cd_{\mathcal{A}}(\phi,\psi)+C'\abs{a-b}.
\end{gather*} \end{proposition} \begin{proof} Set \begin{equation*} U_{i,k}\mathbin{\vcentcolon=} y^\pi_k-\phi_{k,i}(y^\pi_i)-z^\pi_k+\psi_{k,i}(z^\pi_i). \end{equation*} For $i=0,\dotsc,n$, $U_{i,i}=U_{i,i+1}=0$ from the definition of $y^\pi$ and $z^\pi$. Set $\alpha_{j,i}\mathbin{\vcentcolon=}\phi_{j,i}-\psi_{j,i}$ and $\alpha_{k,j,i}\mathbin{\vcentcolon=}\mathfrak{d}\phi_{k,j,i}-\mathfrak{d}\psi_{k,j,i}$. Assume that for any $(i,j,k)\in\pi_+^3$, \begin{gather*} \abs{\alpha_{k,j,i}(z^\pi_i)}\leq \epsilon_1\varpi(\omega_{i,k}),\\ \osc(\alpha_{k,j},\abs{z^\pi_j-\phi_{j,i}(z^\pi_i)})\leq \delta_T\epsilon_2(1+L)\varpi(\omega_{i,k}) \text{ and } \abs{\alpha_{j,i}(z^\pi_i)}\leq \epsilon_3. \end{gather*} With \eqref{eq:39} and since $y^\pi\in\mathcal{P}_\pi[\phi,a,L]$ and $z^\pi\in\mathcal{P}_\pi[\psi,b,L]$, the 4-points control~\eqref{eq:4pc} on $\phi$ yields \begin{multline*} \abs{U_{i,k}}\leq \abs{U_{i,j}}+(1+\delta_T)\abs{U_{j,k}} +\abs{\mathfrak{d}\phi_{k,j,i}(z^\pi_i)-\mathfrak{d}\psi_{k,j,i}(z^\pi_i)} +\delta_T\epsilon_2(1+L)\varpi(\omega_{i,k})\\ +\varpi(\omega_{i,k}) \Paren[\Big]{ \phi^\circledast(L)(1+\delta_T)\normsup{y^\pi-z^\pi} +\phi^\circledast(L)\abs{\phi_{j,i}(z^\pi_i)-\psi_{j,i}(z^\pi_i)} +\normlipvarpi{\mathfrak{d} \phi}\normsup{y^\pi-z^\pi} } \\ \leq \abs{U_{i,j}}+(1+\delta_T)\abs{U_{j,k}} +(N+N'\normsup{y^\pi-z^\pi})\varpi(\omega_{i,k}) \end{multline*} where \begin{align*} N&\mathbin{\vcentcolon=}\Paren*{\epsilon_1+\delta_T(1+L)\epsilon_2+\phi^\circledast(L)\epsilon_3} \leq (1+\delta_T(1+L)+\phi^\circledast(L))d_{\mathcal{A}}(\phi,\psi) \\ \text{ and } N'&\mathbin{\vcentcolon=}\Paren*{\phi^\circledast(L)(1+\delta_T)+\normlipvarpi{\mathfrak{d} \phi}}. \end{align*} The Davie Lemma \cite[Lemma 9]{brault2} implies that \begin{gather} \label{eq:41} \abs{U_{i,k}}\leq ND\varpi(\omega_{i,k})+N'D\normsup{y^\pi-z^\pi}\varpi(\omega_{i,k}), \ \forall (i,k)\in\pi_+^2, \\ \notag \text{with } D\mathbin{\vcentcolon=} \frac{2+\delta_T}{1-\varkappa(1+\delta_T)^2-\delta_T}. \end{gather} Thus, \begin{equation*} \normsup{y^\pi-z^\pi}\leq ND\varpi(\omega_{0,T})+N'D\normsup{y^\pi-z^\pi}\varpi(\omega_{0,T}) +(1+\delta_T)\abs{a-b}. \end{equation*} Assuming that $T$ is small enough so that $N'D\varpi(\omega_{0,T})<1$, \begin{equation} \label{eq:40} \normsup{y^\pi-z^\pi}\leq \frac{ND}{1-N'D\varpi(\omega_{0,T})}+\frac{1+\delta_T}{1-N'D\varpi(\omega_{0,T})}\abs{a-b}. \end{equation} Injecting \eqref{eq:40} into \eqref{eq:41} leads to the result. \end{proof} \begin{notation}[Perturbations] Let $\mathcal{E}$ be the family of elements $\epsilon\in\mathcal{O}(\mathrm{V},\mathrm{V})$ such that for some parameter~$\eta\geq 0$, \begin{equation} \label{eq:42} \normO{\epsilon}\leq \eta\text{ and }\generalnorm{\infty\div\varpi}{\epsilon}\leq \eta. \end{equation} We denote by $\normE{\epsilon}$ the minimal value of $\eta$ for which \eqref{eq:42} holds. An element of $\mathcal{E}$ is called a \emph{perturbation} \cite{brault1}. \end{notation} \begin{notation}[Perturbed numerical schemes] Given an almost flow $\phi\in\mathcal{A}[\delta,M]$, a perturbation $\epsilon\in\mathcal{E}$, a starting point $a\in\mathrm{V}$ and a partition $\pi=\Set{t_i}_{i=0}^n$, the \emph{perturbed numerical scheme} associated to $(\phi,\epsilon)$ is $z^\pi_{k+1}=\phi_{k+1,k}(z^\pi_k)+\epsilon_{k+1,k}(z^\pi_k)$ with $z^\pi_0=a$. A perturbed numerical scheme solves $z^\pi_{i,j}=\Phi^\pi_{i,j}(z^\pi)+E_{i,j}$ with~$E_{i,j}=\sum_{k=i}^{j-1} \epsilon_{k+1,k}(z_k^\pi)$.
\end{notation} In the context of numerical analysis, a perturbation $\epsilon$ corresponds for example to \emph{round-off errors}, while the choice of an almost flow corresponds to the \emph{truncation error}. \begin{corollary}[Stability of perturbed numerical schemes] \label{cor:stability} Let $\phi$ be a stable almost flow and $\epsilon\in\mathcal{E}$. Then there exists a constant $K$ depending on $\varpi$, $\omega_{0,T}$, $M$ and $\delta$ such that for any partition $\pi$ of $\mathbb{T}$, \begin{equation} \label{eq:43} \normsup{y^\pi-z^\pi}\leq K\normE{\epsilon}, \end{equation} where $y^\pi$ is the numerical scheme associated to $\phi$ and $z^\pi$ is the perturbed numerical scheme associated to $(\phi,\epsilon)$. \end{corollary} \begin{proof} From \cite{brault1}, $\psi_{t,s}\mathbin{\vcentcolon=}\phi_{t,s}+\epsilon_{t,s}$ is an almost flow, yet not necessarily a stable one, that belongs to $\mathcal{A}[\delta(1+\eta),M+(2+\delta_T)\eta]$. Moreover, \begin{equation*} d_{\mathcal{A}}(\psi,\phi)\leq \normE{\epsilon}\max\Set{(2+\delta_T),\varpi(\omega_{0,T})}. \end{equation*} Inequality~\eqref{eq:43} stems from Proposition~\ref{prop:stability:schemes}. \end{proof} \section{Generic properties of flows} \label{sec:generic} \subsection{The generic property} Related to differential equations, a \emph{generic property} is a property which holds for \textquote{almost all} (in the sense of Baire) vector fields and starting points. A precise description relies on the notion of residual set. The study of generic properties of differential equations started with W.~Orlicz~\cite{orlicz}. Many results are exposed in~\cite{myjak}. \begin{definition}[Residual set] A set $\mathcal{N}$ in a complete metric space $\mathcal{M}$ is \emph{residual} if its complement $\mathcal{M}\setminus\mathcal{N}$ is of Baire first category. \end{definition} \begin{definition}[Generic property] A property is said to be \emph{generic} if it is true on a residual set. \end{definition} We now state our main result, which is an adaptation of the ones in \cite{lasota,deblasi83,myjak} to our setting. It relies on the following lemma. \begin{lemma}[{A. Lasota \& J.A. Yorke, \cite[Lemma~1.2]{myjak}}] \label{lem:generic} Let $\mathcal{M}$ be a complete metric space with a dense subset $\mathcal{N}$. Assume that there exists $\Theta:\mathcal{M}\to\mathbb{R}_+$ such that $\Theta(x)=0$ for any $x\in\mathcal{N}$ and $\Theta$ is continuous at any $x\in\mathcal{N}$. Then \mbox{$\Set{x\in\mathcal{M}\given \Theta(x)=0}$} is residual in $\mathcal{M}$. \end{lemma} \begin{hypothesis} \label{hyp:3} We consider a complete metric space $(\mathcal{Q},d)$ with a dense subspace~$\mathcal{R}$. There exists a continuous mapping $f\mapsto\phi[f]$ from $(\mathcal{Q},d)$ to $(\mathcal{A}[\delta,M],d_\mathcal{A})$ ($d_\mathcal{A}$ is defined in Notation~\ref{not:4}) such that $\phi[f]\in\mathcal{SA}[\delta,M]$ for any $f\in\mathcal{R}$. \end{hypothesis} \begin{theorem}[Generic property of existence, uniqueness and convergence] \label{thm:generic} Under Hypothesis~\ref{hyp:3}, existence, uniqueness of D-solutions and convergence of numerical schemes are generic properties. More precisely, let $\mathcal{N}$ be the subset of pairs $(a,f)$ in $\mathcal{M}=\mathrm{V}\times\mathcal{Q}$ such that the numerical schemes $y^\pi\in\mathcal{C}(\mathbb{T},\mathrm{V})$ associated to $\phi[f]$ with $y^\pi_0=a$ converge uniformly with respect to $\pi$ to some $y\in\mathcal{C}(\mathbb{T},\mathrm{V})$.
Then $\mathcal{N}$ is a residual set in $\mathrm{V}\times\mathcal{Q}$ and $y\in\mathcal{P}[\phi[f],a]$. In addition, the subset of $\mathcal{M}$ on which $\mathcal{P}[\phi[f],a]$ contains only one point contains $\mathrm{V}\times\mathcal{R}$ and is a residual set. \end{theorem} \begin{proof} Let us define for $a\in\mathrm{V}$ and $f\in\mathcal{Q}$, \begin{equation*} \Theta((a,f))\mathbin{\vcentcolon=}\limsup_{\substack{\pi,\sigma\\ \mesh{\pi},\mesh{\sigma}\to 0}} \normsup{y^\pi[f,a]-y^\sigma[f,a]}, \end{equation*} where $y^\pi[f,a]$ is the numerical scheme associated to $\phi[f]$ with $y^\pi[f,a]_0=a$. With Corollary~\ref{cor:5} and Proposition~\ref{prop:convergence}, $\Theta((a,f))=0$ for any $(a,f)\in\mathrm{V}\times\mathcal{R}$. Let $\Set{(a_k,f_k)}_{k\geq 0}$ be a sequence of elements of $\mathcal{M}$ converging to $(a,f)\in\mathrm{V}\times\mathcal{R}$. By Hypothesis~\ref{hyp:3}, $\phi[f_k]\in\mathcal{A}[\delta,M]$ while $\phi[f]\in\mathcal{SA}[\delta,M]$. By the triangle inequality, \begin{multline*} \normsup{y^{\pi}[f_k,a_k]-y^\sigma[f_k,a_k]} \leq \normsup{y^{\pi}[f_k,a_k]-y^\pi[f,a]} +\normsup{y^\pi[f,a]-y^\sigma[f,a]} \\ +\normsup{y^{\sigma}[f_k,a_k]-y^\sigma[f,a]}. \end{multline*} Using Corollary~\ref{cor:5} and Proposition~\ref{prop:stability:schemes}, \begin{multline*} \normsup{y^{\pi}[f_k,a_k]-y^{\sigma}[f_k,a_k]} \\ \leq 2C\Paren*{d_\mathcal{A}(\phi[f_k],\phi[f])+\abs{a-a_k}} +C'\max\Set{\mu_{0,T}(\pi),\mu_{0,T}(\sigma)}, \end{multline*} for a constant $C$ which depends on $f$ but which is uniform in $\pi,\sigma$, and a constant $C'$ which is uniform in $\pi,\sigma$. Thus, for any $\epsilon>0$, one may choose $k_0$ large enough such that, for any $k\geq k_0$, $2C\Paren*{d_\mathcal{A}(\phi[f_k],\phi[f])+\abs{a-a_k}}\leq \epsilon$, as well as some $\eta$ such that when $\max\Set{\mesh{\pi},\mesh{\sigma}}<\eta$, $C'\max\Set{\mu_{0,T}(\pi),\mu_{0,T}(\sigma)}\leq \epsilon$. Therefore, for any $k\geq k_0$, $\Theta((a_k,f_k))\leq 2\epsilon$, so that $\lim_k \Theta((a_k,f_k))=0$. It follows from Lemma~\ref{lem:generic} that $\Set{(a,f)\in\mathrm{V}\times\mathcal{Q}\given \Theta((a,f))=0}$, which contains $\mathrm{V}\times\mathcal{R}$, is residual in $\mathrm{V}\times\mathcal{Q}$. For the uniqueness, we replace $\Theta$ by \begin{equation*} \Theta((a,f))\mathbin{\vcentcolon=} \sup_{y,z\in \mathcal{P}[\phi[f],a]}\normsup{y-z}. \end{equation*} Again by Proposition~\ref{prop:convergence}, $\Theta((a,f))=0$ for $(a,f)\in\mathrm{V}\times\mathcal{R}$. The proof is similar to the above one. \end{proof} \subsection{Application to RDE} We consider the case of the RDE $y_t=a+\int_0^t f(y_s)\,\mathrm{d}\mathbf{x}_s$, the result being similar for YDE. The driving rough path lies above a path living in a Banach space $\mathrm{U}$, while the solution $y$ lives in another Banach space $\mathrm{V}$. Let us fix $2\leq p<3$. We consider a $p$-rough path $\mathbf{x}\mathbin{\vcentcolon=}(1,\mathbf{x}^{(1)},\mathbf{x}^{(2)})$ with respect to the control $\omega$ with values in $\mathbb{R}\oplus \mathrm{U}\oplus \mathrm{U}^{\otimes 2}$ (see \cite[Definition 3.1.3]{lyons98a}). This means that $\mathbf{x}$ satisfies $\mathbf{x}_{r,s}\otimes\mathbf{x}_{s,t}=\mathbf{x}_{r,t}$ for any $r\leq s\leq t\leq T$ and \begin{equation*} \normp{\mathbf{x}}\mathbin{\vcentcolon=} \sup_{\substack{(s,t)\in\mathbb{T}_+^2\\s\neq t}} \Paren*{ \frac{\abs{\mathbf{x}^{(1)}_{s,t}}}{\omega_{s,t}^{1/p}} +\frac{\abs{\mathbf{x}^{(2)}_{s,t}}}{\omega_{s,t}^{2/p}} }<+\infty.
\end{equation*} \begin{definition}[Lipschitz vector fields] For any $\gamma>0$, a vector field $f:\mathrm{V}\to L(\mathrm{U},\mathrm{V})$ is said to be a $\Lip(\gamma)$-vector field (which we write $f\in\Lip(\gamma)$) if it is of class $\mathcal{C}_{\mathrm{b}}^{\floor{\gamma}}$ with \begin{equation*} \normhold{\gamma}{f}\mathbin{\vcentcolon=} \sum_{k=0,\dotsc,\floor{\gamma}} \normsup{\mathrm{D}^k f}+ \normhold{\gamma-\floor{\gamma}}{\mathrm{D}^k f}<+\infty, \end{equation*} where $\normhold{\lambda}{f}\mathbin{\vcentcolon=} \sup_{x\not= y}\abs{f(x)-f(y)}/\abs{x-y}^\lambda$ is the $\lambda$-Hölder norm for $0<\lambda\leq 1$. \end{definition} Fix $R\geq 0$ and $\gamma\geq p-1$. We define \begin{equation*} \mathcal{U}(R,\gamma)\mathbin{\vcentcolon=} \Set*{ f\in\Lip(\gamma)\given \normhold{\gamma}{f}\leq R}. \end{equation*} We use $\normhold{\gamma}{\cdot}$ as a norm on $\mathcal{U}(R,\gamma)$. For $f\in\mathcal{U}(R,\gamma)$, the \emph{Davie approximation} is the family \begin{equation} \label{eq:davie} \phi_{t,s}[f,\mathbf{x}](a)=a+f(a)\mathbf{x}^{(1)}_{s,t}+f^{(2)}(a)\mathbf{x}^{(2)}_{s,t} \text{ for }a\in\mathrm{V}\text{ and }(s,t)\in\mathbb{T}_+^2 \end{equation} with $f^{(2)}(a)=\mathrm{D} f(a)\cdot f(a)$. When $1+\gamma>p$, $\phi[f,\mathbf{x}]$ is an almost flow \cite{brault1}. When $\gamma>p$, it is a stable almost flow \cite{brault2}. A regularization argument implies that, for $1\leq \gamma\leq 3$, $\mathcal{U}(R,3)$ is dense in $\mathcal{U}(R,\gamma)$. \begin{lemma} Assume $\gamma>2$. Let $\Set{f_n}_n$ be a sequence in $\mathcal{U}(R,\gamma)$ which converges to $f\in\mathcal{U}(R,\gamma)$. Then $d_{\mathcal{A}}(\phi[f_n,\mathbf{x}],\phi[f,\mathbf{x}])$ converges to $0$. \end{lemma} \begin{proof} A classical computation shows that when $\gamma>1$, \begin{multline} \label{eq:21} \mathfrak{d}\phi_{t,s,r}[f](a)= \Paren*{ f(a+f(a)\mathbf{x}^{(1)}_{r,s}+f^{(2)}(a)\mathbf{x}^{(2)}_{r,s})-f(a+f(a)\mathbf{x}^{(1)}_{r,s})}\mathbf{x}^{(1)}_{s,t} \\ +\Paren*{f^{(2)}(a+f(a)\mathbf{x}^{(1)}_{r,s}+f^{(2)}(a)\mathbf{x}^{(2)}_{r,s})-f^{(2)}(a)}\mathbf{x}^{(2)}_{s,t} \\ +\int_0^1 \Paren*{\mathrm{D} f(a+\tau f(a)\mathbf{x}^{(1)}_{r,s})-\mathrm{D} f(a)}f(a)\mathbf{x}^{(1)}_{r,s}\,\mathrm{d}\tau\otimes \mathbf{x}^{(1)}_{s,t}. \end{multline} Thus, for a constant $M$ that depends only on $\normhold{\gamma}{f}$ and $\normp{\mathbf{x}}$, \begin{equation} \label{eq:22} \abs{\mathfrak{d}\phi_{t,s,r}[f](a)} \leq M \varpi(\omega_{r,t}) \end{equation} with \begin{equation} \label{eq:23} \varpi(x)=x^{(2+\gamma)/p} \text{ and } M\leq \normhold{\gamma}{f}^2\max\Set{\normp{\mathbf{x}}^{1+\gamma},\normp{\mathbf{x}}^{2+\gamma}}. \end{equation} For $\gamma>2$, we easily deduce from \eqref{eq:davie} and \eqref{eq:21} that $\phi[f_n,\mathbf{x}]$ converges to $\phi[f,\mathbf{x}]$ with respect to $d_\mathcal{A}$, up to changing $\gamma$ into $\gamma'<\gamma$. The result follows from straightforward computations. \end{proof} Combining the above results with Theorem~\ref{thm:generic} leads to the following result. The second point is obtained by applying a theorem of Kuratowski and Ulam \cite{kuratowski} (see also Theorem~4.2 in \cite{deblasi83}). \begin{corollary} Existence, uniqueness and convergence of the numerical scheme related to the RDE $y=a+\int_0^\cdot f(y_s)\,\mathrm{d} \mathbf{x}_s$ are generic with respect to $(a,f)\in\mathrm{V}\times\mathcal{U}(R,\gamma)$ when $\gamma\geq 2$.
In addition, if $\mathrm{V}$ is separable, then there exists a residual set $\mathcal{R}$ in $\mathcal{U}(R,\gamma)$ such that for any $f\in\mathcal{R}$, there exists a residual set $\mathcal{V}[f]$ such that existence, uniqueness and convergence of the numerical scheme hold for $a\in\mathcal{V}[f]$. \end{corollary} \section{Flows of diffeomorphisms through Brownian flows} \label{sec:br-flow} Let $(A,\Sigma,\mathbb{P})$ be a probability space. In the following, we denote by $\alpha$ some element of $A$, without necessarily specifying it. Moreover, for an integer $k$, $\mathrm{L}^k(A)$ denotes the space of random variables $K$ such that the quantity $\normf{\mathrm{L}^k}{K}=\mathbb{E}(\abs{K}^k)^{1/k}$ is finite. In this section, the state space of the driving Brownian motion is $\mathrm{U}=\mathbb{R}^d$, while the state space of the solutions is $\mathrm{V}=\mathbb{R}^m$ for some $d,m\geq 1$. \begin{hypothesis} \label{hyp:7} Let $\sigma:\mathbb{R}^m\to L(\mathbb{R}^d,\mathbb{R}^m)$ be a function of class $\mathcal{C}_{\mathrm{b}}^{1+\gamma}$ for~$\gamma\in (0,1]$. \end{hypothesis} Let $B$ be a $d$-dimensional Brownian motion on $(A,\Sigma,\mathbb{P})$. We consider the family of Itô SDE \begin{equation} \label{eq:ito:1} X_t(a)=a+\int_0^t \sigma(X_s(a))\,\mathrm{d} B_s\text{ for } t\geq 0,\ a \in \mathbb{R}^m. \end{equation} Under Hypothesis~\ref{hyp:7} (even with $\gamma=0$), there exists a unique strong solution to~\eqref{eq:ito:1}. \begin{notation} \label{not:ebm} An \textit{enhanced} (Itô) Brownian motion \cite[Sect.~3.2]{friz14a} is a rough path~$\mathbf{B}$ of order $2$ decomposed as $\mathbf{B}_{r,t}=1+B_{r,t}+\mathbf{B}^{(2)}_{r,t}$ with \begin{equation*} \mathbf{B}^{(2)}_{r,t}=\int_r^t B_{r,s}\otimes\mathrm{d} B_s,\ \forall (r,t)\in\mathbb{T}_+^2. \end{equation*} We assume that $(A,\Sigma,\mathbb{P})$ carries $\mathbf{B}$. \end{notation} The \emph{Davie approximation} is naturally defined as \begin{equation} \label{eq:ebm:davie} \phi_{t,s}[\sigma,\alpha](a)\mathbin{\vcentcolon=} a+\sigma(a)B_{s,t}(\alpha) +\mathrm{D}\sigma(a)\cdot\sigma(a)\mathbf{B}^{(2)}_{s,t}(\alpha) \text{ for } \alpha\in A. \end{equation} In \cite{brault1}, we saw that $\phi[\sigma,\alpha]$ is an almost flow when $\sigma\in\mathcal{C}_{\mathrm{b}}^{1+\gamma}$. When $\sigma\in\mathcal{C}_{\mathrm{b}}^{2+\gamma}$, $\phi$ is a stable almost flow. This latter case grants uniqueness of D-solutions as well as the existence of a Lipschitz flow. Here, we consider a deterministic function $\sigma\in\mathcal{C}_{\mathrm{b}}^{1+\gamma}\setminus\mathcal{C}_{\mathrm{b}}^2$. Actually, Theorem~4.8 in \cite{davie05a} shows that for almost every $\alpha\in A$, there exists a vector field~$\sigma[\alpha]$ such that infinitely many D-solutions exist. For such a choice, $\phi[\sigma(\alpha),\alpha]$ cannot be a stable almost flow. Therefore, we cannot expect that $\phi[\sigma,\alpha]$ is a stable almost flow for any pair $(\sigma,\alpha)$. Our main result states the existence of a Lipschitz flow, but does not prove that $\phi[\sigma,\alpha]$ is a stable almost flow. A series of well-known results of H.~Kunita states that $a\mapsto X_t(a,\alpha)$ defines a flow of diffeomorphisms for almost every $\alpha\in A$ (see below). Our main theorem states that under Hypothesis~\ref{hyp:7}, there exists a Lipschitz flow associated to $\phi$ even when~$\phi$ is not a stable almost flow. Here, we consider only Itô integrals, as similar results for the Stratonovich integral require more regularity.
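To make \eqref{eq:ebm:davie} concrete, the following minimal sketch (in Python; the coefficient $\sigma$ below is an illustrative choice and not part of the theory) iterates the numerical scheme of the Davie approximation in dimension $d=m=1$, where the Itô enhancement reduces to $\mathbf{B}^{(2)}_{s,t}=\frac{1}{2}\Paren[\big]{(B_t-B_s)^2-(t-s)}$. This is nothing but the Milstein scheme discussed below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sigma(a):
    return 1.0 + 0.5 * np.sin(a)   # illustrative coefficient, C^{1+gamma}_b

def dsigma(a):
    return 0.5 * np.cos(a)         # its derivative

def davie_scheme(a0, T, n):
    """One run of X_{k+1} = phi_{t_{k+1},t_k}[sigma, alpha](X_k), i.e.
    X_{k+1} = X_k + sigma(X_k) B_{k,k+1} + Dsigma(X_k) sigma(X_k) BB2_{k,k+1},
    with BB2_{s,t} = ((B_t - B_s)^2 - (t - s)) / 2 (Ito iterated integral)."""
    h = T / n
    x = a0
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(h))
        bb2 = 0.5 * (dB ** 2 - h)
        x = x + sigma(x) * dB + dsigma(x) * sigma(x) * bb2
    return x

print(davie_scheme(a0=0.0, T=1.0, n=2 ** 12))
\end{verbatim}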
Our main result below is closely connected to Proposition 4.3 in \cite{davie05a}. We denote by $\mathcal{C}_{\mathrm{loc}}^{1+\beta}$ the space of locally $(1+\beta)$-Hölder continuous functions. \begin{theorem} \label{thm:diffeo} Assume Hypothesis~\ref{hyp:7}, and let $X$ be the unique solution to the SDE~\eqref{eq:ito:1}. Set $\psi_{t,s}(a)\mathbin{\vcentcolon=} X_t\circ X_{s}^{-1}(a)$ for any $a\in\mathrm{V}=\mathbb{R}^m$ and any $(s,t)\in\mathbb{T}_+^2$. Then $\psi$ is almost surely a flow of $\mathcal{C}_{\mathrm{loc}}^{1+\beta}$-diffeomorphisms, $0< \beta<\gamma$, in the same galaxy as the almost flow $\phi[\sigma,\cdot]$ defined by \eqref{eq:ebm:davie}, and $X(a)$ is the unique D-solution in $\mathcal{P}[\phi[\sigma,\cdot],a]$. \end{theorem} As the flow associated to the RDE is Lipschitz, some convergence results in \cite{brault2} provide us with a rate of convergence of discrete approximations, which is weaker than the one shown in Section~\ref{sec:D-sol} when stable almost flows are used. Here, the discrete approximation constructed from the Davie almost flow is the now classical \emph{Milstein scheme}~\cite{kloeden,kloeden3}. The pathwise rate of convergence of Itô-Taylor approximations, including the Milstein schemes, has been studied in \cite{talay,kloeden2,kloeden3,jentzen2}. For $\sigma\in\mathcal{C}^3$, the almost sure rate of convergence is $1-\epsilon$ for any $\epsilon>0$. Here, we consider $\sigma\in\mathcal{C}^{1+\gamma}$ with $\gamma\leq 1$. When $\sigma\in\mathcal{C}_{\mathrm{b}}^{2+\gamma}$, the Davie approximation is a stable almost flow and we obtain a rate of convergence of $(2+\gamma)/p-1$ for any $p>2$, hence of order $\gamma/2-\epsilon$. For $\sigma\in\mathcal{C}_{\mathrm{b}}^3$, we obtain a rate of convergence not as good as the one of P.~Kloeden and A.~Neuenkirch~\cite{kloeden2}. Yet the main point of this section is to study the rate of convergence for an almost flow which is not necessarily stable, under weak regularity conditions. \begin{corollary} \label{cor:milstein} Assume Hypothesis~\ref{hyp:7}. Then the numerical scheme $X^\pi$ associated to the Davie approximation $\phi$ given by \eqref{eq:ebm:davie} with the initial condition $a\in\mathbb{R}^m$ converges almost surely to $X$, the unique solution to the SDE \eqref{eq:ito:1}, with the rate of convergence $\Theta(\pi)\mathbin{\vcentcolon=} (\mesh\pi)^{\frac{\gamma}{2}-\epsilon}$ for all $\epsilon\in(0,\frac{\gamma}{2})$. \end{corollary} \begin{proof}[Proof of Corollary~\ref{cor:milstein}] From Theorem~\ref{thm:diffeo}, there exists a flow $\psi$ of regularity $\mathcal{C}_{\mathrm{loc}}^{1+\beta}$ ($0<\beta<\gamma$) in the galaxy of $\phi$ with $\varpi(x)=x^{(2+\gamma)\left(\frac{1}{2}-\epsilon'\right)}$ for all $\epsilon'\in(0,\frac{1}{2})$. Thus, according to \cite[Theorem 4.3]{brault1}, $X^\pi$ converges almost surely to $X$ with a rate of convergence $\Theta(\pi)=(\mesh\pi)^{\frac{\gamma}{2}-\epsilon}$ for all $\epsilon\in(0,\frac{\gamma}{2})$ and any initial condition $a\in\mathbb{R}^m$. \end{proof} We will give two proofs of Theorem~\ref{thm:diffeo}, one being based on a regularization argument and the second one on the Kolmogorov-Chentsov continuity theorem. \begin{notation} Let $\Omega_N$ be the ball of $\mathbb{R}^m$ of radius $N>0$ centered at $0$.
We denote $\mathcal{G}_{T,N}\mathbin{\vcentcolon=} \mathcal{C}^0([0,T]\times\Omega_N,\mathbb{R}^m)$ equipped with the norm \begin{equation} \label{def:norm:cg} \normf{\mathcal{G}_{T,N}}{x}\mathbin{\vcentcolon=} \sup_{t\in [0,T]}\sup_{a\in\Omega_N}\abs{x_t(a)},\quad \forall x\in \mathcal{G}_{T,N}. \end{equation} \end{notation} \begin{theorem}[{\cite[Theorem 3.1 p.218]{kunita_saint_flour}}] \label{the:kunita_saint_flour} If $\sigma$ is of class $\mathcal{C}^{k+\gamma}_b$ with $\gamma\in (0,1)$ and $k\geq 1$, then the solution map $(t,a)\mapsto X_{t}(a)$ is continuous a.s. and, for all $t\in [0,T]$, $X_t(\cdot)$ is a $\mathcal{C}^{k+\beta}$-diffeomorphism a.s. with $0\leq \beta<\gamma$. Moreover, for all $t\geq 0$, $a\in \mathbb{R}^m$, \begin{align} \label{eq:DX-kunita} \mathrm{D} X_t(a)=\Id+\int_0^t\mathrm{D}\sigma(X_s(a))\mathrm{D} X_s(a)\,\mathrm{d} {B_s}. \end{align} \end{theorem} \begin{proof}[First proof of Theorem~\ref{thm:diffeo}] Let $\Set{\sigma_n}_n$ be a sequence in $\mathcal{C}_{\mathrm{b}}^{3}(\mathbb{R}^m)$ such that $\normf{\mathcal{C}^{1+\gamma}_b}{\sigma_n-\sigma}\to 0$ as $n\to\infty$ and $\normhold{\gamma}{\sigma_n}\leq \mu\mathbin{\vcentcolon=} \normhold{\gamma}{\sigma}$. Denote by $X^n$ the solution map to $X^n_t(a)=a+\int_0^t \sigma_n(X^n_s(a))\,\mathrm{d} B_s$. Since $\sigma_n\in\mathcal{C}_{\mathrm{b}}^3$, $X^n(a)$ is also a solution to the RDE $X_t^n(a)=a+\int_0^t\sigma_n(X_s^n(a))\,\mathrm{d} {\mathbf{B}_s}$ with $\varpi(x)\mathbin{\vcentcolon=} x^{(2+\gamma)/p}$ (see among others \cite{coutin-lejay3,lejay_victoir} for the Itô case and \cite{ledoux,bass,friz} for the Stratonovich case, to which an Itô-Stratonovich correction term may be applied). As solutions to RDE are also D-solutions, $X^n(a)$ is associated to $\phi^n_{t,s}(a)\mathbin{\vcentcolon=} a+\sigma_n(a)B_{s,t} +\mathrm{D}\sigma_n(a)\cdot\sigma_n(a)\mathbf{B}^{(2)}_{s,t}$. We know from \cite[Theorems 2.3 and 2.5]{kunita86a} that $\Set{X^n}_n$ converges in probability to~$X$ with respect to the topology generated by $\normf{\mathcal{G}_{T,N}}{\cdot}$ for any $N>0$. Besides, set $M^n_t(a)\mathbin{\vcentcolon=} X^n_t(a)-a=\int_0^t \sigma_n(X^n_s(a))\,\mathrm{d} B_s$. Recall that $\Omega_N\mathbin{\vcentcolon=} \Set{\abs{a}\leq N}$. A direct application of the Burkholder-Davis-Gundy inequality on $M^n_t(a)$ shows that for any $p\geq2$, there exists a constant $C$ depending only on $\mu$, $p$ and $T$ such that \begin{equation*} \mathbb{E}\Paren*{\sup_{t\in[0,T]} \normf{\mathrm{L}^p(\Omega_N)}{M^n_t(\cdot)}^p}\leq C,\ \forall n. \end{equation*} Similarly, with the Grönwall lemma and the Burkholder-Davis-Gundy inequality, one gets a constant $C'$ depending only on $\mu$, $p$ and $T$ such that \begin{equation*} \mathbb{E}\Paren*{\sup_{t\in[0,T]} \normf{\mathrm{L}^p(\Omega_N)}{\mathrm{D} M^n_t(\cdot)}^p}\leq C',\ \forall n. \end{equation*} With the Sobolev embedding theorem \cite[Theorem IX.16]{brezis}, for any integer $N$, when $p>m$ ($m$ being the dimension of the space), there exists a constant $K$ depending only on $N$ and $p$ such that \begin{equation*} \sup_{\abs{a}\leq N}\abs{M^n_t(a)}\leq K\Paren*{\normf{\mathrm{L}^p(\Omega_N)}{M^n_t(\cdot)} +\normf{\mathrm{L}^p(\Omega_N)}{\mathrm{D} M^n_t(\cdot)}}. \end{equation*} Hence, for any $p>m$ and any $N>0$, \begin{equation*} \sup_{n\in\mathbb{N}} \mathbb{E}\Paren*{\normf{\mathcal{G}_{T,N}}{M^n}^p}<+\infty. \end{equation*} This proves that $\Set{\normf{\mathcal{G}_{T,N}}{M^n}}_{n}$ is uniformly integrable.
Therefore, $\Set{X^n}_{n\geq0}$ converges to $X$ also in~$\mathrm{L}^q$, $q<p$, with respect to $\normf{\mathcal{G}_{T,N}}{\cdot}$. Hence, there exists a subsequence $\Set{n_k}_{k}$ such that $\Set{X^{n_k}}_{k\geq 0}$ converges almost surely to $X$ with respect to~$\normf{\mathcal{G}_{T,N}}{\cdot}$. Thanks to \eqref{eq:22}-\eqref{eq:23}, each $\phi^n$ belongs to $\mathcal{A}[\delta,M]$ for a random function $\delta$ and a random constant $M$ which depend only on $\normp{\mathbf{B}}$ and $\normhold{\gamma}{\sigma}$. With Lemma~\ref{lem:1}, $X^{n_k}(a)\in\mathcal{P}[\phi^{n_k},a,L]$ for a random constant $L$ which is uniform in $k\geq0$ and in $a$. Lemma~\ref{lem:convergence2} implies that $X(a)\in\mathcal{P}[\phi,a,L]$. Therefore, $X(a)$ is a D-solution associated to $\phi$. Since $X$ is a flow of $\mathcal{C}^{1+\beta}$-diffeomorphisms for any $0\leq \beta<\gamma$, we set $\psi_{t,s}(a)\mathbin{\vcentcolon=} X_t\circ X_s^{-1}(a)$ which defines a flow of $\mathcal{C}^{1+\beta}$-diffeomorphisms. Since $X(a)\in\mathcal{P}[\phi,a,L]$ where $L$ does not depend on $a$, \begin{equation*} \sup_{a\in\mathbb{R}^m}\abs{X_t(a)-\phi_{t,s}[\sigma,\cdot](X_s(a))}\leq L\varpi(\omega_{s,t}), \ \forall (s,t)\in\mathbb{T}_+^2. \end{equation*} Therefore, replacing $a$ by $X_s^{-1}(a)$, \begin{equation*} \sup_{a\in\mathbb{R}^m}\abs{\psi_{t,s}(a)-\phi_{t,s}[\sigma,\cdot](a)}\leq L\varpi(\omega_{s,t}), \ \forall (s,t)\in\mathbb{T}_+^2. \end{equation*} This proves that $\phi$ and $\psi$ belong to the same galaxy. Thanks to \eqref{eq:DX-kunita}, we see that $a\mapsto\psi_{t,s}(a)-a$ is locally Lipschitz for each $(s,t)$ with a uniform control which decreases to $0$ as $T$ decreases to $0$. Hence $\psi$ is locally a flow of class $\mathcal{O}$ (see Example~\ref{ex:1}). Proposition~\ref{prop:uniqueness} in the Appendix shows that $X(\alpha)$ is the unique D-solution associated to $\phi[\sigma,\alpha]$ for almost all $\alpha\in A$. \end{proof} In the following, we propose another proof of Theorem~\ref{thm:diffeo} which is essentially based on the classical proof of the Kolmogorov-Chentsov criterion \cite[Theorem 1.8]{revuz} and its adaptation to rough paths \cite[Theorem 3.1]{friz14a}. Let us denote, for any $a\in\mathbb{R}^m$ and any $(s,t)\in\mathbb{T}_+^2$, \begin{equation} \label{eq:Phi} \Psi_{s,t}(a) \mathbin{\vcentcolon=} X_{s,t}(a)-\sigma(X_s(a))B_{s,t}-\mathrm{D}\sigma(X_s(a))\cdot \sigma(X_s(a))\mathbf{B}^{(2)}_{s,t}, \end{equation} where $X(a)$ is the Itô solution defined by \eqref{eq:ito:1}. \begin{lemma}[{\cite[Lemma 4.1]{davie05a}}] \label{lem:phi-moments} If $\sigma\in \mathcal{C}_b^{1+\gamma}$ with $\gamma\in (0,1)$, then for any $k>0$, \begin{align*} \mathbb{E}\left(\abs{\Psi_{s,t}(a)}^k\right)\leq C|t-s|^{k\frac{(2+\gamma)}{2}},\quad \forall a\in\mathbb{R}^m, \forall (s,t)\in\mathbb{T}_+^2, \end{align*} where $C$ is a constant that depends only on $k$, $\normf{\mathcal{C}_b^{1+\gamma}}{\sigma}$ and $T$, and $\Psi$ is defined \mbox{by \eqref{eq:Phi}}. \end{lemma} \begin{proposition} \label{prop:ito-davie-sol} We assume $\sigma\in\mathcal{C}^{1+\gamma}_b$ with $\gamma\in (0,1)$. Let $k$ be the smallest integer such that $k>\frac{6}{\gamma}$. Then, there exists a positive random constant $K\in \mathrm{L}^k(A)$ such that for all $(s,t)\in\mathbb{T}_+^2$ and all $a\in\mathbb{R}^m$, \begin{align} \abs{\Psi_{s,t}(a)}\leq K|t-s|^{\theta}, \end{align} with $\theta\mathbin{\vcentcolon=} 1-\frac{3}{k}+\frac{\gamma}{2}>1$.
It follows that the Itô solution $X(a)$ defined by \eqref{eq:ito:1} is a D-solution associated to the Davie almost flow defined by \eqref{eq:ebm:davie}. \end{proposition} \begin{proof} We fix the integer $k$ and the real $\theta$ as in the statement of the proposition. It is well known that there exists a constant $C_k$ depending only on $k$ such that $\mathbb{E}(\abs{B_{s,t}}^k)\leq C_k |t-s|^{k/2}$ and $\mathbb{E}(\abs{\mathbf{B}^{(2)}_{s,t}}^k)\leq C_k |t-s|^k$ for any $(s,t)\in\mathbb{T}_+^2$. For an integer $n\geq0$, we set $D_n\mathbin{\vcentcolon=} \left\{\frac{jT}{2^n},\ j=0,\dots,2^n\right\}$, the dyadic partition of $[0,T]$. We define \begin{align*} K_n\mathbin{\vcentcolon=} \sup_{t\in D_n}\abs{\Psi_{t,t+T2^{-n}}(a)},\ L_n\mathbin{\vcentcolon=} \sup_{t\in D_n}\abs{B_{t,t+T2^{-n}}} \text{ and }M_n\mathbin{\vcentcolon=} \sup_{t\in D_n}\abs{\mathbf{B}^{(2)}_{t,t+T2^{-n}}}. \end{align*} It follows from Lemma~\ref{lem:phi-moments} that \begin{align} \label{eq:Knk} \mathbb{E}(K_n^k)\leq \sum_{t\in D_n}\mathbb{E}(\abs{\Psi_{t,t+T2^{-n}}(a)}^k) \leq C2^n2^{-nk\frac{(2+\gamma)}{2}}. \end{align} In the same way, \begin{align} \label{eq:LM} \mathbb{E}(L_n^k)\leq C_k2^{-n(k/2-1)}\quad \text{and}\quad \mathbb{E}(M_n^k)\leq C_k2^{-n(k-1)}. \end{align} For $s<t$ in $\bigcup_{n\in\mathbb{N}} D_n$, let $m$ be an integer such that $2^{-(m+1)}<\abs{t-s}\leq 2^{-m}$. There is an integer $N$ and a partition $s=\tau_0<\tau_1<\dots<\tau_{N-1}<\tau_N=t$ of $[s,t]$ with the following properties: \begin{itemize} \item for each $i=0,\dots,N-1$, there exists $n\geq m+1$ such that $\tau_i$ and $\tau_{i+1}$ are two consecutive points of $D_n$; \item for each $n$, at most two of the intervals $[\tau_i,\tau_{i+1}]$ have the same length. \end{itemize} Setting $\Psi_{u,v,w}(a)\mathbin{\vcentcolon=} \Psi_{u,w}(a)-(\Psi_{u,v}(a)+\Psi_{v,w}(a))$ for $0\leq u\leq v\leq w\leq T$, we have \begin{align} \label{eq:phitwoterm} \Psi_{s,t}(a)=\sum_{i=0}^{N-1}\Psi_{\tau_i,\tau_{i+1}}(a)+\sum_{i=0}^{N-1}\Psi_{\tau_i,\tau_{i+1},t}(a). \end{align} We start by bounding the first sum on the right-hand side of the above equation: \begin{align} \label{eq:K_theta} \frac{\left|\sum_{i=0}^{N-1}\Psi_{\tau_i,\tau_{i+1}}(a)\right|}{|t-s|^\theta}\leq 2^{(m+1)\theta}\sum_{i=0}^{N-1}\abs{\Psi_{\tau_i,\tau_{i+1}}(a)} \leq 2\sum_{n\geq m+1}K_n2^{n\theta}\leq K_\theta, \end{align} where $K_\theta\mathbin{\vcentcolon=} 2\sum_{n\geq 0}K_n2^{n\theta}$ is a random constant in $L^k(A)$. Indeed, \begin{align*} \normf{L^k}{K_\theta}\leq 2\sum_{n\geq 0}\normf{L^k}{K_n}2^{n\theta}\leq 2C\sum_{n\geq 0}2^{-n(\frac{2+\gamma}{2}-\frac{1}{k}-\theta)}. \end{align*} The above series is convergent because $\frac{2+\gamma}{2}-\frac{1}{k}-\theta=\frac{2}{k}>0$. To bound the second sum on the right-hand side, we note that \begin{align} \label{eq:Phi-computed} \Psi_{u,v,w}(a)=S^{(1)}_{u,v} B_{v,w}+S^{(2)}_{u,v}B_{v,w}+S^{(3)}_{u,v}\mathbf{B}^{(2)}_{v,w}, \end{align} with \begin{align*} S^{(1)}_{u,v}&\mathbin{\vcentcolon=} \sigma(X_v)-\sigma(X_u)-\mathrm{D}\sigma(X_u)(X_v-X_u),\\ S^{(2)}_{u,v}&\mathbin{\vcentcolon=} \mathrm{D}\sigma(X_u)\int_u^v(\sigma(X_z)-\sigma(X_u))\,\mathrm{d} B_z,\\ S^{(3)}_{u,v}&\mathbin{\vcentcolon=} \mathrm{D}\sigma(X_v)\sigma(X_v)-\mathrm{D}\sigma(X_u)\sigma(X_u). \end{align*} We bound the moments of these three terms.
For any $k>0$, \begin{align} \label{eq:s1} \mathbb{E}(\abs{S^{(1)}_{u,v}}^k)\leq \normf{\gamma}{\mathrm{D}\sigma}^k\mathbb{E}(\abs{X_v-X_u}^{k(1+\gamma)}) \leq \normf{\gamma}{\mathrm{D}\sigma}^kC_1\normsup{\sigma}^{k(1+\gamma)}\abs{v-u}^{k\frac{1+\gamma}{2}}, \end{align} where $C_1\geq 0$ is a constant that depends only on $k$ and $\gamma$. Similarly, \begin{align} \label{eq:s2} \mathbb{E}(\abs{S^{(2)}_{u,v}}^k)&\leq \normsup{\mathrm{D}\sigma}^{2k}C_2^2\normsup{\sigma}^k|v-u|^{k}, \\ \label{eq:s3} \text{and } \mathbb{E}(\abs{S^{(3)}_{u,v}}^k) &\leq \left[\normsup{\mathrm{D}\sigma}^2\normsup{\sigma}^k+\normf{\gamma}{\mathrm{D} \sigma}^k\normsup{\sigma}^{k(1+\gamma)}T^{k\frac{(1-\gamma)}{2}}\right]C_3\abs{v-u}^{k\frac{\gamma}{2}}, \end{align} where $C_2$, $C_3\geq 0$ are constants depending only on $k$ and $\gamma$. It follows from \eqref{eq:Phi-computed} that \begin{align} \label{eq:Phi2} \left|\sum_{i=0}^{N-1}\Psi_{\tau_i,\tau_{i+1},t}(a)\right|&\leq\sup_{i}\abs{B_{\tau_{i+1},t}}\sum_{i=0}^{N-1}\left(\abs{S^{(1)}_{\tau_i,\tau_{i+1}}}+\abs{S^{(2)}_{\tau_i,\tau_{i+1}}}\right)+ \sup_i\abs{\mathbf{B}^{(2)}_{\tau_i,t}}\sum_{i=0}^{N-1}\abs{S^{(3)}_{\tau_i,\tau_{i+1}}}. \end{align} Yet, we have \begin{align} \label{eq:supB} \sup_i\abs{B_{\tau_i,t}}\leq \sum_{i=0}^{N-1}\abs{B_{\tau_i,\tau_{i+1}}}\leq 2\sum_{n\geq m+1}L_n. \end{align} Using Chen's relation, \begin{align} \label{eq:supBB} \sup_i\abs{\mathbf{B}^{(2)}_{\tau_i,t}}&\leq \sum_{i=0}^{N-1}\abs{\mathbf{B}^{(2)}_{\tau_i,\tau_{i+1}}}+\sup_{i}\abs{B_{\tau_{i+1},t}}\sum_{i=0}^{N-1}\abs{B_{\tau_i,\tau_{i+1}}}\nonumber\\ &\leq 2\sum_{n\geq m+1}M_n+\left(2\sum_{n\geq m+1}L_n\right)^2. \end{align} Thus, combining \eqref{eq:Phi2}, \eqref{eq:supB} and \eqref{eq:supBB}, \begin{multline*} \left|\sum_{i=0}^{N-1}\Psi_{\tau_i,\tau_{i+1},t}(a)\right| \leq 4\sum_{n\geq m+1}L_n\sum_{n\geq m+1}\left(S^{(1)}_n+S^{(2)}_n\right)\\ +\left(2\sum_{n\geq m+1}M_n +\left(2\sum_{n\geq m+1}L_n\right)^2\right)\left(\sum_{n\geq m+1}S^{(3)}_n\right), \end{multline*} where $S^{(\ell)}_n\mathbin{\vcentcolon=} \sup_{t\in D_n}\abs{S^{(\ell)}_{t,t+T2^{-n}}}$ for $\ell\in\{1,2,3\}$. We show with \eqref{eq:s1}, \eqref{eq:s2} and \eqref{eq:s3}, in the same way as for $K_n$, that for $\ell\in\Set{1,2}$, \begin{equation} \label{eq:sl} \mathbb{E}\left(\left[S^{(\ell)}_n\right]^k\right)\leq C_42^{-n\left(\frac{k(1+\gamma)}{2}-1\right)}, \text{~and~} \mathbb{E}\left(\left[S^{(3)}_n\right]^k\right) \leq C_52^{-n\left(\frac{k\gamma}{2}-1\right)}, \end{equation} where $C_4$, $C_5$ are constants that depend on $\normf{\mathcal{C}^{1+\gamma}_b}{\sigma}$, $k$, $\gamma$ and $T$. We recall that $\theta\mathbin{\vcentcolon=} 1-\frac{3}{k}+\frac{\gamma}{2}$, so that $\theta>1$, and we choose a constant $\theta_1\in \left(\frac{1}{2}-\frac{1}{k},\frac{1}{2}\left(\theta+\frac{1}{k}-\frac{\gamma}{2}\right)\right)$. We have \begin{align} \label{eq:K'_theta} \frac{\left|\sum_{i=0}^{N-1}\Psi_{\tau_i,\tau_{i+1},t}(a)\right|}{\abs{t-s}^{\theta}}\leq K'_\theta, \end{align} where \begin{multline} \label{eq:K'series} K'_\theta\mathbin{\vcentcolon=} 4\sum_{n\geq 0}L_n2^{n\theta_1} \sum_{n\geq 0}\left(S^{(1)}_n+S^{(2)}_n\right)2^{n(\theta-\theta_1)} \\ +\left(2\sum_{n\geq 0}M_n2^{n2\theta_1}+\left(2\sum_{n\geq 0} L_n2^{n\theta_1}\right)^2\right)\left(\sum_{n\geq 0}S^{(3)}_n2^{n(\theta-\theta_1)}\right). \end{multline} The constant $K'_\theta$ is a random variable in $\mathrm{L}^k(A)$.
Indeed, using \eqref{eq:LM}, \eqref{eq:sl} and our choice of $k$, $\theta_1$ and $\theta$, we check that the right-hand side of \eqref{eq:K'series} contains only convergent series in $\mathrm{L}^k(A)$. Setting $K\mathbin{\vcentcolon=} K_\theta+K'_\theta$ and using \eqref{eq:phitwoterm}, \eqref{eq:K_theta} and \eqref{eq:K'_theta}, we obtain that for all $s<t$ in $\bigcup_{n\in\mathbb{N}} D_n$, $\abs{\Psi_{s,t}(a)}\leq K\abs{t-s}^\theta$, with $K\in \mathrm{L}^k(A)$. By continuity of $(s,t)\mapsto \Psi_{s,t}(a)$, the above estimate extends to all $(s,t)\in\mathbb{T}_+^2$. This concludes the proof. \end{proof} \begin{proof}[Second proof of Theorem~\ref{thm:diffeo}] According to Proposition~\ref{prop:ito-davie-sol}, when $\sigma\in\mathcal{C}_b^{1+\gamma}$ with $\gamma\in (0,1)$, the Itô solution $X(a)$ is a D-solution associated to the Davie almost flow $\phi$. More precisely, $X\in\mathcal{P}[\phi,a,K]$, with a random constant $K$ that does not depend on $a$. We then conclude as in the first proof of Theorem~\ref{thm:diffeo}. \end{proof}
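As an elementary sanity check on the exponents (a remark we add for the reader; it is not part of the original argument), note that both constraints in Proposition~\ref{prop:ito-davie-sol} follow directly from choosing $k$ as the smallest integer with $k>\frac{6}{\gamma}$:
\begin{equation*}
\theta-1=\frac{\gamma}{2}-\frac{3}{k}>\frac{\gamma}{2}-\frac{3\gamma}{6}=0
\qquad\text{and}\qquad
\frac{2+\gamma}{2}-\frac{1}{k}-\theta
=\Bigl(1+\frac{\gamma}{2}\Bigr)-\frac{1}{k}-\Bigl(1-\frac{3}{k}+\frac{\gamma}{2}\Bigr)
=\frac{2}{k}>0,
\end{equation*}
so that $\theta>1$ and the series bounding $\normf{L^k}{K_\theta}$ in the proof above converges geometrically.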
\section{Agent's Domain Specific Language (DSL)} \label{sec:dsl} This section describes the logical form of each action pictorially. We support three dialogue types: HUMAN\_GIVE\_COMMAND, GET\_MEMORY and PUT\_MEMORY. We support the following actions in our dataset: Build, Dance, Get, Spawn, Resume, Fill, Destroy, Move, Undo, Stop, Dig and FreeBuild. In Figure \ref{fig:event_fig}, we represent an event in the agent's grammar and DSL: \begin{figure}[ht] \centering \includegraphics[width=16cm]{images/event.png} \caption{The agent's DSL showing the structure of an event.} \label{fig:event_fig} \end{figure} In Figure \ref{fig:action_fig}, we show a full pictorial representation of actions in the agent's DSL: \begin{figure} \centering \includegraphics[width=16cm]{images/action.png} \caption{The representation of an action in the agent's DSL.} \label{fig:action_fig} \end{figure} Filters add a great deal of expressiveness to the agent's grammar; we show a representation of filters in Figure \ref{fig:filters_fig}. \begin{figure} \centering \includegraphics[width=16cm]{images/filters.png} \caption{The representation of filters in the agent's DSL.} \label{fig:filters_fig} \end{figure} \section{Data Generation Interface - Human Intelligence Task (HIT)} \label{sec:hit} This appendix describes the interface used by crowd-sourced workers to interact with the Droidlet agent and generate data by issuing commands. Figure \ref{fig:instructions} shows the instructions popup, which is the first thing that the worker sees when starting the HIT. The instructions are paginated to reduce each section to a digestible amount of content. \begin{figure} \centering \includegraphics[width=16cm]{images/hit_instructions.png} \caption{The instructions given to crowd-sourced workers for completing the agent interaction HIT.} \label{fig:instructions} \end{figure} Figure \ref{fig:start_hit} shows the view of the HIT page that workers see at the beginning of the task. There is a prompt in the chat to start the clock (each interaction is a minimum of five minutes), and there is a prompt superimposed over the voxel world window indicating that users need to click once in that window in order for the voxel world to render. The "stoplight" performance score is 0 out of 10 at the start of the HIT, and no feedback is available yet. The instructions, which were shown in a popup previously, are available for review in the dropdown at the top of the page. The agent capabilities, however, are always available for easy reference just to the left of the interaction window. \begin{figure} \centering \includegraphics[width=11cm]{images/start_hit.png} \caption{View of the HIT page at the beginning of the task.} \label{fig:start_hit} \end{figure} Figure \ref{fig:status} shows two of the status messages that workers see after submitting a command.
These status messages are available so that the worker is not confused about what is happening at any given time, and can more reliably identify if there is a bug or the agent has frozen. The four status messages that are shown after every command are, in order: "sending command", "command received", "assistant thinking", and "assistant is doing the task". The first is cleared when the agent acknowledges having received the command. The second is cleared after 500ms. The third is cleared after the NSP has parsed the command. The fourth and final status is cleared after the agent has completed the task, if it knows how. After the fourth status message is cleared, the next UI screen that appears is the error routing screen, which the user must progress through before being allowed to issue another command. \begin{figure} \centering \includegraphics[width=11cm]{images/agent_thinking.png} \includegraphics[width=11cm]{images/agent_working.png} \caption{Status update messages given to workers as the agent is processing instructions and performing the task. The worker retains the ability to issue a "stop" command while the agent is working.} \label{fig:status} \end{figure} Figure \ref{fig:nlu_error} and Figure \ref{fig:task_error} show the error marking flows after the agent processes a command containing an NLU error and a non-NLU task error, respectively. Correct error marking is critical for routing each piece of data to the appropriate annotator. After completing this decision tree, presented one question at a time, the worker is returned to the original interaction window shown in Figure 1. \begin{figure} \centering \includegraphics[width=11cm]{images/nlu_error1.png} \includegraphics[width=11cm]{images/nlu_error2.png} \caption{Error routing flow for a command that contains an NLU error.} \label{fig:nlu_error} \end{figure} \begin{figure} \centering \includegraphics[width=11cm]{images/task_error1.png} \includegraphics[width=11cm]{images/task_error2.png} \includegraphics[width=11cm]{images/task_error3.png} \caption{Error routing flow for a command that does not contain an NLU error but which the agent does not complete correctly (in this case a perception error).} \label{fig:task_error} \end{figure} \section{Introduction} Present-day machine learning (ML) research prioritizes end-to-end learning. Not only are end-to-end models able to achieve excellent performance on static tasks, but there is also a growing literature on how to adapt pre-trained networks to new tasks, and large pre-trained models can have impressive zero-shot performance on unseen tasks. In the setting of embodied agents, this manifests as agents actualized as monolithic ML models, where inputs to the model are the agent's perceptual sensors, and the model's outputs directly control agent actions. There are now a number of environments designed for the training of end-to-end embodied agents \cite{beattie2016deepmind, savva2019habitat, guss2019minerl, petrenko2021megaverse}, and there is hope (and some evidence) that the same sort of transfer and adaptability seen in language and vision models will carry over to the embodied agent setting. Nevertheless, agents implemented as fully end-to-end ML models are rare in production systems (or in real-world embodied agents, a.k.a. robots).
While this is in part a symptom of the rapid improvement and scaling in the literature and the lag in technology transfer, production systems require performance and safety guarantees that are still not easily obtainable from end-to-end ML models, and they must be maintainable by human engineers. On the other hand, it is difficult for pipelined agents to learn from experience once deployed. Instead, human engineers design a module, collect and collate data for it, train the appropriate ML model, and then deploy it. Thus the agent's abilities don't scale directly with the experience it receives, but rather with the amount of human power that can be brought to bear in building the modules. To somewhat oversimplify, engineers trade off ML scalability (the ability to learn new things through interaction, without engineering investment) for modularity, serviceability, and interpretability. This work is a case study of automating self-improvement via interactions with people in a {\it pipelined} ML-powered agent. The agent consists of a set of modules, some of which are learned, and others heuristic. The agent is not ``end-to-end'' in the ML sense, but end-to-end interaction is a vital part of the agent's learning mechanism. Through appropriate UX (user experience) design, crowd workers are able to assign credit to module errors {\it without} knowledge of the architecture of the agent. Using this, we automate a loop of human interaction, credit-assignment, module-data-annotation, model-retraining and re-deployment that successfully improves a semantic parsing module over multiple rounds of re-deployment. We thus give evidence that it is possible to keep modularity without giving up ML scalability in this setting. \begin{figure} \centering \includegraphics[width=15cm]{images/dashboard.png} \caption{An image of the dashboard user interface seen by crowd workers in production.} \label{fig:world} \end{figure} \section{Setting and Methods} \subsection{The setting} We describe the agent architecture and the world in which it lives. \subsubsection{World} The agent is embodied in a three-dimensional voxel world. Each voxel is either empty space or an impassable block of material. Movement is possible in any direction, as long as the target voxel is unoccupied, and the agent moves in discrete steps of size one voxel. The agent can also turn to look in any direction, so its pose can be represented by an $(x, y, z, \text{pitch}, \text{yaw})$ tuple. In addition to being able to act by changing its body or head position, the agent can point at rectanguloid regions of space (by visibly flashing them), and can ``speak'' in text. The agent can also place blocks of various colors, or destroy them. A human player co-occupies the world with the agent. The human player's pose is also determined by an $(x, y, z, \text{pitch}, \text{yaw})$ tuple. The human player can also speak in text to the agent, and the agent can see the human player's pose (including the pitch and yaw, allowing it to determine what the human is looking at). The human can also place and destroy blocks. See the interface in Figure \ref{fig:world}. \subsubsection{The agent} We use a Droidlet agent \cite{pratik2021droidlet}. The agent's perceptual input includes its own pose, the player's pose, the location and type (i.e. material) of each block in space, and the chat history. It is equipped with heuristic perceptual methods to recognize connected components of blocks and the local ground plane.
It also makes use of a BERT-based semantic parsing model, further described in Section \ref{sec:parser}, as its natural language understanding (NLU) ``perception''. The agent also has heuristic, scripted ``Tasks'' that allow execution of atomic programs like movement to locations in space, re-orienting pose, pointing, or placing blocks. The agent also has limited, scripted dialogue capabilities (also implemented as Tasks) to ask for clarification when needed. The parameters of these Tasks are provided by a ``Controller'' module that inputs a partially specified program in the agent's domain specific language (DSL), either from the output of the NLU module or from the agent's intrinsic behaviors, and, using the agent's memory system, fully specifies the program. See \cite{pratik2021droidlet} for more details. \subsection{NLU model details} \label{sec:parser} The agent uses a neural semantic parser (NSP) to convert commands from players into partially specified programs in the agent's DSL; these are fully specified into executable Tasks in the agent's interpreter, see \cite{pratik2021droidlet} for details. The neural semantic parser is the ML module that was improved over the course of our experiments. The agent's DSL is similar to the one described in \cite{srinet2020craftassist}, using the same top-level commands (Move/Dance, Build/Copy/Destroy/Dig, Stop/Resume), but the children of these have been expanded. For example, a ``Copy'' top-level command might take a ``ReferenceObject'' (corresponding to some object in the world) as a child, and the possible queries to specify that ReferenceObject have been expanded from \cite{srinet2020craftassist}. The full grammar is included in the supplemental material, and some examples are displayed in Figure \ref{fig:lf_examples}. The architecture of the agent's semantic parsing model is similar to the one described in \cite{srinet2020craftassist}. It is an encoder-decoder seq2seq model where the encoder is finetuned from BERT \cite{Devlin2019BERTPO} using \cite{huggingFace}, and the decoder is trained from scratch. In order to use a sequence-based decoder, we linearize the target logical forms in depth-first order. \begin{figure}[h!] \fontsize{7pt}{8pt}\selectfont \begin{subcolumns}{0.48\columnwidth} \begin{subfigure}{0.48\columnwidth} \vspace{.5cm} \centering "dig a moat around the fort": \begin{lstlisting}[language=json] "action_sequence": [ {"action_type": "DIG", "location": { "relative_direction": "AROUND", "reference_object": { "filters": { "where_clause": { "AND": [{"pred_text": "has_name", "obj_text": [0, [5, 5]]}]} }}}, "schematic": { "filters": { "where_clause": { "AND": [{"pred_text": "has_name", "obj_text": [0, [2, 2]]}] }}}}] \end{lstlisting} \end{subfigure} \cr\noalign{\hfill} \begin{subfigure}{0.48\columnwidth} \centering "move to the left of the cube": \begin{ccr} \begin{lstlisting}[language=json] "action_sequence": [ {"action_type": "MOVE", "location": { "relative_direction": "LEFT", "reference_object": { "filters": { "where_clause": { "AND": [{"pred_text": "has_name", "obj_text": [0, [6, 6]]}] }}}}}] \end{lstlisting} \end{ccr} \end{subfigure} \vfill \begin{subfigure}{0.48\columnwidth} \centering "build a box": \begin{ccr} \begin{lstlisting}[language=json] "action_sequence": [ {"action_type": "BUILD", "schematic": { "filters": { "where_clause": { "AND": [{"pred_text": "has_name", "obj_text": [0, [2, 2]] }] }}}}] \end{lstlisting} \end{ccr} \end{subfigure} \end{subcolumns} \caption{Some examples of commands that can be parsed in the agent's DSL.
Fields of the form $[x, [y, z]]$, where $x$, $y$, and $z$ are numbers, are {\it spans} of text (e.g. the $y$th through $z$th tokens of the $x$th text input). Fields with the key "filters" correspond to queries to the agent's database. \label{fig:lf_examples}} \end{figure} \subsection{Learning from Humans} Human workers are connected with an agent. They interact with it through their web browser, where we render the agent and a representation of the world. Interaction data is gathered through crowd-sourced tasks where the workers are instructed to issue free-form commands to the agents, using a category of actions from a suggested list of the agent's capabilities (e.g. 'build', 'destroy'). The workers are given no other instructions about what type of commands to give, other than to be creative and diverse. After each command, workers are prompted as to whether the task was carried out correctly end-to-end, whether the command was correctly understood, and whether the agent correctly perceived the objects that the workers referred to. If the player marks that the command was not correctly understood, then the command and the agent's parse are recorded as an NLU error. These marked errors are then routed to {\it another} set of qualified crowd workers who write the ground truth parses for these commands. These parse annotation tasks are further distributed into small tasks consisting of 1-3 questions that determine the annotation of a particular node in the parse tree. These annotations are then used as training data to improve the NLU model offline. The retrained model is then re-deployed before the next set of human interactions. Figure \ref{fig:hitl} shows a diagram of the agent learning pipeline. The entire pipeline operates autonomously, from launching interaction jobs, to error annotation, to model retraining, to re-deployment. \begin{figure}[ht] \centering \includegraphics[width=16cm]{images/hitl.png} \caption{A diagram showing the lifelong learning process of the Droidlet agent. The logical form above has been simplified for clarity.} \label{fig:hitl} \end{figure} \subsubsection{Challenges of crowd-sourcing Human-Agent Interactions} Working with humans in the loop involves challenges that go beyond model architectures and learning algorithms. Apart from making tools that are effective for cooperative people, it is necessary to plan for annotators that will sometimes behave erratically, or even adversarially. A major issue (common to many crowd-worker deployments) has been dealing with workers who, covertly or overtly, try to cheat their way through the task. Cheating in this case could mean not doing the task at all, trying to game our qualification criteria, or simply doing the bare minimum to pass but not engaging with the task. The combination of the following methods has allowed for very high quality data collection: \begin{itemize} \item Workers must first qualify for our interaction task by answering a simple set of questions to prove they are not a bot and are capable of reading the instructions. \item We disable the submission button until a basic list of criteria have been met, and we don't advertise what those criteria are beyond the task instructions. \item By offering performance incentives, we make it more profitable not to cheat than to cheat. \item We blacklist workers who repeatedly perform poorly. \item We ask workers to reflect on their own performance, which facilitates perspective-taking and improved performance on repeated iterations of the task.
\cite{10.1145/2145204.2145355} \end{itemize} Even workers who are not acting adversarially can present challenges to development. They may not understand the instructions if they are not presented clearly, their knowledge of the English language may vary, and they may not have a strong aptitude with technology for navigating the interface. These constraints necessitate a focus on usability and user testing throughout the life cycle of the project. While the human factor presents varied challenges to the development of the agent interface, it has also created a continuous feedback cycle that facilitates overall agent improvement. We have many users of the system issuing thousands of commands, some of which cause the agent to crash or behave in unexpected ways. We would not discover these edge cases very quickly on our own. \subsubsection{Error Routing} \label{routing} There are several types of issues that can cause the agent to fail to execute a command. One example is that the user asks the agent to do something that is not expressible in its DSL (``let's play chess'') or that is in its DSL, but part of the command refers to something the agent does not know (``build a camel'', where the agent does not know what a ``camel'' is). The agent can also fail because its visual perception module did not recognize an object, because it did not correctly retrieve the right information from memory, or, the focus of this paper, because the NLU model failed to accurately parse the command. Differentiating between these types of failure is essential for being able to route the correct data to the correct annotator. In normal operation, only commands that are marked as containing an NLU error are sent to be annotated, so that ground truth for those commands is added to the data set. This process of differentiating between types of errors is executed using a decision tree that is presented to the worker one question at a time. In the Appendix there are examples of this decision tree in Figure \ref{fig:nlu_error}, which represents the correct error marking flow after an NLU error, and Figure \ref{fig:task_error}, which represents the error marking flow after a non-NLU task error. \section{Related Work} There is a large literature on human-in-the-loop machine learning; see \cite{hitl_survey} for a survey. Our setting is an embodied agent with a language interface. There is existing work showing improvement after multiple rounds of re-deployment with dialogue agents, for example \cite{hancock2019learning, shuster2021dialogue, kiela2021dynabench}. Prior work building towards sophisticated interactive tools for ``machine teaching'' \cite{simard2017machine}, where ML-naive users are able to guide model training towards high accuracy and coverage, has been considered in the literature, and many such tools exist as deployed services, for example \cite{ratner2017snorkel} (commercialized at \url{https://snorkel.ai/}) or \url{https://scale.com/}. These are superior to the re-deployment loop described in this work in the sense that the model re-training occurs ``in-session'', and the machine teacher can immediately see the results of their annotations and adjust accordingly. Furthermore, these also have tools for automatically generating labeled data from rules or automating data augmentation. However, the work described in this case study is complementary to these, in that it focuses on automating the end-to-end data-collection and retraining of ML models that are important internal components of an embodied agent.
We give evidence that fully modular ML systems will be able to self-improve even if gradients cannot pass from one part of the system to another. In future work, we hope to combine our system with responsive in-session learning as described in these services. Our work is inspired by \cite{wang2017naturalizing}, where multiple rounds of users build up a semantic parser for a voxel world editor. In \cite{shah2021minerl} the authors propose a competition to train embodied agents in a voxel world through language descriptions. Our work is also related to \cite{suhr2019executing}, where the authors build an interactive environment in which embodied players and agents (playing the role of a language-issuing ``leader'' with full observability or a faster-moving ``follower'' with partial observability) collaborate via natural language to collect cards by moving to their spatial locations. A followup \cite{kojima2021continual} is especially relevant; in that work they show how multiple rounds of learning can continue to improve the language generation capabilities of a ``leader'' model. In addition to the embodied agents and players, our work shares with \cite{kojima2021continual} multiple rounds of data collection and the use of player feedback after ``execution'' to label examples. However, the key difference is that in \cite{kojima2021continual} the agent is a single ML model, whereas in this work, we aim to show that credit can be assigned to different components in a modular system, the data for the component can be annotated, and the component re-trained without any engineer intervention. There are several works showing how humans can interactively teach robotic agents, for example \cite{saxena2014robobrain, paxton2017costar, mandlekar2018roboturk, Cabi2019a, mandlekar2020human}. In \cite{saxena2014robobrain}, the authors demonstrate large-scale crowd-sourcing of data for perceptual and knowledge-base components of a robotics system. In \cite{mandlekar2018roboturk, mandlekar2020human} crowd-workers are connected with robotic manipulators to demonstrate movements or parts of movements. COSTAR \cite{paxton2017costar} is a modular system for teaching robots to carry out tasks using behavior trees. Our work is similar to COSTAR in that it is built on a modular system with perception decoupled from action generation; but in this work we focus on the infrastructure for crowd-sourcing annotations, rather than mechanisms for live human teaching. Finally, our work builds on the ideas of \cite{carlson2010toward,mitchell2018never}. Our hope is to demonstrate progress towards {\it embodied} incarnations of these. \section{Results} \subsection{NLU Error Collection} In Table \ref{tab:errorfunnel} the results of the NLU error generation funnel are reported. In total, over the course of the experiments, we collected $18,163$ de-duplicated commands. In early runs, we found that training only on new data where the NLU model failed led to feedback effects. We updated our protocol to re-train using {\it all} de-duplicated commands at each iteration (including the ones the model correctly parsed). We leave methods for balancing the cost of labeling against distributional stability for future work. Even though we annotated all of the commands on later re-deployments, we calculated the accuracies of workers in routing errors; in ongoing and future work we expect to have several ML models active in the agent.
Workers are relatively precise: 89\% of the time that they mark a command as resulting in an NLU error, it turns out to be a true error. However, we estimate that only 43\% of NLU errors are marked. This is an estimate because it can only be calculated for commands for which there has ever been an annotation. Future work on interaction task design will attempt to address this discrepancy. \begin{table} \begin{center} \begin{tabular}{||l c||} \hline \textbf{Pipeline Stage} & \textbf{Number of Commands} \\ \hline\hline All Commands & 22,685 \\ \hline De-duplicated, Valid Commands & 18,163 \\ \hline Marked Agent Errors & 7,461 \\ \hline Marked NLU Errors & 2,559 \\ \hline Marked NLU Errors Successfully Annotated & 2,403 \\ \hline Marked "True" NLU Errors & 2,138 \\ \hline \textit{All Known NLU Errors} & \textit{4,944} \\ \hline \end{tabular} \end{center} \caption{The number of commands for which each row description applies. "All Commands" refers to all commands from the data presented here. "De-duplicated, Valid Commands" refers to the subset of 'All Commands' that are unique and ask the agent to perform a task within its capabilities. "Marked Agent Errors" refers to the number of times a worker indicated, after issuing a command and observing the resulting agent behavior, that the agent failed to perform the task. "Marked NLU Errors" refers to the subset of 'Marked Agent Errors' for which workers indicated the agent did not understand the command, based on a report of the NSP output. "Marked NLU Errors Successfully Annotated" refers to the subset of 'Marked NLU Errors' for which a ground truth logical form was successfully added to the data set through the annotation process. The remainder were outstanding at the time of model retraining and redeployment but remain accessible for later use. "Marked 'True' NLU Errors" refers to the subset of 'Marked NLU Errors Successfully Annotated' for which the ground truth annotation varied from the NSP inference. The ratio of the two previous values forms the worker error marking precision. "All Known NLU Errors" refers to the subset of 'De-duplicated, Valid Commands' for which a) a ground truth logical form exists in the data set and b) the ground truth annotation varied from the NSP inference. The ratio of 'Marked "True" NLU Errors' to 'All Known NLU Errors' forms the estimate of worker error marking recall.} \label{tab:errorfunnel} \end{table} \subsection{NLU model improvements} \label{sec:nlu_improve} We have run 10 iterations of the full interaction $\rightarrow$ routing $\rightarrow$ annotation $\rightarrow$ retrain pipeline. The first 5 iterations were run 16 weeks ago over a period of 3 weeks. In these, we did not re-deploy the NLU model after an iteration. For the next 5 iterations, taken over the last two weeks, we redeployed the re-trained model at each iteration. In order to measure the improvements of the NLU model, we randomly split each new tranche of data from the iterations into train, validation, and test sets. We then build a sequence of training data sets R$_n$, which are the union of the first $n$ training sets, V$_n$, which are the union of the first $n$ validation sets, and T$_n$, the union of the first $n$ test sets. Here R$_0$ is taken from \cite{srinet2020craftassist}; this is used to train the initial deployed model. For each tranche of data $n$, we compare three models. The first is the baseline, trained on R$_0$.
The next is the continually-learned model, trained on R$_n$ for 100 epochs (trained the same way as the model that was used for obtaining R$_{n+1}$). Finally, we take the continually-learned model trained on R$_n$, and then finetune it for 10 epochs on R$_0$; we call this the ``re-biased'' model. We repeated the model training 5 times for each tranche with different random seeds. Our main results are shown in Figure \ref{fig:nlu_improve}. The colored lines represent mean values of model accuracy across all 5 experiments and the shaded error bands represent the standard error. In the left panel of Figure \ref{fig:nlu_improve}, we show the accuracy of the model trained on R$_n$ (all the data up to the $n$th interaction job) vs. the original baseline, all tested on the final test data T$_{10}$ (the union of the test sets from each tranche). The $x$ axis is the total number of training examples used for that model, arranged in the sequence they were obtained, and the $y$ axis is accuracy, where accuracy is taken to be an exact match of the annotated parse. We can see a steady improvement on the final test set in the continually learned models over the baseline. The re-biased model also improves, although not quite as much. In the right panel of Figure \ref{fig:nlu_improve}, we show the results of training on R$_n$ and testing on T$_0$ (the initial test data, from \cite{srinet2020craftassist}). The continually-learned semantic parsing models perform worse on T$_0$ even though they are trained with a larger amount of data; but this is not surprising, as the collection procedure for the base data R$_0$ was different from that for R$_i$ with $i>0$, and so the distribution is different. Specifically, most of the commands in R$_0$ were collected by asking crowd-workers what they might ask an agent to do; whereas in this work, the crowdworkers are actually connected to the agent, and interact with it, giving multiple commands in each session. The re-biased model manages to keep its performance almost at the level of the baseline (while improving on the new data). \begin{figure} \begin{center} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.0\textwidth]{images/nlu_plot_AnVN_A0VN.png} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1.0\textwidth]{images/nlu_plot_AnV0_FnV0.png} \end{minipage} \end{center} \caption{On the left: model accuracies (exact match of parse) on T$_{10}$ (union of all collected test data). On the right: accuracies of all models on T$_0$ (base test data collected in \cite{srinet2020craftassist}). The $x$ axis is the number of training examples. Red is continually learned, Blue is re-biased, and Gray is baseline. The continually learned model does worse over time on the base-test dataset (which was collected with a different procedure), but improves on the full data. Re-biased improves on the full data (but less than non-re-biased) without losing on the base data. See Section \ref{sec:nlu_improve} for details.\label{fig:nlu_improve}} \end{figure} \subsection{Annotator Experience Improvements} The NLU model improvement rate is a function of the quantity and diversity of the NLU system errors, and is constrained to the first order by the resources available to fund interactions with the agent. Therefore, the goal of our UI/UX (user interface and user experience) research is to efficiently generate and correctly mark as many high quality errors as possible in each interaction, and a focus on interface usability is critical to this end.
We have been guided by standard usability heuristics, the most impactful of which are listed below. \begin{itemize} \item \textbf{Aligning With Design Standards} - Utilizing UI components and affordances that match user expectations, as well as reducing overall visual clutter, helps reduce the cognitive load of using the interface. \item \textbf{Forced Choices} - Providing clear, blocking choices for important UI tasks rather than relying on the user to recognize a branch in the workflow and select the appropriate option. \item \textbf{Visual Feedback} - Implementing clear and easy-to-understand visual indicators of agent status, as well as of the quality of the interaction (number and diversity of commands), helps the workers understand our expectations better. \item \textbf{Performance Incentives} - Shifting to paying workers a lower base rate with incentives for good performance both lowers the cost of data collection on a per-error basis and results in higher worker pay. \end{itemize} \begin{figure} \centering \includegraphics[width=4cm]{images/exp3-cmds.png} \includegraphics[width=4cm]{images/exp4-cmds.png} \includegraphics[width=4cm]{images/exp3-stoplight.png} \includegraphics[width=4cm]{images/exp4-stoplight.png} \caption{The charts above show the number of commands workers issued per interaction task (left two charts) and the stoplight performance score described in Section \ref{exp3} (right two charts) before (first and third charts) and after (second and fourth charts) issuing performance incentives. The red line indicates the authors' target for each metric. There are fewer data in the third chart, because a recording bug in UI/UX Experiment 3 caused half of these data to be lost.} \label{fig:incentives} \end{figure} While there is not an obvious baseline of usability for a specific interface, Figure \ref{fig:efficiency} shows the cost efficiency improvements for each iteration as the project progressed, providing a strong validation of the effort spent improving task usability. Over the course of the four UI/UX experiments listed, the cost of collecting a single NLU error fell by 71\%. Below is more detail on the nature of each of the four UI/UX experiments. The experiments are cumulative, meaning each experiment includes the changes made in the previous one. \subsubsection{Experiment 1 - Clarity and Verbosity} The goal of the first experiment was to reduce visual clutter, verbosity, and text complexity. \cite{hirth2020taskprefs} finds that "Incomprehensible Instructions" are the single most frustrating aspect of task design when present. Experiment 1 reduced the number of instruction words by almost half, and paginated the remainder so workers were never reading more than a few sentences at a time. The instructions are attached in Appendix B, Figure \ref{fig:instructions}. After the initial read during the task, instructions are hidden but available through a drop-down mechanism to further reduce clutter on screen. The information from the instructions most relevant to producing good interaction data is copied outside the instructions window, immediately next to the interaction interface for easy reference. \subsubsection{Experiment 2 - Visual Feedback and Forced Choices} The purpose of the second experiment was to ensure that the worker is not confused about the agent status or what to do next. After each command, the agent must receive and interpret it, then potentially plan and carry out an action or set of actions, as well as respond to the user if appropriate.
Experiment 2 introduced messages to report the agent status periodically. The second change in this experiment was the introduction of the error marking decision tree described in Section \ref{routing} and shown in Figures \ref{fig:nlu_error} and \ref{fig:task_error} in the Appendix. If error marking is a passive call-to-action, workers may not mark effectively, because they may not remember to mark erroneous commands or may be eager to move on with the task. By forcing the worker to decide one way or the other before continuing, a much higher percentage of agent errors is captured. \subsubsection{Experiment 3 - Align with Design Standards} \label{exp3} Human-Computer Interaction research has shown that familiarity with an interface reduces cognitive load, and therefore increases task accuracy and reduces task completion time. For a review of this concept, see \cite{HOLLENDER2010hci}. Experiment 3 replaced the chat interface with a UI that closely resembles one found on a cell phone or in the help window of a website, with the purpose of better aligning with workers' existing mental model of a chat interface. This experiment also introduced a new component to the interface: a stoplight that serves as a feedback indicator of overall task performance to the worker. If the light is red, the worker knows that their performance is unsatisfactory, and so forth. The metric used to determine the stoplight color is a weighted average of the logs of: the number of commands issued, the diversity of commands in-session (average word edit distance between each pair of session commands), and the average creativity of the commands (word edit distance compared to all previously issued commands). The weights and thresholds driving the stoplight indicator were empirically tuned to align with the results a worker should obtain by engaging in good faith with the task. \subsubsection{Experiment 4 - Performance Incentives} The final experiment in this series was meant to operationalize the stoplight introduced in the previous section by offering performance incentives based on the "stoplight score", i.e. the score out of 10 that determines the stoplight color. In this experiment, workers receive a lower base pay in addition to a bonus payment after completion based on their score. Workers can see their expected bonus reported in real time after they issue each command. This change had several notable effects. Firstly, nearly all of the workers achieved a score in the "green" performance band, compared to the previous experiment where approximately 2/3 did, as shown in Figure \ref{fig:incentives}. Second, while the cost per interaction task went up, and therefore worker compensation per unit time went up, data generation efficiency measured in NLU errors per dollar actually rose. Furthermore, workers responded to the change positively, providing qualitative feedback that in addition to increasing their compensation, the change also improved the enjoyability and clarity of the task. This is evidence that further task gamification and/or incentive alignment may be a fruitful and mutually beneficial direction for future research.
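To make the stoplight scoring described in Experiment 3 concrete, the sketch below shows one way such a score could be computed. This is an illustration we add here, not the production implementation: the weights, the normalization onto a 0--10 scale, and the use of the minimum edit distance for the creativity term are all assumptions, since the text only specifies a weighted average of logs of command count, in-session diversity, and creativity. \begin{lstlisting}[language=Python]
import math
from itertools import combinations

def word_edit_distance(a, b):
    """Levenshtein distance between two commands, computed over words."""
    x, y = a.split(), b.split()
    prev = list(range(len(y) + 1))
    for i, wx in enumerate(x, 1):
        curr = [i]
        for j, wy in enumerate(y, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (wx != wy)))   # substitution
        prev = curr
    return prev[-1]

def stoplight_score(session_cmds, past_cmds, weights=(4.0, 3.0, 3.0)):
    """Weighted average of logs of command count, in-session diversity,
    and creativity vs. previously issued commands (weights hypothetical)."""
    n = len(session_cmds)
    if n == 0:
        return 0.0
    pairs = list(combinations(session_cmds, 2))
    diversity = (sum(word_edit_distance(a, b) for a, b in pairs) / len(pairs)
                 if pairs else 0.0)
    creativity = (sum(min(word_edit_distance(c, p) for p in past_cmds)
                      for c in session_cmds) / n if past_cmds else 1.0)
    w1, w2, w3 = weights
    raw = (w1 * math.log(1 + n) + w2 * math.log(1 + diversity)
           + w3 * math.log(1 + creativity)) / (w1 + w2 + w3)
    return min(10.0, 10.0 * raw / math.log(11))  # clip onto a 0-10 scale
\end{lstlisting} A score computed this way would then be thresholded into the red/yellow/green bands, with the weights and thresholds tuned empirically as described above.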
\begin{figure} \begin{center} \begin{tabular}{||l c c||} \hline \textbf{Experiment Name} & \textbf{Number of Tasks Completed} & \textbf{Data Generation Efficiency Ratio} \\ \hline\hline Baseline & 17 & 1.0 \\ \hline Exp1 - Instruction Clarity and Verbosity & 106 & 1.1 \\ \hline Exp2 - Visual Feedback and Forced Choices & 197 & 2.5 \\ \hline Exp3 - Design Standards Alignment & 191 & 3.1 \\ \hline Exp4 - Performance Incentives & 150 & 3.5 \\ \hline \end{tabular} \end{center} \caption{Table showing the efficiency of data collection (NLU errors collected per \$) as UI/UX improvements were made, reported as a ratio between the efficiency of that experiment and the baseline value before any UI/UX improvements were made. Data generation efficiency is computed as an average over all tasks completed in that experiment. UI/UX experiments are cumulative, not independent (each includes the changes of the previous).} \label{fig:efficiency} \end{figure} \section{Discussion} In this work we have given an example of an ML-powered pipelined agent that uses end-to-end interaction as a crucial part of its learning mechanism, and demonstrated that it can improve over multiple rounds of re-deployment. This is made possible in part through a UX that allows naive crowdworkers with no knowledge of the system architecture to route errors and complete complex annotations in an assembly-line style. In future work, we would like to extend the approaches discussed in this work to agents with learnable perceptual systems and learnable Task executors, or even learnable memory and Controller modules. More generally, we think these approaches will be valuable even in the context of works like \cite{dalmia2019enforcing, veniat2020efficient} that build modular ML systems that allow automatic credit assignment, as hybrids that empower humans to teach the system at the level of its modules while automatically assigning credit when such humans are unavailable could be more powerful than either end-to-end or pipelined systems. \section{Model Training Details} \label{sec:model_training_detail} For the standard model retraining job, we train the models for 100 epochs (on average, until the lack of improvement on $V_n$). The batch size is set to 24 in order to fit into a single 16GB GPU. For the Transformer decoder learning rate we choose between $5\times10^{-7}$, $10^{-6}$ and $5\times10^{-6}$, while for the encoder learning rate we choose between $0$ and $10^{-6}$. This gives a total of 6 different combinations of hyperparameters for each model training job; we validate on $V_n$. All other parameters are the defaults from \cite{huggingFace}. For model re-biasing, we train the models on the original training dataset for 10 epochs (this is roughly until the lack of improvement on $V_0$, the initial validation set from \cite{srinet2020craftassist}).
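For concreteness, the learning-rate sweep just described can be written as a small grid search. The sketch below assumes a \texttt{train\_and\_eval} callable as a hypothetical stand-in for one retraining job returning validation accuracy on $V_n$; the grid values, epoch count and batch size are taken from the text.
\begin{verbatim}
from itertools import product

DECODER_LRS = [5e-7, 1e-6, 5e-6]
ENCODER_LRS = [0.0, 1e-6]

def grid_search(train_and_eval, epochs=100, batch_size=24):
    # 3 decoder x 2 encoder learning rates = 6 training jobs in total.
    best = None
    for dec_lr, enc_lr in product(DECODER_LRS, ENCODER_LRS):
        val_acc = train_and_eval(decoder_lr=dec_lr, encoder_lr=enc_lr,
                                 epochs=epochs, batch_size=batch_size)
        if best is None or val_acc > best[0]:
            best = (val_acc, dec_lr, enc_lr)
    return best  # (best V_n accuracy, decoder lr, encoder lr)
\end{verbatim}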
\section{Introduction} Within the great variety of quantum technologies under development, mesoscopic superconducting circuits have emerged offering a rich breeding ground to test theoretical proposals and study new physics \cite{Introduction to QEM circuits,Microwave Photonics 2017}. The Josephson effect plays a fundamental role in these circuits since Josephson junctions (JJs) are naturally non-dissipative and nonlinear elements \cite{Frontiers of the Josephson effect}. Thanks to the improvement of quantum technologies in superconducting circuits, it has become possible to study particular stable configurations of Josephson arrays \cite{Fazio 2001,Fazio 2012} and their dynamics. A recent line of research within quantum simulation is the emulation of synthetic gauge fields \cite{Synthetic gauge fields,Chiral ground state currents,Gauge potentials}, which overcomes several difficulties entailed by the direct application of real magnetic fields in superconducting circuits. The presence of a magnetic field, real or synthetic, implies that the system breaks discrete symmetries such as parity (P) or time reversal (TR). We say a system has time-reversal symmetry (TRS) if, evolving forward in time and then reversing the evolution for the same amount of time, the system ends up in its initial state. In this context, reciprocity is defined as the invariance of a system under the exchange of source and observer \cite{Electromagnetic nonreciprocity,A. Kamal thesis}. Thus, breaking TRS in a controlled way and obtaining a nonreciprocal response is crucial in the design of quantum communication gadgets. For instance, breaking reciprocity in Josephson junction circuits can give rise to superconducting devices such as isolators, gyrators, circulators, directional amplifiers and wave mixers. Isolators and circulators are necessary elements in most superconducting circuit experiments, both to shield the circuit from external noise sources and to extract the signals out of the circuit. Magnetic nonreciprocal devices based on the Faraday effect involve centimetre-sized magnets, hindering the scalability of the circuits \cite{Microwave gyrator}. Nonetheless, there have been recent developments in the search for scalable, low-noise, wide-bandwidth, wide-dynamic-range nonreciprocal devices working at cryogenic temperatures \cite{A. Kamal 2011}. They serve the purpose of qubit readout in quantum computation, quantum simulation and quantum sensing, and provide us with new capabilities. These proposals include a graph-based scheme to optimise nonreciprocal circuits \cite{Graph based analysis}, quantum Hall effect based gyrators and circulators \cite{Hall effect circulator,Quantum Hall circulator,Self impedance circulator}, parametric and traveling-wave parametric amplifiers \cite{Byeong 2012,near_quantum_limited_TWPA,Flux driven JPA,Low noise kinetic inductance,Optimizing Josephson ring modulator,Widely tunable parametric amplifier,M. H. Devoret 2010,Nonlinearities and parametric amp}, Josephson parametric converters \cite{A. Kamal 2013}, an interferometric Josephson isolator \cite{Baleegh Abdo 2019}, a field-programmable Josephson amplifier \cite{Lecoc 2017}, mechanical circulators \cite{Mechanical on-chip microwave}, reconfigurable circulators \cite{Nonreciprocal reconfigurable circuit,Reconfigurable Josephson circulator}, a passive circulator \cite{Passive on-chip}, and others \cite{Kerckhoff 2015}.
In this article, we study chiral states \cite{ChiralSpinStates} in a Josephson junction ring \cite{Quantum phase slips} as natural quantum states that break P and TR symmetries. We show that breaking time-reversal symmetry can be achieved with a Josephson ring, in the so-called transmon regime, coupled to input/output ports also through JJs. In a quenched dynamics simulation, we calculate the lifetime of these states and characterise the out-of-equilibrium properties of this setup. Inspired by Koch {\it et al.} \cite{Time reversal} and M\"uller {\it et al.} \cite{Mueller_2018}, we show that a circulating behaviour is realised by changing to a basis of common and differential input modes. Furthermore, a tunable directional coupler is proposed using the nonreciprocal features of the scattering matrix of three transmission lines connected to a JJ ring. \section{Chirality} \label{sec: Chirality} A superconducting node is described locally by a periodic degree of freedom $\{ | \phi \rangle \}$, where $\phi \in ( -\pi , \pi ]$ characterises the superconducting phase of a given superconducting island. Equivalently, the discrete conjugate variable $\{ | \tilde{n} \rangle \}$, where $\tilde{n} \in \mathbb{Z}$, characterises the number of Cooper pairs in the same superconducting island. These two local bases are related by a Fourier transform: $|\phi \rangle = \sum_{\tilde{n} \in \mathbb{Z}} e^{-i\phi \tilde{n}} |\tilde{n}\rangle$. In a triangular plaquette given by three superconducting nodes connected by three Josephson junctions, a complete basis for the three superconducting nodes is given by $\{ | \phi_{1} \rangle \otimes | \phi_{2} \rangle \otimes | \phi_{3} \rangle \}$, in the flux basis, or $\{ | \tilde{n}_{1} \rangle \otimes | \tilde{n}_{2} \rangle \otimes | \tilde{n}_{3} \rangle \}$, in the charge basis. Again, both bases are related by a Fourier transform: $ | \phi_{1} , \phi_{2} , \phi_{3} \rangle = \sum_{\{\tilde{n}_{1},\tilde{n}_{2},\tilde{n}_{3}\} \in \mathbb{Z}} e^{-i\phi_{1} \tilde{n}_{1}} e^{-i\phi_{2} \tilde{n}_{2}} e^{-i\phi_{3} \tilde{n}_{3}} |\tilde{n}_{1} , \tilde{n}_{2} , \tilde{n}_{3} \rangle$.
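As a quick numerical illustration of this Fourier relation (a sketch only; the charge cutoff \texttt{n\_max} is our truncation, whereas the exact relation runs over all of $\mathbb{Z}$), one can build the flux-basis state of a single node from a truncated charge basis and check that states with distinct phases become orthogonal as the cutoff grows:
\begin{verbatim}
import numpy as np

def flux_state(phi, n_max=200):
    # Coefficients <n|phi> = e^{-i phi n} on a truncated charge lattice.
    n = np.arange(-n_max, n_max + 1)
    return np.exp(-1j * phi * n)

v1, v2 = flux_state(0.3), flux_state(1.2)
dim = len(v1)
print(abs(np.vdot(v1, v1)) / dim)  # 1.0: normalised self-overlap
print(abs(np.vdot(v1, v2)) / dim)  # ~0: distinct phases decouple
\end{verbatim}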
If we consider the set of states with a fixed total charge $N=\tilde{n}_{1} + \tilde{n}_{2} + \tilde{n}_{3}$, in the flux basis they can be described by \begin{equation*} |N,\varphi_{2},\varphi_{3} \rangle =\sum_{\{n_{2},n_{3}\} \in \mathbb{Z}} e^{-i\varphi_{2} n_{2}} e^{-i\varphi_{3} n_{3}} |N - n_{2} - n_{3} , n_{2} , n_{3} \rangle. \end{equation*} From this set, we would like to characterise the subset of states that are invariant under the cyclic permutation of the three nodes $P_{123}$ right-handed (or $P_{132}$ left-handed), \begin{equation*} \begin{split} P_{123}|N,\varphi_{2},\varphi_{3} \rangle &= \sum e^{-i\varphi_{2} n_{2}} e^{-i\varphi_{3} n_{3}} | n_{2} , n_{3}, N - n_{2} - n_{3} \rangle \\ P_{132}|N,\varphi_{2},\varphi_{3} \rangle &= \sum e^{-i\varphi_{2} n_{2}} e^{-i\varphi_{3} n_{3}} | n_{3} , N - n_{2} - n_{3} , n_{2} \rangle \end{split} \end{equation*} These states are equivalent up to an overall phase, $|N,\varphi_{2},\varphi_{3} \rangle \sim P_{123}|N,\varphi_{2},\varphi_{3} \rangle \sim P_{132}|N,\varphi_{2},\varphi_{3} \rangle$, if $3 \varphi_{2} \in 2 \pi \mathbb{Z}$, $3 \varphi_{3} \in 2 \pi \mathbb{Z}$, and $\left( \varphi_{2} + \varphi_{3} \right) \in 2 \pi \mathbb{Z}$, which gives us just three states: \begin{equation*} \begin{split} |N,0,0 \rangle &=\sum |N - n_{2} - n_{3} , n_{2} , n_{3} \rangle \\ |N,\frac{2 \pi}{3},-\frac{2 \pi}{3} \rangle &=\sum e^{-i \frac{2 \pi }{3} n_{2}} e^{ i\frac{ 2 \pi }{3} n_{3} } |N - n_{2} - n_{3} , n_{2} , n_{3} \rangle \\ |N,-\frac{2 \pi}{3},\frac{2 \pi}{3} \rangle &=\sum e^{ i\frac{ 2 \pi }{3} n_{2}} e^{-i \frac{ 2 \pi }{3} n_{3} } |N - n_{2} - n_{3} , n_{2} , n_{3} \rangle \end{split} \end{equation*} It is straightforward to realise that under the action of a time-reversal or parity transformation, $|N,0,0 \rangle$ remains invariant, while $|N,\frac{2 \pi}{3},-\frac{2 \pi}{3} \rangle$ maps onto $|N,-\frac{2 \pi}{3},\frac{2 \pi}{3} \rangle$. The two non-trivial states under the action of these permutations are the only two chiral states in this setup. In fact, one can define a chiral operator \cite{ChiralSpinStates} $\chi = \frac{P_{123} - P_{132}}{2i}$ which changes sign under a $P$ or $TR$ transformation but remains invariant under the combination of parity and time reversal $(PT)$. In this way, a non-zero value of this operator signals $P$- and $T$-symmetry-breaking states. More concretely, the eigenvalues of the permutation operator for these three states are: $P_{123}\big|_{ |N,0,0 \rangle} =1$, $P_{123}\big|_{ |N,\frac{2 \pi}{3},-\frac{2 \pi}{3} \rangle} =e^{-i \frac{2 \pi }{3} N}$, and $P_{123}\big|_{ |N,-\frac{2 \pi}{3},\frac{2 \pi}{3} \rangle }=e^{i \frac{2 \pi }{3} N}$, which gives $\chi |N,\frac{2 \pi}{3},-\frac{2 \pi}{3} \rangle =- \sin{\left( \frac{2 \pi N}{3} \right)} |N,\frac{2 \pi}{3},-\frac{2 \pi}{3} \rangle$, $\chi |N,-\frac{2 \pi}{3},\frac{2 \pi}{3} \rangle = \sin{\left( \frac{2 \pi N}{3} \right)} |N,-\frac{2 \pi}{3},\frac{2 \pi}{3} \rangle$, and $\chi |N,0,0 \rangle = 0$. In fact, the phase acquired by the state under the permutation $P_{123}$ can be understood as a Berry phase. \begin{figure}[!] \centering \includegraphics[width=0.75\linewidth]{ring.pdf} \caption{ Circuit representation of the triangular plaquette given by three superconducting nodes connected by three JJs threaded by an external magnetic flux $\Phi_{e}$ with capacitive bias to ground; for the measurement and characterisation of the triangular plaquette, three transmission lines are connected through three external JJs and three resonators.
The main parameters of the circuit that will be used throughout the text are: $C_{0}$, the capacitance to ground of each node, and $E_{J}$ and $C_{J}$, the Josephson energy and capacitance of the ring JJs, respectively.} \label{fig1} \end{figure} \section{Circuit QED architecture} Following \cite{Time reversal,Mueller_2018}, we consider a minimal circuit QED (cQED) setup consisting of a ring of three Josephson junctions threaded by an external magnetic flux, with capacitive bias to ground (see Fig. \ref{fig1}). By making use of the flux-node description \cite{Devoret_1995_QFluct}, the Lagrangian of the system can be directly written as \begin{eqnarray} L&=& \frac{1}{2}\dot{\bsb{\Phi}}^T\msf{C}\dot{\bsb{\Phi}}+\sum_i E_{Ji}\cos(\Delta\phi_i-\phi_{e,i}), \end{eqnarray} where the vector of node fluxes is defined as $\bsb{\Phi}\equiv(\Phi_1, \Phi_2, \Phi_3)$ and the phase variables are defined through the second Josephson relation, $\phi_x=2\pi\Phi_x/\Phi_0$, with $\Phi_0$ the flux quantum. The phase differences correspond to $\Delta\phi_i=\phi_{i+1}-\phi_i$ for $i \in \{1,2,3\}$, identifying $i=4$ with $i=1$. The capacitance matrix is defined as \begin{equation} \msf{C}=\begin{pmatrix} C_{\Sigma}&-C_J&-C_J\\ -C_J&C_{\Sigma}&-C_J\\ -C_J&-C_J&C_{\Sigma}\\ \end{pmatrix}, \end{equation} with $C_{\Sigma}=C_0+2C_J$, where $C_{0}$ is the local capacitance and $C_{J}$ the capacitance of the Josephson junctions. The external flux $\phi_{e}$ appears naturally equally distributed among the three links, i.e., $\phi_{e,i}=\phi_{e}/3$; with this gauge choice, translational invariance is not explicitly broken. The Legendre transformation involves the definition of the conjugated charge variables $\bsb{Q}=\msf{C}\dot{\bsb{\Phi}}$, from which we derive the Hamiltonian \begin{eqnarray} H&=& \frac{1}{2}\bsb{Q}^T \msf{C}^{-1}\bsb{Q} -\sum_i E_{Ji}\cos(\Delta\phi_i-\frac{\phi_{e}}{3}).\label{eq:H_plaquette_ext_flux} \end{eqnarray} For the sake of simplicity, let us work with the number of Cooper-pair variables $\bsb{\tilde{n}}=\bsb{Q}/2e$, where $e$ is the electron charge. The conjugated classical variables are promoted to operators, with commutation relations $[\tilde{n}_i, e^{\mp i\phi_j}]=\mp\delta_{ij}e^{\mp i\phi_j}$. We recall that Hamiltonian (\ref{eq:H_plaquette_ext_flux}) has an important symmetry, namely, the total charge in the plaquette $N=\sum_i \tilde{n}_i$ is a conserved quantity. We can perform a canonical transformation $\bsb{\varphi}\rightarrow\msf{T}\bsb{\varphi}$ and $\bsb{n}\rightarrow(\msf{T}^T)^{-1}\bsb{n}$, \begin{equation} \msf{T}=\begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \\ \end{pmatrix} \label{eq: first canonical transformation} \end{equation} which defines $\varphi \equiv \phi_{1}$, $\varphi_{2} \equiv \phi_{2} - \phi_{1}$, $\varphi_{3} \equiv \phi_{3} - \phi_{1}$ and the conjugate momenta or charge operators $N \equiv \tilde{n}_{1}+\tilde{n}_{2}+\tilde{n}_{3}$, $n_{2} \equiv \tilde{n}_{2}$, $n_{3} \equiv \tilde{n}_{3}$, from which we arrive at the Hamiltonian \begin{equation} \begin{split} &H=E_{N} N^{2} - E_{J} ~ V \left(\varphi_{2} , \varphi_{3} \right) \\ &+E_{C} \left[ \left( N-n_{2}-n_{3} \right)^{2} + n_{2}^{2} + n_{3}^{2} \right].
\label{eq: Plaquette Hamiltonian} \end{split} \end{equation} with $E_{N}= \frac{2e^{2}C_{J}}{C_{0} \left(C_{0} + 3 C_{J} \right) } $, $E_{C}= \frac{2e^{2}}{\left(C_{0} + 3 C_{J} \right)}$, and $V \left(\varphi_{2} , \varphi_{3} \right)= \cos{\left( \varphi_{2} - \frac{\phi_{e}}{3} \right)} + \cos{\left( \varphi_{3} + \frac{\phi_{e}}{3} \right)} + \cos{\left( \varphi_{3} - \varphi_{2} - \frac{\phi_{e}}{3} \right)}$. For possible sources of disorder in the dynamics or decay channels, see Appendix \ref{disorderanddecay}. In the following, we will use this Hamiltonian in the limit $E_{N} \gg E_{J} \gg E_{C}$ (or $\frac{e^{2}}{C_{0}} \gg E_{J} \gg \frac{e^{2}}{C_{J}}$). Therefore, neglecting the last term in the previous Hamiltonian, the eigenvectors are given by the vectors $|N,\varphi_{2},\varphi_{3} \rangle$ defined in Sec. \ref{sec: Chirality}, and, as a function of the external flux $\phi_{e}$, the ground state of this Hamiltonian is given by: $|N,0,0 \rangle$ when $|\phi_{e}| < \pi$; $ |N,\frac{2 \pi}{3},-\frac{2 \pi}{3} \rangle$ when $\pi < \phi_{e} < 3 \pi$; and $ |N,-\frac{2 \pi}{3},\frac{2 \pi}{3} \rangle$ when $-3 \pi < \phi_{e} < - \pi$ (see Fig. \ref{fig: spectral flow}). The spectrum of the Hamiltonian does not change when $\phi_{e}$ is shifted by $2\pi k$ with $k \in \mathbb{Z}$; nonetheless, the eigenvectors do not remain the same along this flow of the external flux. This fact characterises the spectral flow of the Hamiltonian that we will use to load the chiral states in the ring. Another figure of merit we will use to characterise the states loaded in the triangular plaquette is the chiral current, which is just the sum of the currents at every Josephson junction, i.e. $I_{\text{ch}} (\phi_{e}) =I_{0} \sum_i \sin(\Delta\phi_i-\phi_{e}/3) =I_{0} \sin{\left( \varphi_{2} - \frac{\phi_{e}}{3} \right)} - I_{0} \sin{\left( \varphi_{3} + \frac{\phi_{e}}{3} \right)} + I_{0} \sin{\left( \varphi_{3} - \varphi_{2} - \frac{\phi_{e}}{3} \right)}$. At $\phi_{e} =0$, $I_{\text{ch}} (0)$ changes sign under a $P$ or $TR$ transformation, such that a non-zero expectation value of this operator can signal $P$- and $T$-symmetry-breaking states. \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{figure2.pdf} \caption{Spectrum of Hamiltonian \eqref{eq: Plaquette Hamiltonian} and its dependence on the external flux. The inset shows this dependence for the expected value of the chiral current. The continuous lines refer to the energies and currents for a ratio $E_{J}/E_{C} = 10^{5}$, the dashed lines to $E_{J}/E_{C} = 10^{2}$, and the dotted ones to $E_{J}/E_{C} = 10$, with $E_{J} = 10\,$GHz and considering one excitation, $N=1$. When the external flux $\phi_{e} \in \left[ -3 \pi, 3 \pi \right]$, there are three special ground states: one centred at $-2 \pi$, for the plaquette in the state $\ket{N,-\frac{2\pi}{3},\frac{2\pi}{3}}$, another at zero flux, corresponding to the state $\ket{N,0,0}$, and one at a $2 \pi$ flux, for the state $\ket{N,\frac{2\pi}{3},-\frac{2\pi}{3}}$. It is important to note that the spectrum of the Hamiltonian maps to itself whenever we introduce a transformation $\phi_{e} \rightarrow \phi_{e} + 2 \pi k$ with $k \in \mathbb{Z}$. We take advantage of this spectral flow in order to load one of the two chiral states in the plaquette.} \label{fig: spectral flow} \end{figure} \begin{figure}[!]
\includegraphics[scale=0.23]{figure3a.pdf} \includegraphics[scale=0.23]{figure3b.pdf} \caption{(a) Time evolution of the chiral current expected value obtained for two different $E_J/E_C$ ratios. The dashed plots correspond to the currents calculated within the harmonic approximation. The blue and green plots stand for $E_J/E_C = 10^{2}$ and the orange and red curves for $E_J/E_C = 4 \times 10^{2}$. The higher the ratio, the larger the number of points we need to use in the discretisation to reach the continuum limit (see Appendix \ref{sec: numeric estimation}). (b) Dependence of the half-life time on the $E_J/E_C$ ratio (numerical values). In this plot, we have investigated higher ratios with the exact Hamiltonian in the phase basis. The half-life time is the time it takes the chiral current to halve its initial value, $ \langle I_{ch} \rangle \left(t = \tau \right) = \langle I_{ch}\rangle \parent{t=0} /2 $. From the numerical fit of the curve $\frac{\tau}{\tau_{0}} = \left( \frac{E_{J}}{E_{C}} \right)^{\alpha}$, we extract $\alpha=0.6088601 \pm 6 \times 10^{-7}$ and $\tau _{0} = 0.04859 \pm 2 \times10^{-5} \,\mathrm{ns}$. The dependence of $\tau$ on the $E_J/E_C$ ratio predicts that a $100\,\mathrm{ns}$ half-life time can be achieved for $E_{J}/E_{C}$ of the order of $10^{5}$.} \label{fig: chiral currents} \end{figure} \subsection{Spectral flow} A Josephson ring can be described with the Hamiltonian (\ref{eq:H_plaquette_ext_flux}), where the classical magnetic flux $\phi_{e}$ is shared equally by every junction and thus translational invariance remains an explicit symmetry. This Hamiltonian is unitarily equivalent, and therefore iso-spectral, to another Hamiltonian $H_{1}$ where the potential energy is given by $ V_{1} \left(\varphi_{2} , \varphi_{3} \right)= \cos{\left( \varphi_{2} \right)} + \cos{\left( \varphi_{3} \right)} + \cos{\left( \varphi_{3} - \varphi_{2} - \phi_{e} \right)}$, in which the magnetic flux appears on just one of the JJs. The unitary transformation that maps $H$ onto $H_{1}$ is composed of a sequence of phase displacements. For instance, starting with a displacement $\varphi_{3} \to \varphi_{3}-\frac{\phi_{e}}{3}$, followed by $\varphi_{2} \to \varphi_{2}+\frac{2\phi_{e}}{3}$, we recover $H_{1}$. A particular and clarifying limit is given by a classical magnetic flux $\phi_{e} = 2 \pi k$ with $k \in \mathbb{Z}$. At this value, $V \left( \varphi_{2} , \varphi_{3} \right) = \cos{\left( \varphi_{2} - \frac{2 \pi k}{3} \right)} + \cos{\left( \varphi_{3} + \frac{2 \pi k}{3} \right)} + \cos{\left( \varphi_{3} - \varphi_{2} - \frac{2 \pi k}{3} \right)}$ and $V_{1} \left( \varphi_{2} , \varphi_{3} \right) = \cos{\left( \varphi_{2} \right)} + \cos{\left( \varphi_{3} \right)} + \cos{\left( \varphi_{3} - \varphi_{2} \right)}$. One may be tempted to regard the action of the magnetic flux at any of these $k$ points as ``trivial''. Nonetheless, $H$ and $H_{1}$ are merely iso-spectral: there is a non-trivial unitary action on the eigenstates. Under the displacement operator $e^{- \frac{ i n_{3} 2 \pi }{3}} e^{ \frac{ i n_{2} 2 \pi }{3}}$, the states $|N,\varphi_{2},\varphi_{3} \rangle \to |N,\varphi_{2} + \frac{2 \pi}{3},\varphi_{3} - \frac{2 \pi}{3}\rangle $, and in particular $|N,0,0\rangle \to |N, \frac{2 \pi}{3} , - \frac{2 \pi}{3} \rangle \to |N, -\frac{2 \pi}{3} , \frac{2 \pi}{3} \rangle \to |N, 0 , 0 \rangle$. \section{Superconducting chiral states} \label{sec: preparation of the states} In the following, we describe how to load and detect the chiral states.
These states are characterised by the appearance of currents flowing clockwise or counter-clockwise through the loop. The first step to prepare the initial state is to thread the plaquette with a magnetic flux of $2 \pi$, in units of the magnetic flux quantum $\Phi_{0}$. After that, the system is cooled down until it reaches the ground state shown in Fig. \ref{fig: spectral flow}, which is a chiral state. Then, we perform a sudden quench by turning off the magnetic flux. In this way, the chiral state becomes a highly excited state of the free Hamiltonian. When an external static magnetic flux $\phi_{e}$ threads the ring, the Hamiltonian is given by equation \eqref{eq: Plaquette Hamiltonian}. We work in the phase regime, in which the Josephson energy is much larger than the charging energy, $E_{N} \gg E_{J} \gg E_{C}$. Since the total charge is a conserved quantity ($N$ is a good quantum number), we can restrict ourselves to a subspace with a constant total number of excitations. Applying the canonical transformation \begin{equation} \begin{split} \phi _+ = \frac{1}{2} \parent{\varphi _2 + \varphi _3},& \quad \phi _- = \frac{1}{2} \parent{\varphi _2 - \varphi _3},\\ n_+ = n_2 + n_3, &\quad n_- = n_2-n_3, \end{split} \end{equation} such that the new variables satisfy the usual commutation relations $\left[n_{\pm}, e^{i \phi _{\pm}} \right] = e^{i \phi _{\pm}}$, the Hamiltonian is mapped onto \begin{equation*} \begin{split} &H = \left(E_{N} + \frac{E_{C}}{3}\right) N^2 + \frac{E_C}{2} \cor{3 \parent{n_+ - \frac{2}{3}N }^2 + n_-^{2}} \\ &-E_J\cor{2 \cos \phi _+ \cos \parent{\phi _- - \frac{\phi_{e}}{3}} + \cos \parent{2 \phi _- + \frac{\phi_{e}}{3}}}. \end{split} \end{equation*} To gain more insight, we also study the Hamiltonian in the harmonic approximation around $\phi_{+} \to 0$ and $\phi_{-} \to \frac{\phi_{e}}{3} = \frac{2 \pi k}{3}$, $k \in \mathbb{Z}$, where \begin{equation} \begin{split} H &\to \left(E_{N} + \frac{E_{C}}{3}\right)N^2 + \frac{E_C}{2} \cor{3 \parent{n_+ - \frac{2}{3}N }^2 + n_-^{2}} \\ & + E_{J} \cor{\phi _{+}^{2} + 3 \parent{ \phi _{-}- \frac{\phi_{e}}{3}}^{2}}. \end{split} \label{eq: harmonic Hamiltonian} \end{equation} To perform the numerical calculations, we discretise the phase degrees of freedom. The phases are chosen to take $L$ values $\phi _{\pm} \equiv \frac{2 \pi k_{\pm}}{L}$ contained in the interval $\phi _{\pm} \in \left(-\pi ,\pi \right]$ (or $k_{\pm} \in \left[ - \frac{L}{2}+1, \frac{L}{2} \right]$), setting the charges to lie in $n_{\pm} \in \left[ - \frac{L}{2}+1, \frac{L}{2} \right]$. In the limit $L \rightarrow \infty$ the continuum is recovered. The representation of the ground state of the Hamiltonian requires a minimum number of discrete levels $L$ to achieve a faithful numerical simulation, which indeed depends on the ratio $E_{J}/E_{C}$ (see Appendix \ref{sec: numeric estimation}). Moreover, we evolve the chiral current operator with an increasing number of discrete levels $L$ until the curves of the evolution collapse onto the continuum limit. The higher the energy ratio $E_{J}/E_{C}$, the larger the number of levels needed to reach the continuum limit. We are interested in the regime where the net chiral current flowing in one sense is nonzero, so that the state breaks TRS. For this reason, we define the time $\tau$ as the time it takes the current to halve its initial value, such that the chiral properties of the state are still manifest.
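A minimal sketch of this numerical scheme is given below (illustrative values only: $L = 16$, $E_J/E_C = 100$, $N=1$ and the time step are our choices for a quick demonstration, well below the resolution used for the converged results of Fig. \ref{fig: chiral currents}). It builds the two discretised phase modes, loads the ground state at $\phi_e = 2\pi$, and then evolves the chiral current under the flux-free Hamiltonian:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh, expm

L, EC, EJ, N = 16, 1.0, 100.0, 1          # illustrative values
k = np.arange(-L // 2 + 1, L // 2 + 1)    # discrete charges n_+/-
phi = 2 * np.pi * k / L                   # phase grid on (-pi, pi]
F = np.exp(1j * np.outer(phi, k)) / np.sqrt(L)  # charge -> phase basis
n_op = F @ np.diag(k) @ F.conj().T        # charge operator, phase basis
Id = np.eye(L)

def H(phi_e):
    # Kinetic part: (E_C/2) [3 (n_+ - 2N/3)^2 + n_-^2]
    n_shift = n_op - (2 * N / 3) * Id
    kin = 1.5 * EC * np.kron(n_shift @ n_shift, Id) \
        + 0.5 * EC * np.kron(Id, n_op @ n_op)
    # Potential: -E_J [2 cos(phi_+) cos(phi_- - phi_e/3)
    #                  + cos(2 phi_- + phi_e/3)]
    pot = -EJ * (2 * np.kron(np.diag(np.cos(phi)),
                             np.diag(np.cos(phi - phi_e / 3)))
                 + np.kron(Id, np.diag(np.cos(2 * phi + phi_e / 3))))
    return kin + pot

def I_ch(phi_e=0.0):
    return 2 * np.kron(np.diag(np.cos(phi)),
                       np.diag(np.sin(phi - phi_e / 3))) \
         - np.kron(Id, np.diag(np.sin(2 * phi + phi_e / 3)))

w, v = eigh(H(2 * np.pi))        # ground state with phi_e = 2 pi
psi = v[:, 0]
U = expm(-1j * 0.01 * H(0.0))    # quench: evolve at phi_e = 0 (hbar = 1)
for step in range(201):
    if step % 50 == 0:
        print(step, float(np.real(psi.conj() @ I_ch() @ psi)))
    psi = U @ psi
\end{verbatim}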
The dynamics in the harmonic approximation can be completely described in terms of coherent states, and the chiral current oscillates as expected. We take the chiral current operator as the sum of the currents flowing through the three nodes of the plaquette, which in the variables we have chosen reads \begin{equation*} I_{ch} = 2 \cos \phi _+ \sin \parent{\phi _- - \frac{\phi_{e}}{3}}- \sin \parent{2\phi _- + \frac{\phi_{e}}{3}}. \end{equation*} Expressing the Hamiltonian in the phase basis, we diagonalise it numerically to obtain the ground state for a fixed value of the external flux $\phi_{e}=2\pi$ (i.e., $\Phi_{e}=\Phi_{0}$). Next, we evolve the chiral state in time with the Hamiltonian without magnetic flux and compute the time evolution of the chiral current. We find that the chiral current lives longer according to a power law in the ratio $E_{J}/E_{C}$. It is important to mention that the state of the plaquette is especially robust against charge noise, as we are working in the phase regime. Once we have obtained the chiral current, we seek the regime in which it preserves a single circulation sense and take the half-life time to characterise this regime. In Fig. \ref{fig: chiral currents}, we fit the numerical data for the mean lifetime of the chiral current to the curve $\frac{\tau}{\tau_{0}} = \left( \frac{E_{J}}{E_{C}}\right)^{\alpha}$ with the numerical parameters $\alpha=0.6088601 \pm 6 \times 10^{-7}$ and $\tau _{0} \sim 0.04859 \pm 2 \times 10^{-5} \,\mathrm{ns}$. Therefore, currents with $\tau$ of the order of a hundred nanoseconds may be accomplished for $\frac{E_{J}}{E_{C}} \approx 10^{5}$. \subsection{Chiral effective dynamics} In order to introduce a weak perturbation in the JJ ring to measure its state, we couple a resonator of frequency $\omega _r \ll E_J$ to each of the three nodes in the plaquette, such that the energy levels of the JJ ring are much closer to one another than those of the resonators. With this in mind, we can decouple and eliminate the degrees of freedom of the ring and derive a low-energy Hamiltonian for the resonators. As the coupling elements we choose are JJs, the total number of charges in the ring is no longer preserved. If we remain in the one-excitation subspace of the plaquette, this extra charge will be able to hop to the adjacent resonators while the chiral state remains. For the calculation of the effective Hamiltonian and the input-output relations of the next subsection, we follow \cite{Time reversal}. Taking the limit $E_{C} \ll E_{J}$ in the Hamiltonian of the plaquette, \begin{figure}[!] \begin{minipage}[h]{\linewidth} \includegraphics[width=.8\linewidth]{figure4a.pdf} \end{minipage}\vfill% \begin{minipage}[h]{\linewidth} \includegraphics[width=.8\linewidth]{figure4b.pdf} \end{minipage}\vfill% \begin{minipage}[h]{\linewidth} \includegraphics[width=.8\linewidth]{figure4c.pdf} \end{minipage} \caption{(a) Schematic representation of the hopping of one excitation from the state $\ket{1_{a}}$ to the state $\ket{1_{b}}$. The excitation in the first resonator gains a phase $e^{i \gamma}$ when it tunnels through the JJ connecting the resonator to the ring. This phase is lost when it exits the ring towards the adjacent resonator, such that the only remaining phase is the one it acquires in the ring. (b) Energy-level description of the JJ triangular plaquette and the effect of the Josephson coupling to the three external nodes.
(c) System quench and transition probabilities for one excitation to hop from resonator to resonator when the initial state of the resonators is $\frac{1}{\sqrt{2}} \parent{\ket{100} - \ket{010}}$. Once the ring has relaxed to its ground state, the magnetic field is switched off, leaving the plaquette in the chiral state $\ket{N, \frac{2 \pi}{3}, - \frac{2 \pi}{3}}$. The state considered in the resonators is non-chiral, that is to say, the expected value of $\chi$ in this state is zero. Hence, the circulation in the resonators is a signature of the plaquette hosting a chiral state.} \label{fig: circulation} \end{figure} \begin{equation} \begin{split} H = & E_{N} N^{2} - E_{J} \left[2 \cos \phi _+ \cos \parent{\phi _- - \frac{\phi_{e}}{3}} \right. \\ &\left. + \cos \parent{2 \phi _- + \frac{\phi_{e}}{3}} \right] + \sum _{i=a,b,c} \omega _{i} a^{\dagger}_{i} a_{i} + H_{\text{int}}, \end{split} \end{equation} where $\omega _{i} = \omega _{r}$ are the frequencies of the resonators, which we assume to be equal, and the interaction Hamiltonian is \begin{equation*} H_{\text{int}} = \frac{E_{J}^{r}}{2} \parent{e^{i \parent{\phi _1 - \phi_a}}+e^{i \parent{\phi _2 - \phi _b}}+e^{i \parent{\phi _3 - \phi _c}} + \text{h.c.} }. \label{eq: Interaction Hamiltonian} \end{equation*} $E_{J}^{r}$ is the Josephson energy of the JJs coupling the resonators to the ring, with resonators $a$, $b$ and $c$ coupled to nodes $1$, $2$ and $3$, respectively. We consider the limit $E_{C}^{r} \rightarrow 0$, \textit{i.e.}, no kinetic energy for the resonators' junctions. This term would affect the phases of the state of the plaquette, whereas $H_{\text{int}}$ leaves the state invariant and therefore does not destroy the chirality of the ring. The effective Hamiltonian is obtained through a Schrieffer-Wolff transformation \cite{Atom-Photon Interactions} and a subsequent projection onto the chiral state of the plaquette, \begin{equation} \begin{split} H_{e} &= P_{plq} H P_{plq} + P_{plq} H_{\text{int}} P_{plq} \\ &+ \frac{1}{2} P_{plq}\cor{i E_J^r S , H_{\text{int}}} P_{plq} + \dots, \end{split} \end{equation} with $S$ being the generator of the transformation and $P_{plq} = \ket{N, \frac{2 \pi}{3},-\frac{2 \pi}{3}}\bra{N, \frac{2 \pi}{3},-\frac{2 \pi}{3}}$ the projector onto the chiral state of the plaquette. In the charge basis, the exponentials of the phases act as creation and annihilation operators, such that the effective Hamiltonian reads \begin{equation} \begin{split} &H_{e} = \sum _{i=a,b,c} \parent{\omega _{i}+3g} a_{i}^{\dagger}a_{i} \\ &+ \frac{g}{2} \left( a_{b}^{\dagger} a_{a} e^{-i \frac{2\pi}{3}} + a_{c}^{\dagger} a_{b} e^{-i \frac{2\pi}{3}} + a_{a}^{\dagger} a_{c} e^{-i \frac{2\pi}{3}} + \text{h.c.} \right), \end{split} \label{eq: effective Hamiltonian} \end{equation} \begin{equation} g = \frac{ \parent{E_{J}^r}^{2}}{E_{N} \parent{1- \parent{\frac{\omega _{r}}{E_{N} }-2N}^{2}}} \approx \frac{ \parent{E_{J}^r}^{2}}{E_{N }\parent{1-4N^{2}}}, \end{equation} where $N$ is here the constant denoting the number of excitations in the chiral state of the plaquette. The phases that appear in the Hamiltonian are directly due to the initial state in which the triangular plaquette is loaded. A change in the external flux affects only the eigenvalues of the initial Hamiltonian of the plaquette, but not the phases.
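As a quick numerical check of the circulating dynamics generated by Eq. \eqref{eq: effective Hamiltonian} (a sketch; $g$ sets the unit of energy, $\hbar = 1$, and the uniform $\omega_r + 3g$ shift is dropped since it only contributes a global phase), one can diagonalise the single-excitation sector and follow the photon probabilities in the three resonators:
\begin{verbatim}
import numpy as np

g = 1.0
hop = (g / 2) * np.exp(-1j * 2 * np.pi / 3)
# Single-excitation basis (|1_a>, |1_b>, |1_c>); the hops b<-a, c<-b
# and a<-c each carry the chiral phase e^{-i 2 pi / 3}.
H = np.array([[0, 0, hop],
              [hop, 0, 0],
              [0, hop, 0]])
H = H + H.conj().T                 # add the hermitian-conjugate hops

w, v = np.linalg.eigh(H)
psi0 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)  # (|100> - |010>)/sqrt(2)
for t in np.linspace(0.0, 12.0, 7):
    psi_t = v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))
    print(f"t = {t:5.2f}   P(a), P(b), P(c) =",
          np.round(np.abs(psi_t) ** 2, 3))
\end{verbatim}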
The effective Hamiltonian can be diagonalised, $H_{e} = \sum_{k} A^{\dagger}_{k}A_{k} \Omega _{k}$, with the energies given by \begin{equation} \Omega _{k} = 3g + \omega _{r} + 2g \cos \parent{\frac{2 \pi k}{3}+ \frac{2\pi}{3}}, \label{eq: Omegas} \end{equation} where the subscript denotes the allowed wave-numbers $k=-1,0,1$ of the three eigenstates, as the Hamiltonian is diagonal in reciprocal space. Consider now the case of introducing a single excitation in one resonator. We find the excitation can be observed subsequently in the other two resonators with equal likelihood, so neither circulation nor a signature of chirality is observed in this scenario (see Appendix \ref{appendix: transition probabilities}). If the initial state of the resonators is $\ket{\Psi _{0}} = \frac{1}{\sqrt{2}} \parent{\ket{100} - \ket{010}}$, where $\bra{\Psi _{0}} \chi \ket{\Psi _{0}} = 0$, which ensures we are not introducing any chirality by initialising the resonators in this state, the probability of finding the excitation in each resonator shows a clear circulating behaviour (see Fig. \ref{fig: circulation}). \subsection{Nonreciprocal S-matrix} \begin{figure}[!] \centering \includegraphics[width=\linewidth]{figure5.pdf} \caption{Circuit representation of the triangular plaquette coupled to three resonators with frequencies $\omega_{i}$ through Josephson junctions with energies $E^{r}_{J}$, which are connected capacitively to transmission lines. The input modes $b_{+}^{in} = \frac{1}{\sqrt{2}}\parent{b_1^{in}+ b_2^{in}}$ and $b_{-}^{in} = \frac{1}{\sqrt{2}}\parent{b_1^{in}- b_2^{in}}$ are shown.} \label{fig: circuit} \end{figure} \begin{figure}[!] \includegraphics[scale=0.332]{figure6.pdf} \caption{(a,b) Outgoing power at each transmission line. The scattering matrix elements are $S_{i\pm}=b_i^{out}/b_{\pm}^{in}$ when the input modes are given by $b_{\pm}^{in}$, with the $i=1,2,3$ modes represented by the blue, orange and green curves, respectively. The coupling strength to the resonators is set to $g=0.5 \omega _{r}$ and the effective photon decay rate is set to $\Gamma = 0.35 \omega _{r}$; see Appendix \ref{sec: scattering matrix} for the dependence of the scattering matrix elements on the circuit parameters. (c) Input-output scheme for three input signals with close frequencies. By choosing an effective photon decay rate and slightly tuning the input mode frequencies, we can modulate the distribution of the output power through the transmission lines, the system behaving as a tunable directional coupler.} \label{fig: outgoing powers} \end{figure} Finally, to check the response of the JJ ring when we capacitively couple it to three transmission lines, we study the scattering matrix of the system. This scattering matrix relates the input modes of the semi-infinite transmission lines with the output modes, as sketched in Fig. \ref{fig: circuit}. When the plaquette hosts a chiral state, we expect a nonreciprocal behaviour between the input and the output modes. Taking the input-output relations for the transmission line modes \cite{Time reversal,Mueller_2018} \begin{equation} b_{j}^{out} \cor{\omega} =b_{j}^{in} \cor{\omega} + \frac{\Gamma}{3} \sum _{k=-1}^{1} \sum _{j'=1}^{3} \frac{e^{2 \pi i \parent{j-j'}k/3}}{i \parent{\omega - \Omega _{k}} - \frac{\Gamma}{2} } b_{j'}^{in} \cor{\omega}, \label{eq: input-output relations} \end{equation} with $\Gamma$ being the effective photon decay rate and $\Omega _{k}$ the frequencies of Eq. \eqref{eq: Omegas}.
The complete S-matrix can be written as \begin{equation} \msf{S} = \begin{pmatrix} \alpha & \beta e^{i2\pi/3} & \beta e^{-i2\pi/3} \\ \beta e^{-i2\pi/3} & \alpha & \beta e^{i2\pi/3} \\ \beta e^{i2\pi/3} &\beta e^{-i2\pi/3} & \alpha \end{pmatrix}, \end{equation} with two complex parameters $\alpha$ and $\beta$ (see Appendix \ref{sec: scattering matrix} for more details). The first thing to notice is that the S-matrix that relates the input and output modes, ${\bf{b^{out}}} = {\msf{S}} {\bf{b^{in}}}$, is not time-reversal symmetric, i.e., $\msf{S} \neq \msf{S}^{T}$. In fact, there is a nonreciprocal phase difference $\arg{\left(S \right)}- \arg{\left( S^{T} \right)}=\frac{4 \pi}{3}$ for any value of the frequency $\omega/\omega_{r}$, coupling $g/\omega_{r}$, and decay $\Gamma/\omega_{r}$. Also, $S S^{\dagger} = S^{\dagger} S = \mathbbm{1}$, as it should be by unitarity. Moreover, the output power in each transmission line is shown in Fig. \ref{fig: outgoing powers} for input modes $b_{+}^{in} = \frac{1}{\sqrt{2}}\parent{b_1^{in}+ b_2^{in}}$ and $b_{-}^{in} = \frac{1}{\sqrt{2}}\parent{b_1^{in}- b_2^{in}}$, i.e. $\tilde{S}=SU^{\dagger}$, where $U$ is the change of basis in the input ports. It is relevant to point out that, by shifting the frequency of the input modes, most of the output power can be concentrated in any of the three ports at will with a maximal directionality of $2/3$, working as a tunable directional coupler. Due to the rotational symmetry of the setup, this behaviour is independent of the pair of contiguous ports used for the input signal. Thus, the device can be seen as a circulator between differential input modes and local output modes. \section{Conclusions} In summary, we have shown the possibility of encoding chiral states, which break time-reversal and parity symmetry, in a superconducting Josephson junction plaquette. We have described a method to load these states in the proposed setup based on a spectral flow protocol, and we have discussed a possible way to access the non-trivial phases. Finally, we have analysed how such plaquettes can potentially become a fundamental unit of quantum nonreciprocal devices. \begin{acknowledgments} We thank S. Girvin, C. M\"uller, and P. Zoller for their valuable and constructive comments. Also, the authors acknowledge support from the projects QMiCS (820505) and OpenSuperQ (820363) of the EU Flagship on Quantum Technologies, Spanish MINECO/FEDER FIS2015-69983-P, Basque Government IT986-16, EU FET Open Grant Quromorphic, and Shanghai STCSM (Grant No. 2019SHZDZX01-ZX04). This material is also based upon work supported by the U.S. Department of Energy. \end{acknowledgments}
\section{Introduction} The PARS experimental station is planned for the soon-to-be-built VELA/CLARA beam line in the Daresbury Laboratories, as shown in Fig.\ref{fig:layout} \cite{Clara, Xia}. PARS will receive a $250\,$MeV electron beam with a flexible parameter range. This will allow the station to conduct wide-ranging systematic studies of electron-driven plasma wakefield acceleration. The program aims to explore single- and two-beam operation. The former aims to study the maximum achievable accelerating gradient and head-to-tail acceleration with a single electron bunch, whereas the latter aims to demonstrate the acceleration of a witness or trailing bunch. The numerical studies reported in this paper were performed by using VSim \cite{VSim}. \begin{figure*}[htb!] \centering \includegraphics[width=0.9\textwidth] {WEPWA048f1.png} \caption{Layout of the CLARA beamline and PARS experimental station.} \label{fig:layout} \vspace{-1.0em} \end{figure*} \section{Single bunch acceleration} The maximum achievable accelerating gradient was studied for different bunch length and radius values between $30-75\,\mu$m and $20-100\,\mu$m, respectively, considering a $250\,$MeV electron bunch with a charge of $250\,$pC. High wakefields of $1-3\,$GV/m for a tightly focused drive beam ($20\,\mu$m), and $200-300\,$MV/m for more realistic beam sizes, are possible within the bunch length range of $30-75\,\mu$m (Fig.\ref{fig:density_scan}). The achieved field gradient is proportional to the plasma density, and after a certain density it scales inversely proportionally to the squared bunch length, as predicted by the linear theory of Eq.\ref{eqn:linear_theory}, \begin{equation} E=240(MV/m)\bigg(\frac{N}{4\times10^{10}}\bigg)\bigg(\frac{0.6}{\sigma_z(mm)}\bigg)^2 \label{eqn:linear_theory} \end{equation} where $N$ is the number of electrons in the drive bunch and $\sigma_z$ is the bunch length. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth] {WEPWA048f2.png} \caption{Accelerating gradient of the first bucket for different beam parameters as a function of the plasma density after $4.5\,$mm propagation.} \label{fig:density_scan} \vspace{-1.0em} \end{figure} A realistic case with a $50\,\mu$m bunch length and a $100\,\mu$m bunch radius, which yields a $300\,$MV/m gradient for a single bunch propagated $4.5\,$mm at a plasma density of $5\times10^{21}\,m^{-3}$, was selected for further studies in this paper. \section{Two-bunch acceleration} A two-beam scenario was simulated using the baseline case detailed above. A second bunch of the same sizes, but with a certain fraction of the drive bunch charge, was initially placed half a plasma wavelength ($\lambda_p/2$) behind the centre of the driver bunch. The maximum energy gain and the minimum energy spread of the trailing bunch were found to occur between $(\lambda_p/2-40\,)\mu$m and $(\lambda_p/2-20\,)\mu$m behind the driver bunch, as shown in Fig.\ref{fig:energy_distance}. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth] {WEPWA048f3.png} \caption{Energy and the energy spread of the trailing bunch as a function of its distance to the driver bunch around the initial location at $\lambda_p/2$.} \label{fig:energy_distance} \vspace{-1.0em} \end{figure} The initial two-beam configuration shown in Fig.\ref{fig:beam_profiles}-(a) was tracked along a $\sim 0.5\,$m long plasma column. Fig.\ref{fig:beam_profiles}-(b) shows the beam profiles evolved after $0.45\,$m, reaching an energy of $315\,$MeV with a $10\%$ energy spread.
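As a quick numerical cross-check of the parameters used above (a sketch; SI constants, with the $250\,$pC drive charge, $50\,\mu$m bunch length and $5\times10^{21}\,\mathrm{m^{-3}}$ density taken from the text), the plasma wavelength that sets the witness placement and the linear-theory estimate of Eq.\ref{eqn:linear_theory} can be evaluated directly:
\begin{verbatim}
import numpy as np

e, eps0, me, c = 1.602e-19, 8.854e-12, 9.109e-31, 2.998e8

def plasma_wavelength(n0):
    # lambda_p = 2 pi c / omega_p, omega_p = sqrt(n0 e^2 / (eps0 me))
    return 2 * np.pi * c / np.sqrt(n0 * e**2 / (eps0 * me))

def linear_wakefield_MVm(N_drive, sigma_z_mm):
    # Eq. (1); note it carries no dependence on the bunch radius, so
    # it corresponds to the tightly focused regime quoted in the text.
    return 240.0 * (N_drive / 4e10) * (0.6 / sigma_z_mm) ** 2

n0 = 5e21                   # plasma density [m^-3]
N_drive = 250e-12 / e       # 250 pC -> ~1.6e9 electrons
lam_p = plasma_wavelength(n0)
print("lambda_p     = %.0f um" % (lam_p * 1e6))
print("lambda_p / 2 = %.0f um (initial witness offset)"
      % (lam_p / 2 * 1e6))
print("E (Eq. 1)    = %.0f MV/m for sigma_z = 50 um"
      % linear_wakefield_MVm(N_drive, 0.050))
\end{verbatim}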
Energy spread control is under study through beam loading and bunch profile manipulation. A ``fish-bone'' structure starts forming in the driver bunch and both bunches are transversely focused, while there is no significant bunch length change. The 2D field distribution for the two-beam case is given in Fig.\ref{fig:2D_field}, with the accelerating region in blue and the decelerating region in red. \begin{figure}[htb!] \centering \subfloat[]{\includegraphics[width=0.45\textwidth] {WEPWA048f4a.png}} \\ \subfloat[]{\includegraphics[width=0.45\textwidth] {WEPWA048f4b.png}} \caption{(a) Initial and (b) final beam intensity distributions for the travel through a $\sim 0.5\,$m long plasma column.} \label{fig:beam_profiles} \vspace{-1.5em} \end{figure} \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth] {WEPWA048f5.png} \caption{2D field strength distribution for the longitudinal component of the plasma wakefield.} \label{fig:2D_field} \vspace{-1.0em} \end{figure} \begin{figure}[htb!] \centering \subfloat[]{\includegraphics[width=0.45\textwidth] {WEPWA048f6a.png}} \\ \subfloat[]{\includegraphics[width=0.45\textwidth] {WEPWA048f6b.png}} \caption{(a) 2D and (b) projected field strength of the transverse plasma wakefield.} \label{fig:foc_field} \vspace{-1.5em} \end{figure} \begin{figure}[htb] \raggedleft \includegraphics[width=0.450\textwidth] {WEPWA048f7.png} \caption{Longitudinal instantaneous wakefield components in single and two-bunch cases after $0.5\,$m propagation, showing a hint of beam loading. Oval shapes represent bunches of different species at their respective locations with the same colour code as the curves.} \label{fig:beam_loading} \vspace{-1.5em} \end{figure} Plasma wakefields consist of co-existing transverse and longitudinal fields, as they are induced by the motion of the plasma electrons in both directions. The transverse field accompanying the above longitudinal field is shown in Fig.\ref{fig:foc_field}-(a) as a 2D intensity map. The transverse fields generally have a field strength comparable to the longitudinal fields (Fig.\ref{fig:foc_field}-(b)) and they might act as focusing fields depending on the phase of the trailing bunch. \section{Beam loading with two bunches} It has been observed that, in the presence of a second bunch, the resulting field in the region is modified by the self electromagnetic field of this bunch. This phenomenon is similar to beam loading in RF cavities. The position of the second beam can be adjusted so that the beam loading effect can be used, in favour of the scheme, in a way that reduces the energy spread of the second bunch by adjusting the field gradient across it. Such a scan was performed by moving the trailing bunch around its initial location at $\lambda_p/2$. The effect of beam loading on controlling the energy spread was shown in Fig.\ref{fig:energy_distance} previously. This phenomenon is demonstrated in Fig.\ref{fig:beam_loading} in more detail for the trailing bunch locations providing the highest final energies for each case. Longitudinal plasma wakefield components in the presence of a single bunch and in two-bunch cases with different trailing beam charges are presented. As seen in the figure, the absolute values and the shape of the first bucket, located at about $450\,\mu$m, are modified for different charges of the trailing bunch.
Beam loading is larger for a trailing bunch carrying $50\%$ of the drive bunch charge, and a plateau is formed, leading to a $1.4\%$ smaller energy spread compared to the case where the trailing bunch has $20\%$ of the drive beam charge. The cause of the modification is both the charge difference and the fact that the locations of the trailing bunches are $20\,\mu$m apart from each other in the presented cases. This effect is under further optimisation. \section{Diagnostics under consideration} The novelty of the experimental study of plasma wakefields brings the necessity of employing and developing novel measurement techniques for both the plasma and the beam. In the PARS experimental station we aim to implement various measurement techniques, such as the optical transition radiation interference (OTRI) technique \cite{OTRI}, coherent diffraction radiation (CDR) monitoring \cite{CDR1,CDR2,CDR3}, electro-optical sampling (EOS) \cite{EOS}, etc. for the electron beams, and plasma diagnostics to measure beam sizes and densities. A double-magnet magnetic spectrometer with a segmented beam dump is being studied to demonstrate the feasibility of measuring a wide range of energies with an adequate energy resolution \cite{segdump}. \section{Conclusions and outlook} A summary of feasibility studies for an electron-driven plasma wakefield acceleration test facility is given in this paper. Given the realistic beam parameters, the primary aim is to experimentally demonstrate an acceleration gradient of $200-300\,$MV/m. The beam loading effect was studied and promising results were obtained towards the control of the energy spread of the trailing bunch. A $1.4\%$ improvement is shown in this paper; further studies considering different drive and trailing bunch profiles are in progress in order to optimise the transformer ratio and energy spread. Alongside the plasma acceleration studies, the plasma lensing effect will be tested as well, which is reported elsewhere \cite{plasma_lens}. \section{Acknowledgements} This work was supported by the Cockcroft Institute Core Grant and STFC. The authors gratefully acknowledge the computing time granted on the supercomputer JUROPA at J{\"u}lich Supercomputing Centre (JSC). \raggedend
\section{Introduction} Many data problems nowadays carry the structure that the number $p$ of covariables may greatly exceed sample size $n$, i.e., $p \gg n$. In such a setting, a huge amount of work has been pursued addressing prediction of a new response variable, estimation of an underlying parameter vector and variable selection, see for example the books by \cite{hastetal09}, \cite{pbvdg11} or the more specific review article by \cite{fanlv10}. With a few exceptions, see Section \ref{subsec.otherwork}, the proposed methods and presented mathematical theory do not address the problem of assigning uncertainties, statistical significance or confidence: thus, the area of statistical hypothesis testing and construction of confidence intervals is largely unexplored and underdeveloped. Yet, such significance or confidence measures are crucial in applications where interpretation of parameters and variables is very important. The focus of this paper is the construction of p-values and corresponding multiple testing adjustment for a high-dimensional linear model which is often very useful in $p \gg n$ settings: \begin{eqnarray}\label{mod.lin} \mathbf{Y} = \mathbf{X} \beta^0 + \varepsilon, \end{eqnarray} where $\mathbf{Y} = (Y_1,\ldots ,Y_n)^T$, $\mathbf{X}$ is a fixed $n \times p$ design matrix, $\beta^0$ is the true underlying $p \times 1$ parameter vector and $\varepsilon$ is the $n \times 1$ stochastic error vector with $\varepsilon_1,\ldots ,\varepsilon_n$ i.i.d. having $\mathbb{E}[\varepsilon_i] = 0$ and $\mbox{Var}(\varepsilon_i) = \sigma^2 < \infty$; throughout the paper, $p$ may be much larger than $n$. We are interested in testing one or many null-hypotheses of the form: \begin{eqnarray}\label{hypoth} H_{0,G};\ \beta^0_j = 0\ \mbox{for all}\ j \in G, \end{eqnarray} where $G \subseteq \{1,\ldots ,p\}$ is a subset of all the indices of the covariables. Of substantial interest is the case where $G = \{j\}$, corresponding to a hypothesis for the individual $j$th regression parameter ($j=1,\ldots ,p$). At the other end of the spectrum is the global null-hypothesis where $G = \{1,\ldots ,p\}$, and we allow for any $G$ between an individual and the global hypothesis. \subsection{Past work about high-dimensional linear models}\label{subsec.pastwork} We review in this section an important stream of research for high-dimensional linear models. The more familiar reader may skip Section \ref{subsec.pastwork}. \subsubsection{The Lasso} The Lasso \citep{tibs96} \begin{eqnarray*} \hat{\beta}_{\mathrm{Lasso}} = \hat{\beta}_{\mathrm{Lasso}}(\lambda) = \mathrm{argmin}_{\beta} \big( \|\mathbf{Y} - \mathbf{X} \beta\|_2^2/n + \lambda \|\beta\|_1 \big), \end{eqnarray*} has become tremendously popular for estimation in high-dimensional linear models. The three main themes which have been considered in the past are prediction of the regression surface (and of a new response variable) with corresponding measure of accuracy \begin{eqnarray}\label{predict} \|\mathbf{X} (\hat{\beta}_{\mathrm{Lasso}} - \beta^0)\|_2^2/n, \end{eqnarray} estimation of the parameter vector whose quality is assessed by \begin{eqnarray}\label{est-norms} \|\hat{\beta}_{\mathrm{Lasso}} - \beta^0\|_q\ (q \in \{1,2\}), \end{eqnarray} and variable selection or estimating the support of $\beta^0$, denoted by the active set $S_0 = \{j;\ \beta^0_j \neq 0,\ j=1,\ldots ,p\}$, such that \begin{eqnarray}\label{var-sel} \mathbb{P}[\hat{S} = S_0] \end{eqnarray} is large for a selection (estimation) procedure $\hat{S}$.
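To make the three measures (\ref{predict})--(\ref{var-sel}) concrete, the following sketch evaluates them for the Lasso on simulated Gaussian data (all values are illustrative: the design, the sparsity $s_0 = 5$, the signal strength and the choice of $\lambda$ are our assumptions; note that \texttt{scikit-learn} parameterizes the Lasso objective as $\|\mathbf{Y} - \mathbf{X}\beta\|_2^2/(2n) + \alpha \|\beta\|_1$, hence $\alpha = \lambda/2$):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s0, sigma = 100, 500, 5, 1.0
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:s0] = 1.0                       # active set S0 = {1,...,s0}
y = X @ beta0 + sigma * rng.standard_normal(n)

lam = 4 * sigma * np.sqrt(2 * np.log(p) / n)  # of the order used below
fit = Lasso(alpha=lam / 2).fit(X, y)
bhat = fit.coef_

print("prediction ||X(bhat-b0)||_2^2/n :",
      np.sum((X @ (bhat - beta0)) ** 2) / n)
print("estimation ||bhat-b0||_1        :",
      np.abs(bhat - beta0).sum())
print("exact recovery S_hat == S0      :",
      set(np.nonzero(bhat)[0]) == set(range(s0)))
\end{verbatim}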
\cite{greenrit03} proved the first result closely related to prediction as measured in (\ref{predict}). Without any conditions on the deterministic design matrix $\mathbf{X}$, except that the columns are normalized such that $(n^{-1} \mathbf{X}^T \mathbf{X})_{jj} \equiv 1$, one has with probability at least $1 - 2 \exp(- t^2/2)$: \begin{eqnarray}\label{slow-rate} & &\|\mathbf{X} (\hat{\beta}_{\mathrm{Lasso}}(\lambda) - \beta^0)\|_2^2/n \le 3/2 \lambda \|\beta^0\|_1,\nonumber\\ & &\lambda = 4 \sigma \sqrt{\frac{t^2 + 2 \log(p)}{n}}, \end{eqnarray} see \citet[Cor.6.1]{pbvdg11}. Thereby, we assume Gaussian errors, but such an assumption can be relaxed \citep[formula (6.5)]{pbvdg11}. From an asymptotic point of view (where $p$ and $n$ diverge to $\infty$), the regularization parameter $\lambda \asymp \sqrt{\log(p)/n}$ leads to consistency for prediction if the truth is sparse with respect to the $\ell_1$-norm such that $\|\beta^0\|_1 = o(\lambda^{-1}) = o(\sqrt{n/\log(p)})$. The convergence rate is then at best $O_P(\lambda) = O_P(\sqrt{\log(p)/n})$, assuming $\|\beta^0\|_1 \asymp 1$. Such a slow rate of convergence can be improved under additional assumptions on the design matrix $\mathbf{X}$. The ill-posedness of the design matrix can be quantified using the concept of ``modified'' eigenvalues. Consider the matrix $\hat{\Sigma} = n^{-1} \mathbf{X}^T \mathbf{X}$. The smallest eigenvalue of $\hat{\Sigma}$ is \begin{eqnarray*} \lambda_{\mathrm{min}}(\hat{\Sigma}) = \min_{\beta} \beta^T \hat{\Sigma} \beta. \end{eqnarray*} Of course, $\lambda_{\mathrm{min}}(\hat{\Sigma})$ equals zero if $p > n$. Instead of taking the minimum on the right-hand side over all $p \times 1$ vectors $\beta$, we replace it by a \emph{constrained} minimum, typically over a cone. This leads to the concept of restricted eigenvalues \citep{brt09,koltch09b,koltch09a,rasketal10} or weaker forms such as the compatibility constants \citep{vandeGeer:07a} or a further slight weakening of the latter \citep{sunzhang11}. Relations among the different conditions and ``modified'' eigenvalues are discussed in \cite{van2009conditions} and \citet[Ch.6.13]{pbvdg11}. Assuming that the smallest ``modified'' eigenvalue is larger than zero, one can derive an oracle inequality of the following prototype: with probability at least $1 - 2 \exp(- t^2/2)$ and using $\lambda$ as in (\ref{slow-rate}): \begin{eqnarray}\label{oracle-ineq} \|\mathbf{X}(\hat{\beta}_{\mathrm{Lasso}}(\lambda) - \beta^0)\|_2^2/n + \lambda \|\hat{\beta}_{\mathrm{Lasso}} - \beta^0\|_1 \le 4 \lambda^2 s_0/\phi_0^2, \end{eqnarray} where $s_0 = |S_0|$ is the number of active variables and $\phi_0$ is the compatibility constant (smallest ``modified'' eigenvalue) of the fixed design matrix $\mathbf{X}$ \citep[Cor.6.2]{pbvdg11}. Again, this holds by assuming Gaussian errors, but the result can be extended to non-Gaussian distributions. From (\ref{oracle-ineq}), we have two immediate implications: from an asymptotic point of view, using $\lambda \asymp \sqrt{\log(p)/n}$ and assuming that $\phi_0$ is bounded away from 0, \begin{eqnarray} & &\|\mathbf{X}(\hat{\beta}_{\mathrm{Lasso}}(\lambda) - \beta^0)\|_2^2/n = O_P(s_0 \log(p)/n),\label{fast-rate}\\ & &\|\hat{\beta}_{\mathrm{Lasso}}(\lambda) - \beta^0\|_1 = O_P(s_0 \sqrt{\log(p)/n}),\label{lassoell1} \end{eqnarray} i.e., a fast convergence rate for prediction as in (\ref{fast-rate}) and an $\ell_1$-norm bound for the estimation error.
We note that the oracle convergence rate, where an oracle would know the active set $S_0$, is $O_P(s_0/n)$: the $\log(p)$-factor is the price to pay for not knowing the active set $S_0$. An $\ell_2$-norm bound can be derived as well: $\|\hat{\beta}_{\mathrm{Lasso}}(\lambda) - \beta^0\|_2 = O_P(\sqrt{s_0 \log(p)/n})$, assuming a slightly stronger restricted eigenvalue condition. Results along these lines have been established by \cite{buneaetal06}, \cite{geer07}, who covers generalized linear models as well, \cite{zhang2008sparsity}, \cite{MY08}, and \cite{brt09}, among others. The Lasso performs variable selection: a simple estimator of the active set $S_0$ is $\hat{S}_{\mathrm{Lasso}}(\lambda) = \{j;\ \hat{\beta}_{\mathrm{Lasso};j}(\lambda) \neq 0\}$. In order that $\hat{S}_{\mathrm{Lasso}}(\lambda)$ has good accuracy for $S_0$, we have to require that the non-zero regression coefficients are sufficiently large (since otherwise, we cannot detect the variables in $S_0$ with high probability). We make a ``beta-min'' assumption whose asymptotic form reads as \begin{eqnarray}\label{beta.min} \min_{j \in S_0} |\beta_j^0| \gg \sqrt{s_0 \log(p)/n}. \end{eqnarray} Furthermore, when making a restrictive assumption on the design, called neighborhood stability, or assuming the equivalent irrepresentable condition, and choosing a suitable $\lambda \gg \sqrt{\log(p)/n}$: \begin{eqnarray*} \mathbb{P}[\hat{S}_{\mathrm{Lasso}}(\lambda) = S_0] \to 1, \end{eqnarray*} see \cite{mebu06} and \cite{zhaoyu06}; \cite{Wai08} establishes exact scaling results. The ``beta-min'' assumption in (\ref{beta.min}) as well as the irrepresentable condition on the design are restrictive and non-checkable. Furthermore, these conditions are essentially necessary \citep{mebu06,zhaoyu06}. Thus, under weaker assumptions, we can only derive a weaker yet useful result about variable screening. Assuming a restricted eigenvalue condition on the fixed design $\mathbf{X}$ and the ``beta-min'' condition in (\ref{beta.min}), we still have asymptotically that for $\lambda \asymp \sqrt{\log(p)/n}$: \begin{eqnarray}\label{var-screening} \mathbb{P}[\hat{S}(\lambda) \supseteq S_0] \to 1\ (n \to \infty). \end{eqnarray} The cardinality of the estimated active set (typically) satisfies $|\hat{S}(\lambda)| \le \min(n,p)$: thus, if $p \gg n$, we achieve a massive and often useful dimensionality reduction in the original covariates. We summarize that a slow convergence rate for prediction ``always'' holds. Assuming some ``constrained minimal eigenvalue'' condition on the fixed design $\mathbf{X}$, we obtain the fast convergence rate in (\ref{fast-rate}) and an estimation error bound as in (\ref{lassoell1}); with the additional ``beta-min'' assumption, we obtain the practically useful variable screening property in (\ref{var-screening}). For consistent variable selection, we necessarily need a (much) stronger condition on the fixed design, and it is questionable whether such a strong condition holds in a practical problem. Hence, variable selection might be a too ambitious goal with the Lasso. That is why the original expansion of the acronym Lasso (Least Absolute Shrinkage and Selection Operator) may be better re-read as Least Absolute Shrinkage and \emph{Screening} Operator. We refer to \cite{pbvdg11} for an extensive treatment of the properties of the Lasso.
\subsubsection{Other methods} Of course, the three main inference tasks in a high-dimensional linear model, as described by (\ref{predict}), (\ref{est-norms}) and (\ref{var-sel}), can be pursued with other methods than the Lasso. An interesting line of proposals includes concave penalty functions instead of the $\ell_1$-norm in the Lasso, see for example \cite{fanli01} or \cite{zhang2010}. The adaptive Lasso \citep{zou06}, analyzed in the high-dimensional setting by \citet{huangetal06} and \citet{geer11}, can be interpreted as an approximation of some concave penalization approach \citep{zouli08}. A procedure related to the adaptive Lasso is the relaxed Lasso \citep{Meinshausen:05}. Another method is the Dantzig selector \citep{cantao07}, which has statistical properties similar to those of the Lasso \citep{brt09}. Other algorithms include orthogonal matching pursuit (which is essentially forward variable selection) and $L_2$Boosting (matching pursuit), which have desirable properties \citep{Tropp04,pb06}. Quite different from estimation of the high-dimensional parameter vector are variable screening procedures which aim for a property analogous to (\ref{var-screening}). Prominent examples include the ``Sure Independence Screening'' (SIS) method \citep{fanlv07}; high-dimensional variable screening or selection properties have also been established for forward variable selection \citep{wang09} and for the PC-algorithm \citep{pbkama09} (``PC'' stands for the first names of its inventors, Peter Spirtes and Clark Glymour). \subsection{Assigning uncertainties and p-values for high-dimensional regression}\label{subsec.uncertass} At the core of statistical inference is the specification of statistical uncertainties, significance and confidence. For example, instead of having a variable selection result where the probability in (\ref{var-sel}) is large, we would like to have measures controlling a type I error (false positive selections), including p-values which are adjusted for large-scale multiple testing, or constructions of confidence intervals or regions. In the high-dimensional setting, answers to these core goals are challenging. \cite{mebu10} propose Stability Selection, a very generic method which is able to control the expected number of false positive selections: that is, denoting by $V = |\hat{S} \cap S_0^c|$ the number of false positive selections, Stability Selection yields a finite-sample upper bound on $\mathbb{E}[V]$ (not only for linear models but also for many other inference problems). To achieve this, a very restrictive (but presumably non-necessary) exchangeability condition is required which, in a linear model, is implied by a restrictive assumption on the design matrix. On the positive side, there is no requirement of a ``beta-min'' condition as in (\ref{beta.min}), and the method seems to provide reliable control of $\mathbb{E}[V]$. \citet{WR08} propose a procedure for variable selection based on sample splitting. Using their idea and extending it to multiple sample splitting, \cite{memepb09} develop a much more stable method for the construction of p-values for the hypotheses $H_{0,j}:\ \beta^0_j =0\ (j=1,\ldots ,p)$ and for adjusting them in a non-naive way for multiple testing over $p$ (dependent) tests. The main drawback of this procedure is its required ``beta-min'' assumption in (\ref{beta.min}).
This is very undesirable: for statistical hypothesis testing, the test should control the type I error regardless of the size of the coefficients, while its power should be large when the absolute value of the coefficient is large. Thus, we should avoid assuming (\ref{beta.min}). Up to now, for the high-dimensional linear model case with $p \gg n$, it seems that only \cite{zhangzhang11} managed to construct a procedure which leads to statistical tests for $H_{0,j}$ without assuming a ``beta-min'' condition. \subsection{A loose description of our new results} Our starting point is Ridge regression for estimating the high-dimensional regression parameter. We then develop a bias correction, addressing the issue that Ridge regression is estimating the regression coefficient vector projected to the row space of the design matrix: the corrected estimator is denoted by $\hat{\beta}_{\mathrm{corr}}$. Theorem \ref{th1} describes that under the null-hypothesis, the distribution of a suitably normalized $a_{n,p} |\hat{\beta}_{\mathrm{corr}}|$ can be asymptotically and stochastically (componentwise) upper-bounded: \begin{eqnarray}\label{formula-descr} & &a_{n,p} |\hat{\beta}_{\mathrm{corr}}| \stackrel{\mathrm{as.}}{\preceq} (|Z_j| + \Delta_j)_{j=1}^p,\nonumber\\ & &(Z_1,\ldots ,Z_p) \sim {\cal N}_p(0,\sigma^2 n^{-1} \Omega), \end{eqnarray} for some \emph{known} positive definite matrix $\Omega$ and some \emph{known} constants $\Delta_j$. This is the key to deriving p-values based on the stochastic upper bound. It can be used for the construction of p-values for individual hypotheses $H_{0,j}$ as well as for more global hypotheses $H_{0,G}$ for \emph{any} subset $G \subseteq \{1,\ldots ,p\}$, including cases where $G$ is (very) large. Furthermore, Theorem \ref{th2} justifies a simple approach for controlling the familywise error rate when considering multiple testing of regression hypotheses. Our multiple testing adjustment method itself is closely related to the Westfall-Young permutation procedure \citep{westyoung93} and hence, it offers high power, especially in the presence of dependence among the many test-statistics \citep{memabu11}. \subsubsection{Relation to other work}\label{subsec.otherwork} Our new method as well as the approach in \cite{zhangzhang11} provide p-values (and the latter also confidence intervals) without assuming a ``beta-min'' condition. Both build on linear estimators combined with a correction based on a non-linear initial estimator such as the Lasso. Using e.g. the Lasso directly leads to the problem of characterizing the distribution of the estimator (in a tractable form): this seems very difficult in high-dimensional settings, while it has been worked out for low-dimensional problems \citep{knfu00}. The work by \cite{zhangzhang11} is the only one which studies questions and goals (sufficiently closely) related to those of this paper. The approach by \cite{zhangzhang11} is based on the idea of projecting the high-dimensional parameter vector to low-dimensional components, as occurring naturally in the hypotheses $H_{0,j}$ about single components, and then proceeding with a linear estimator. This idea is pursued with the ``efficient score function'' approach from semiparametric statistics \citep{bicketal98}.
The difficulty in the high-dimensional setting is the construction of the score vector $z_j$ from which one can derive a confidence interval for $\beta^0_j$: \citet{zhangzhang11} propose to take the residual vector from the Lasso when regressing $\mathbf{X}^{(j)}$ against all other variables $\mathbf{X}^{(\setminus j)}$ (where $\mathbf{X}^{(J)}$ denotes the design sub-matrix whose columns correspond to the index set $J \subseteq \{1,\ldots ,p\}$). They then prove the asymptotic validity of confidence intervals for finite, sparse linear combinations of $\beta^0$. The difference from our work is primarily a rather different construction of the projection, where we make use of Ridge estimation with a very simple choice of regularization. A drawback of our method is that, typically, it is not theoretically rate-optimal in terms of power. \section{Model, estimation and p-values} Consider one or many null-hypotheses as in (\ref{hypoth}). We are interested in constructing p-values for hypotheses $H_{0,G}$ without imposing a ``beta-min'' condition as in (\ref{beta.min}): the statistical test itself will distinguish whether a regression coefficient is small or not. \subsection{Identifiability}\label{subsec.identif} We consider model (\ref{mod.lin}) with fixed design. Without making additional assumptions on the design matrix $\mathbf{X}$, there is a problem of identifiability. Clearly, if $p > n$ and hence $\mbox{rank}(\mathbf{X}) \le n < p$, there are different parameter vectors $\theta$ such that $\mathbf{X} \beta^0 = \mathbf{X} \theta$. Thus, we cannot identify $\beta^0$ from the distribution of $Y_1,\ldots ,Y_n$ (and fixed design $\mathbf{X}$). \citet{shadeng11} give a characterization of identifiability in a high-dimensional linear model (\ref{mod.lin}) with fixed design. Following their approach, it is useful to consider the singular value decomposition \begin{eqnarray*} & &\mathbf{X} = RSV^T,\\ & &R\ \mbox{$n \times n$ matrix with}\ R^T R = I_n,\\ & &S\ \mbox{$n \times n$ diagonal matrix with singular values}\ s_1,\ldots ,s_n,\\ & &V\ \mbox{$p \times n$ matrix with}\ V^T V = I_n. \end{eqnarray*} Denote by ${\cal R}(\mathbf{X}) \subset \mathbb{R}^p$ the linear space generated by the $n$ rows of $\mathbf{X}$. The projection of $\mathbb{R}^p$ onto ${\cal R}(\mathbf{X})$ is then \begin{eqnarray*} P_{\mathbf{X}} = \mathbf{X}^T (\mathbf{X} \mathbf{X}^T)^{-} \mathbf{X} = V V^T, \end{eqnarray*} where $A^{-}$ denotes the pseudo-inverse of a square matrix $A$. A natural choice of a parameter $\theta^0$ such that $\mathbf{X} \beta^0 = \mathbf{X} \theta^0$ is the projection of $\beta^0$ onto ${\cal R}(\mathbf{X})$. Thus, \begin{eqnarray}\label{theta} \theta^0 = P_{\mathbf{X}} \beta^0 = V V^T \beta^0. \end{eqnarray} Then, of course, $\beta^0 \in {\cal R}(\mathbf{X})$ if and only if $\beta^0 = \theta^0$. \subsection{Ridge regression}\label{subsec.Ridgeplarge} Consider Ridge regression \begin{eqnarray}\label{Ridgetheta} \hat{\beta} = \mbox{argmin}_{\beta} \|\mathbf{Y} - \mathbf{X} \beta\|_2^2/n + \lambda \|\beta\|_2^2 = (n^{-1} \mathbf{X}^T \mathbf{X} + \lambda I_p)^{-1} n^{-1} \mathbf{X}^T \mathbf{Y}, \end{eqnarray} where $\lambda = \lambda_n$ is a regularization parameter. By construction of the estimator, $\hat{\beta} \in {\cal R}(\mathbf{X})$; and indeed, as discussed below, $\hat{\beta}$ is a reasonable estimator for $\theta^0 = P_{\mathbf{X}} \beta^0$. We denote by \begin{eqnarray*} \hat{\Sigma} = n^{-1} \mathbf{X}^T \mathbf{X}.
\end{eqnarray*} The covariance matrix of the Ridge estimator, multiplied by $n \sigma^{-2}$, is then \begin{eqnarray}\label{omegasvd} \Omega = \Omega(\lambda) &=& (\hat{\Sigma} + \lambda I)^{-1} \hat{\Sigma} (\hat{\Sigma} + \lambda I)^{-1}\nonumber\\ & =& V \mbox{diag}(\frac{s_1^2}{(s_1^2 + \lambda)^2},\ldots ,\frac{s_n^2}{(s_n^2 + \lambda)^2}) V^T, \end{eqnarray} a quantity which will appear again in many places. We assume that \begin{eqnarray}\label{minvar} \Omega_{\mathrm{min}}(\lambda) := \min_{j \in \{1,\ldots ,p\}} \Omega_{jj}(\lambda) > 0. \end{eqnarray} We do not require that $\Omega_{\mathrm{min}}(\lambda)$ is bounded away from zero as a function of $n$ and $p$. Thus, the assumption in (\ref{minvar}) is very mild: a rather peculiar design would be needed to violate the condition, see also the equivalent formulation in formula (\ref{minvar2}) below. Furthermore, (\ref{minvar}) is easily checkable. We denote by $\lambda_{\mathrm{min} \neq 0}(A)$ the smallest non-zero eigenvalue of a symmetric matrix $A$. We then have the following result. \begin{prop}\label{prop1} Consider the Ridge regression estimator $\hat{\beta}$ in (\ref{Ridgetheta}) with regularization parameter $\lambda > 0$. Assume condition (\ref{minvar}), see also (\ref{minvar2}). Then, \begin{eqnarray*} & &\max_{j \in \{1,\ldots ,p\}}|\mathbb{E}[\hat{\beta}_j] - \theta^0_j| \le \lambda \|\theta^0\|_2 \lambda_{\mathrm{min} \neq 0}(\hat{\Sigma})^{-1},\\ & &\min_{j \in \{1,\ldots ,p\}} \mbox{Var}(\hat{\beta}_j) \ge n^{-1} \sigma^2 \Omega_{\mathrm{min}}(\lambda). \end{eqnarray*} \end{prop} A proof is given in Section \ref{sec.proofs}, relying in large part on \citet{shadeng11}. We now discuss under which circumstances the estimation bias is smaller than the standard error. Qualitatively, this happens if $\lambda >0$ is chosen sufficiently small. For a more quantitative discussion, we study the behavior of $\Omega_{\mathrm{min}}(\lambda)$ as a function of $\lambda$, and we obtain an equivalent formulation of (\ref{minvar}). \begin{lemm}\label{lemm1} We have the following: \begin{enumerate} \item \begin{eqnarray*} \Omega_{\mathrm{min}}(\lambda) = \min_j \sum_{r=1}^n \frac{s_r^2}{(s_r^2 + \lambda)^2} V_{jr}^2. \end{eqnarray*} From this we get: \begin{eqnarray}\label{minvar2} \mbox{(\ref{minvar}) holds if and only if}\ \ \min_{1 \le j \le p} \max_{1 \le r \le n, s_r \neq 0} V_{jr}^2 > 0. \end{eqnarray} \item Assuming (\ref{minvar}), \begin{eqnarray*} \Omega_{\mathrm{min}}(0^+) := \lim_{\lambda \searrow 0^+} \Omega_{\mathrm{min}}(\lambda) = \min_j \sum_{r= 1;s_r \neq 0}^n \frac{1}{s_r^2}V_{jr}^2 > 0. \end{eqnarray*} \item \begin{eqnarray}\label{omegamin} \mbox{if (\ref{minvar}) holds:}\ \ 0 < L_C \le \inf_{\lambda \in (0,C]} \Omega_{\mathrm{min}}(\lambda) \le \sup_{\lambda \in (0,C]} \Omega_{\mathrm{min}}(\lambda) \le M_C < \infty, \end{eqnarray} for any $0 < C < \infty$, and where $0 < L_C < M_C < \infty$ are constants which depend on $C$ and on the design matrix $\mathbf{X}$ (and hence on $n$ and $p$). \end{enumerate} \end{lemm} The proof is straightforward using the expression (\ref{omegasvd}). Statement 3 says that for a given data set, the variances of the $\hat{\beta}_j$'s remain in a reasonable range even if we choose $\lambda > 0$ arbitrarily small; it does not imply anything about the behavior as $n$ and $p$ grow (as the data and design matrix change). From Proposition \ref{prop1} we immediately obtain the following result.
\begin{corr} Consider the Ridge regression estimator $\hat{\beta}$ in (\ref{Ridgetheta}) with regularization parameter $\lambda > 0$ satisfying \begin{eqnarray}\label{lambdachoice} \lambda \Omega_{\mathrm{min}}(\lambda) ^{-1/2} \le n^{-1/2} \sigma \|\theta^0\|_2^{-1} \lambda_{\mathrm{min} \neq 0}(\hat{\Sigma}). \end{eqnarray} In addition, assume condition (\ref{minvar}), see also (\ref{minvar2}). Then, \begin{eqnarray*} \max_{j \in \{1,\ldots ,p\}}(\mathbb{E}[\hat{\beta}_j] - \theta^0_j)^2 \le \min_{j \in \{1,\ldots ,p\}} \mbox{Var}(\hat{\beta}_j). \end{eqnarray*} Due to the third statement in Lemma \ref{lemm1} regarding the behavior of $\Omega_{\mathrm{min}}(\lambda)$, (\ref{lambdachoice}) can be fulfilled for a sufficiently small value of $\lambda$ (a more precise characterization of the maximal $\lambda$ which fulfills (\ref{lambdachoice}) would require knowledge of $\|\theta^0\|_2$). \end{corr} \subsection{The projection bias and corrected Ridge regression}\label{subsec.projbias} As discussed in Section \ref{subsec.identif}, Ridge regression is estimating the parameter $\theta^0 = P_{\mathbf{X}} \beta^0$ given in (\ref{theta}). Thus, in general, besides the estimation bias governed by the choice of $\lambda$, there is an additional projection bias $B_j = \theta^0_j - \beta^0_j\ (j=1,\ldots ,p)$. Clearly, \begin{eqnarray*} B_j = (P_{\mathbf{X}} \beta^0)_j - \beta^0_j = (P_{\mathbf{X}})_{jj} \beta^0_j - \beta^0_j + \sum_{k \neq j} (P_{\mathbf{X}})_{jk} \beta^0_k. \end{eqnarray*} For constructing p-values which control the type I error when testing $H_{0,j}$, or $H_{0,G}$ with $j \in G$, the projection bias has a disturbing effect only if $\beta^0_j = 0$ and $\theta^0_j \neq 0$; hence we only have to consider the bias under the null-hypothesis: \begin{eqnarray}\label{hoj} B_{H_0;j} = \sum_{k \neq j} (P_{\mathbf{X}})_{jk} \beta^0_k. \end{eqnarray} The bias $B_{H_0;j}$ is also the relevant quantity under the alternative hypothesis, see the brief comment after Proposition \ref{prop-repr}. We can estimate $B_{H_0;j}$ by \begin{eqnarray*} \hat{B}_{H_0;j} = \sum_{k \neq j} (P_{\mathbf{X}})_{jk} \hat{\beta}_{\mathrm{init};k}, \end{eqnarray*} where $\hat{\beta}_{\mathrm{init}}$ is an initial estimator, such as the Lasso, which guarantees a certain estimation accuracy, see assumption (A) below. This motivates the following bias-corrected Ridge estimator for testing $H_{0,j}$, or $H_{0,G}$ with $j \in G$: \begin{eqnarray}\label{Ridgecorr} \hat{\beta}_{\mathrm{corr};j} = \hat{\beta}_j - \hat{B}_{H_0;j} = \hat{\beta}_j - \sum_{k \neq j} (P_{\mathbf{X}})_{jk} \hat{\beta}_{\mathrm{init};k}. \end{eqnarray} We then have the following representation. \begin{prop}\label{prop-repr} Assume model (\ref{mod.lin}) with Gaussian errors. Consider the corrected Ridge regression estimator $\hat{\beta}_{\mathrm{corr}}$ in (\ref{Ridgecorr}) with regularization parameter $\lambda > 0$, and assume (\ref{minvar}). Then, \begin{eqnarray*} & &\hat{\beta}_{\mathrm{corr};j} = Z_j + \gamma_j\ (j=1,\ldots ,p)\\ & &(Z_1,\ldots ,Z_p) \sim {\cal N}_p(0,n^{-1} \sigma^2 \Omega),\ \Omega = \Omega(\lambda),\\ & &\gamma_j = (P_{\mathbf{X}})_{jj} \beta^0_j - \sum_{k \neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k) + b_j(\lambda),\\ & &b_j(\lambda) = \mathbb{E}[\hat{\beta}_j(\lambda)] - \theta^0_j. \end{eqnarray*} \end{prop} A proof is given in Section \ref{sec.proofs}.
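For concreteness, the estimator (\ref{Ridgecorr}) is straightforward to compute. A minimal numerical sketch (informal, continuing the simulated example from the earlier sketches; here the plain Lasso from above serves as $\hat{\beta}_{\mathrm{init}}$, whereas our empirical examples use the Scaled Lasso) reads:
\begin{verbatim}
lam_ridge = 1.0 / n                    # small lambda, cf. the numerical section
Sigma_hat = X.T @ X / n
beta_ridge = np.linalg.solve(Sigma_hat + lam_ridge * np.eye(p), X.T @ y / n)

V = np.linalg.svd(X, full_matrices=False)[2].T  # p x n right singular vectors
P_X = V @ V.T                                   # projection P_X = V V^T

beta_init = beta_hat                   # Lasso from above as initial estimator
# estimated projection bias: sum_{k != j} (P_X)_{jk} * beta_init_k
bias_hat = P_X @ beta_init - np.diag(P_X) * beta_init
beta_corr = beta_ridge - bias_hat      # bias-corrected Ridge as in (Ridgecorr)
\end{verbatim}
The projection $P_{\mathbf{X}} = V V^T$ is obtained from the singular value decomposition of $\mathbf{X}$, and the correction subtracts exactly the estimated bias $\hat{B}_{H_0;j}$.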
We infer from Proposition \ref{prop-repr} a representation which could be used not only for testing but also for constructing confidence intervals: \begin{eqnarray*} \frac{\hat{\beta}_{\mathrm{corr};j}}{(P_{\mathbf{X}})_{jj}} - \beta_j^0 = \frac{Z_j}{(P_{\mathbf{X}})_{jj}} - \sum_{k \neq j} \frac{(P_{\mathbf{X}})_{jk}}{(P_{\mathbf{X}})_{jj}}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k) + \frac{b_j(\lambda)}{(P_{\mathbf{X}})_{jj}}. \end{eqnarray*} The normalizing factors for the variables $Z_j$, bringing them to the ${\cal N}(0,1)$-scale, are \begin{eqnarray*} a_{n,p;j}(\sigma) = n^{1/2} \sigma^{-1} \Omega_{jj}^{-1/2}\ (j=1,\ldots ,p), \end{eqnarray*} which also depend on $\lambda$ through $\Omega = \Omega(\lambda)$. We refer to Section \ref{subsec.anrate}, where the unusually fast divergence of $a_{n,p;j}(\sigma)$ is discussed. The test-statistics we consider are simple functions of $a_{n,p;j}(\sigma) \hat{\beta}_{\mathrm{corr};j}$. \subsection{Stochastic bound for the distribution of the corrected Ridge estimator: asymptotics} We provide here an asymptotic stochastic bound for the distribution of $a_{n,p;j}(\sigma) \hat{\beta}_{\mathrm{corr};j}$ under the null-hypothesis. The asymptotic formulation is compact and the basis for the construction of p-values in Section \ref{sec.pvalues}, but we give more detailed finite-sample results in Section \ref{sec.finites}. We consider a triangular array of observations from a linear model as in (\ref{mod.lin}): \begin{eqnarray}\label{mod.lin2} \mathbf{Y}_n = \mathbf{X}_n \beta^0_n + \varepsilon_n,\ n=1,2,\ldots , \end{eqnarray} where all the quantities and also the dimension $p = p_n$ are allowed to change with $n$. We make the following assumption. \begin{description} \item[(A)] There are constants $\Delta_j = \Delta_{j,n} > 0$ such that \begin{eqnarray*} \mathbb{P}[\cap_{j=1}^{p_n} \{|a_{n,p;j}(\sigma) \sum_{k \neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k)| \le \Delta_{j,n}\}] \to 1\ (n \to \infty). \end{eqnarray*} \end{description} We will discuss in Section \ref{subsec.Delta} constructions for such bounds $\Delta_j$ (which are typically not negligible). Our next result is the key to obtaining a p-value for testing the null-hypothesis $H_{0,j}$ or $H_{0,G}$, saying that asymptotically, \begin{eqnarray*} a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| \stackrel{\mathrm{as.}}{\preceq} |W| + \Delta_j, \end{eqnarray*} where $W \sim {\cal N}(0,1)$, and similarly for the multi-dimensional version with $\hat{\beta}_{\mathrm{corr};G}$ (where $\preceq$ denotes ``stochastically smaller or equal to''). \begin{theo}\label{th1} Assume model (\ref{mod.lin2}) with fixed design and Gaussian errors. Consider the corrected Ridge regression estimator $\hat{\beta}_{\mathrm{corr}}$ in (\ref{Ridgecorr}) with regularization parameter $\lambda_n > 0$ such that \begin{eqnarray*} \lambda_n \Omega_{\mathrm{min}}(\lambda_n)^{-1/2} = o\big(n^{-1/2} \|\theta^0\|_2^{-1} \lambda_{\mathrm{min} \neq 0}(\hat{\Sigma})\big)\ (n \to \infty), \end{eqnarray*} and assume condition (A) and (\ref{minvar}) (where for the latter, the quantity $\Omega_{\mathrm{min}}(\lambda_n)$ does not need to be bounded away from zero). Then, for $j \in \{1,\ldots ,p_n\}$ and if $H_{0,j}$ holds: for all $u \in \mathbb{R}^+$, \begin{eqnarray*} \limsup_{n \to \infty} \big(\mathbb{P}[a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| > u] - \mathbb{P}[|W| + \Delta_j > u]\big) \le 0, \end{eqnarray*} where $W \sim {\cal N}(0,1)$.
Similarly, for any sequence of subsets $\{G_n\}_n,\ G_n \subseteq \{1,\ldots ,p_n\}$ and if $H_{0,G_n}$ holds: for all $u \in \mathbb{R}^+$, \begin{eqnarray*} \limsup_{n \to \infty} \big(\mathbb{P}[\max_{j \in G_n} a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| > u] - \mathbb{P}[\max_{j \in G_n} (a_{n,p;j}(\sigma) |Z_j| + \Delta_j) > u] \big) \le 0, \end{eqnarray*} where $Z_1,\ldots ,Z_{p_n}$ are as in Proposition \ref{prop-repr}. \end{theo} A proof is given in Section \ref{sec.proofs}. As noted above, due to the third statement in Lemma \ref{lemm1}, the condition for $\lambda_n$ is reasonable. We note that the distribution of $\max_{j \in G_n} (a_{n,p;j}(\sigma) |Z_j| + \Delta_j)$ does not depend on $\sigma$ and can be easily computed via simulation. \subsubsection{Bounds $\Delta_j$ in assumption (A)}\label{subsec.Delta} We discuss an approach for constructing the bounds $\Delta_j$. As mentioned above, they should not involve any unknown quantities so that we can use them for constructing p-values from the distribution of $|W| + \Delta_j$ or $\max_{j \in G_n} (a_{n,p;j}(\sigma) |Z_j| + \Delta_j)$, respectively. We rely on the (crude) bound \begin{eqnarray}\label{crudebound} |a_{n,p;j}(\sigma) \sum_{k \neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k)| \le a_{n,p;j}(\sigma) \max_{k \neq j}|(P_{\mathbf{X}})_{jk}| \|\hat{\beta}_{\mathrm{init}} - \beta^0\|_1. \end{eqnarray} To proceed further, we consider the Lasso as initial estimator. Due to (\ref{oracle-ineq}) we obtain \begin{eqnarray}\label{crudebound2} |a_{n,p;j}(\sigma) \sum_{k \neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k)| \le \max_{k \neq j} |a_{n,p;j}(\sigma)(P_{\mathbf{X}})_{jk}| 4 \lambda_{\mathrm{Lasso}} s_0 \phi_0^{-2}, \end{eqnarray} where the last inequality holds on a set with probability at least $1 - 2\exp(-t^2/2)$ when choosing $\lambda_{\mathrm{Lasso}}$ as in (\ref{slow-rate}). The assumptions we require are summarized next. \begin{lemm}\label{lemm.bound} Consider the linear model (\ref{mod.lin2}) with fixed design, having normalized columns $\hat{\Sigma}_{jj} \equiv 1$, which satisfies the compatibility condition with constant $\phi_0^2 = \phi_{0,n}^2$. Consider the Lasso as initial estimator $\hat{\beta}_{\mathrm{init}}$ with regularization parameter $\lambda_{\mathrm{Lasso}} = 4 \sigma \sqrt{C\log(p_n)/n}$ for some $2 < C < \infty$. Assume that the sparsity $s_0 = s_{0,n} = o((n/\log(p_n))^{\xi})\ (n \to \infty)$ for some $0 < \xi < 1/2$, and that $\liminf_{n \to \infty} \phi_{0,n}^2 > 0$. Then, \begin{eqnarray}\label{bound1} \Delta_j := \max_{k \neq j} |a_{n,p;j}(\sigma) (P_{\mathbf{X}})_{jk}|(\log(p)/n)^{1/2 - \xi} \end{eqnarray} satisfies assumption (A). \end{lemm} A proof follows from (\ref{crudebound2}). We summarize the results as follows. \begin{corr} Assume the conditions of Theorem \ref{th1} without condition (A) and the conditions of Lemma \ref{lemm.bound}. Then, when using the Lasso as initial estimator, the statements in Theorem \ref{th1} hold. \end{corr} The construction of the bound in (\ref{bound1}) requires the compatibility condition on the design and an upper bound for the sparsity $s_0$. While the former is an identifiability condition, and some form of identifiability assumption is certainly necessary, the latter condition about knowing the magnitude of the sparsity is not very elegant.
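Continuing the sketches above, the bound (\ref{bound1}) and the normalizing factors $a_{n,p;j}(\sigma)$ are directly computable from the design (informal illustration; for simplicity the true $\sigma$ of the simulation is used, whereas in practice one plugs in $\hat{\sigma}$, see Section \ref{sec.pvalues}):
\begin{verbatim}
xi = 0.05                              # as used in the numerical section
A = np.linalg.inv(Sigma_hat + lam_ridge * np.eye(p))
Omega = A @ Sigma_hat @ A              # Omega(lambda) as in (omegasvd)
a_np = np.sqrt(n) / (sigma * np.sqrt(np.diag(Omega)))   # a_{n,p;j}(sigma)

P_off = np.abs(P_X - np.diag(np.diag(P_X)))  # |(P_X)_{jk}| for k != j
Delta = a_np * P_off.max(axis=1) * (np.log(p) / n) ** (0.5 - xi)  # (bound1)
\end{verbatim}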
When assuming bounded sparsity $s_{0,n} \le M < \infty$ for all $n$, we can choose $\xi = 0$ with an additional constant $M$ on the right-hand side of (\ref{bound1}). In our practical examples in Section \ref{sec.numeric}, we use $\xi = 0.05$. \subsection{P-values}\label{sec.pvalues} Our construction of p-values is based on the asymptotic distributions in Theorem \ref{th1}. For an individual hypothesis $H_{0,j}$, we define the p-value for the two-sided alternative as \begin{eqnarray}\label{pvalue1} P_j = 2 \big(1 - \Phi \big((a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| - \Delta_j)_+ \big) \big). \end{eqnarray} Of course, we could also consider one-sided alternatives with the obvious modification for $P_j$. For a more general hypothesis $H_{0,G}$ with $|G| > 1$, we use the maximum as test statistic (but other statistics such as weighted sums could be chosen as well) and denote by \begin{eqnarray*} & &\hat{\gamma}_{G} = \max_{j \in G} a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}|,\\ & &J_G(c) = \mathbb{P}[\max_{j \in G} (a_{n,p;j}(\sigma) |Z_j| + \Delta_j) \le c], \end{eqnarray*} where the latter is independent of $\sigma$ and can be easily computed via simulation ($Z_1,\ldots ,Z_p$ are as in Proposition \ref{prop-repr}). Then, the p-value for $H_{0,G}$, against the alternative being the complement $H_{0,G}^c$, is defined as \begin{eqnarray}\label{pvalue2} P_{G} = 1 - J_G(\hat{\gamma}_{G}). \end{eqnarray} We note that when $\Delta_j \equiv \Delta$ is the same for all $j$, we can rewrite $P_G = 1 - \mathbb{P}[\max_{j \in G} a_{n,p;j}(\sigma)|Z_j| \le (\hat{\gamma}_{G} - \Delta)_+]$, which is a direct analogue of (\ref{pvalue1}). Error control follows immediately from the construction of the p-values. \begin{corr}\label{corr.pvalue} Assume the conditions in Theorem \ref{th1}. Then, for any $0 < \alpha < 1$, \begin{eqnarray*} & &\limsup_{n \to \infty} \mathbb{P}[P_j \le \alpha] - \alpha \le 0\ \mbox{if $H_{0,j}$ holds},\\ & &\limsup_{n \to \infty} \mathbb{P}[P_G \le \alpha] - \alpha \le 0\ \mbox{if $H_{0,G}$ holds}. \end{eqnarray*} Furthermore, for any sequence $\alpha_n \to 0\ (n \to \infty)$ which converges sufficiently slowly, the statements also hold when replacing $\alpha$ by $\alpha_n$. \end{corr} A discussion of the detection power of the method is given in Section \ref{subsec.detection}. Further remarks about these p-values are given in Section \ref{subsec.outlookbound}. \subsubsection{Estimation of $\sigma$} In practice, for the p-values in (\ref{pvalue1}) and (\ref{pvalue2}), we use the normalizing factor $a_{n,p;j}(\hat{\sigma})$ with an estimate $\hat{\sigma}$. These p-values asymptotically control the type I error if $\mathbb{P}[\hat{\sigma} \ge \sigma] \to 1\ (n \to \infty)$. This follows immediately from the construction. We propose to use the estimator $\hat{\sigma}$ from the Scaled Lasso method \citep{sunzhang11}. Assuming $s_{0} \log(p)/n = o(1)\ (n \to \infty)$ and the compatibility condition for the design, \citet{sunzhang11} prove that $|\hat{\sigma}/\sigma - 1| = o_P(1)\ (n \to \infty)$. \section{Multiple testing}\label{sec.multtest} We aim to strongly control the familywise error rate $\mathbb{P}[V>0]$, where $V$ is the number of false positive selections. For simplicity, we consider first individual hypotheses $H_{0,j}\ (j \in \{1,\ldots ,p\})$. The generalization to multiple testing of general hypotheses $H_{0,G}$ with $|G| > 1$ is discussed in Section \ref{subsec.multtestgen}.
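Before turning to the construction of corrected p-values, we illustrate how the p-values in (\ref{pvalue1}) and (\ref{pvalue2}), as well as the simulation-based correction developed next, can be computed (informal sketch, continuing the ones above; the draws of $Z$ are generated exactly as $(\hat{\Sigma} + \lambda I)^{-1} n^{-1} \mathbf{X}^T \varepsilon$ with $\varepsilon \sim {\cal N}_n(0,\sigma^2 I)$, whose covariance is $\sigma^2 n^{-1} \Omega$):
\begin{verbatim}
from scipy.stats import norm

stat = a_np * np.abs(beta_corr)        # normalized test statistics
P = 2 * (1 - norm.cdf(np.clip(stat - Delta, 0, None)))  # (pvalue1)

# Monte Carlo draws of Z ~ N_p(0, sigma^2/n * Omega):
B = 1000
eps = sigma * rng.standard_normal((n, B))
Zsim = (A @ (X.T @ eps) / n).T         # B x p array of draws

# Group p-value (pvalue2) for an example subset G:
G = np.arange(10)
gamma_G = stat[G].max()
P_G = np.mean((np.abs(Zsim[:, G]) * a_np[G] + Delta[G]).max(axis=1) > gamma_G)

# Westfall-Young-type correction (pcorr) with zeta = 0:
minP_sim = (2 * (1 - norm.cdf(np.abs(Zsim) * a_np))).min(axis=1)
P_corr = np.array([np.mean(minP_sim <= pj) for pj in P])
\end{verbatim}
The simulation only requires generating dependent, jointly Gaussian random variables; no refitting of $\hat{\beta}_{\mathrm{corr}}$ is needed.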
Based on the individual p-values $P_j$, we want to construct corrected p-values $P_{\mathrm{corr};j}$ corresponding to the following decision rule: \begin{eqnarray*} \mbox{reject $H_{0,j}$ if $P_{\mathrm{corr};j} \le \alpha$}\ (0 < \alpha < 1). \end{eqnarray*} We denote the associated estimated set of rejected hypotheses (the set of significant variables) by $\hat{S}_{\alpha} = \{j;\ P_{\mathrm{corr};j} \le \alpha\}$. Furthermore, recall that $S_0 = \{j;\ \beta^0_j \neq 0\}$ is the set of true active variables. The number of false positives using the nominal significance level $\alpha$ is then denoted by \begin{eqnarray*} V_{\alpha} = |\hat{S}_{\alpha} \cap S_0^c|. \end{eqnarray*} The goal is to construct $P_{\mathrm{corr};j}$ such that $\mathbb{P}[V_{\alpha} > 0] \le \alpha$, or that the latter holds at least in an asymptotic sense. The method we describe here is closely related to the Westfall-Young procedure \citep{westyoung93}. Consider the variables $Z_1,\ldots ,Z_p \sim {\cal N}_p(0,\sigma^2 n^{-1} \Omega)$ appearing in Proposition \ref{prop-repr} or Theorem \ref{th1}, and consider the distribution function \begin{eqnarray*} F_Z(c) = \mathbb{P}[\min_{1 \le j \le p} 2(1 - \Phi(a_{n,p;j}(\sigma) |Z_j|)) \le c]. \end{eqnarray*} We define \begin{eqnarray}\label{pcorr} P_{\mathrm{corr};j} = F_Z(P_j + \zeta), \end{eqnarray} where $\zeta >0$ is an arbitrarily small number, e.g. $\zeta = 0.01$ for using the method in practice. Regarding the choice of $\zeta = 0$ (which we use in all empirical examples in Section \ref{sec.numeric}), see the Remark appearing after Theorem \ref{th2} below. The distribution function $F_Z(\cdot)$ is independent of $\sigma$ and can be easily computed via simulation of the dependent, mean zero, jointly Gaussian variables $Z_1,\ldots ,Z_p$. It is computationally (much) faster than simulation of the so-called minP-statistics \citep{westyoung93}, which would require fitting $\hat{\beta}_{\mathrm{corr}}$ many times. \subsection{Asymptotic justification of the multiple testing procedure} We first derive familywise error control in an asymptotic sense. For a finite sample result, see Section \ref{sec.finites}. We consider the framework as in (\ref{mod.lin2}). \begin{theo}\label{th2} Assume the conditions in Theorem \ref{th1}. For the p-value in (\ref{pvalue1}) and using the correction in (\ref{pcorr}) with $\zeta >0$, we have: for $0 < \alpha < 1$, \begin{eqnarray*} \limsup_{n \to \infty} \mathbb{P}[V_{\alpha}>0] \le \alpha. \end{eqnarray*} Furthermore, for any sequence $\alpha_n \to 0\ (n \to \infty)$ which converges sufficiently slowly, it holds that $\limsup_{n \to \infty} \mathbb{P}[V_{\alpha_n}>0] - \alpha_n \le 0$. \end{theo} A proof is given in Section \ref{sec.proofs}. \paragraph{Remark: Multiple testing correction in (\ref{pcorr}) with $\zeta=0$.} We could modify the correction in (\ref{pcorr}) using $\zeta=0$: the statement in Theorem \ref{th2} can then be derived when making the additional assumption that \begin{eqnarray}\label{derivGZ} \sup_{n \in \mathbb{N}} \sup_{u} |F'_{n,Z}(u)| < \infty, \end{eqnarray} where $F_{n,Z}(\cdot) = F_Z(\cdot)$ is the distribution function appearing in (\ref{pcorr}), which depends in the asymptotic framework on $n$ and (mainly on) $p = p_n$. Verifying (\ref{derivGZ}) may not be easy for general matrices $\Omega = \Omega_{n,p_n}$.
However, for the special case where $Z_1,\ldots ,Z_p$ are independent, \begin{eqnarray*} F'_Z(u) = p \varphi(u) (1 - \Phi(u))^{p-1}, \end{eqnarray*} which is nicely bounded as a function of $u$, over all values of $p$. \subsection{Multiple testing of general hypotheses}\label{subsec.multtestgen} The methodology for testing many general hypotheses $H_{0,G_j}$ with $|G_j| \ge 1$, $j=1,\ldots ,m$, is the same as before. Denote by $S_{0,G} = \{j;\ H_{0,G_j}\ \mbox{does not hold}\}$ and by $S_{0,G}^c = \{j;\ H_{0,G_j}\ \mbox{holds}\}$; note that these sets are determined by the true parameter vector $\beta^0$. Since the p-value in (\ref{pvalue2}) is of the form $P_{G_j} = 1 - J_{G_j}(\hat{\gamma}_{G_j})$, we consider \begin{eqnarray*} F_{G,Z}(c) = \mathbb{P}[\min_{j =1,\ldots ,m} (1 - J_{G_j}(\gamma_{G_j,Z})) \le c],\ \gamma_{G,Z} = \max_{j \in G} (a_{n,p;j}(\sigma) |Z_j|), \end{eqnarray*} which can be easily computed via simulation (and it is independent of $\sigma$). We then define the corrected p-value as \begin{eqnarray*} P_{\mathrm{corr};G_j} = F_{G,Z} (P_{G_j} + \zeta), \end{eqnarray*} where $\zeta > 0$ is a small value such as $\zeta = 0.01$; see also the definition in (\ref{pcorr}) and the corresponding discussion for the case where $\zeta =0$ (which now applies to the distribution function $F_{G,Z}$ instead of $F_Z$). We denote by $\hat{S}_{G,\alpha} = \{j;\ P_{\mathrm{corr};G_j} \le \alpha\}$ and $V_{G,\alpha} = |\hat{S}_{G,\alpha} \cap S_{0,G}^c|$. If $J_{G_j}(\cdot)$ has a bounded first derivative, for all $j$, we can obtain the same result, under the same conditions, as in Theorem \ref{th2} (and without making a condition on the cardinalities of $G_j$). If $J_{G_j}(\cdot)$ does not have a bounded first derivative, we can get around this problem by modifying the p-value $P_{G_j}$ in (\ref{pvalue2}) to $\tilde{P}_{G_j} = 1 - J_{G_j}(\hat{\gamma}_{G_j} - \nu)$ for any (small) $\nu >0$ and proceeding with $\tilde{P}_{G_j}$. \section{Sufficient conditions for detection}\label{subsec.detection} We consider detection of alternatives $H_{0,j}^c$ or $H_{0,G}^c$ with $|G| > 1$. We use again the notation $S_0$ as in Section \ref{sec.multtest}, and we write $a_n \gg b_n$ if $a_n/b_n \to \infty\ (n \to \infty)$. \begin{theo}\label{th.detection} Consider the setting and assumptions as in Theorem \ref{th1}. \begin{enumerate} \item When considering individual hypotheses $H_{0,j}$: if $j \in S_0$ with \begin{eqnarray*} |\beta^0_j| \gg a_{n,p;j}(\sigma)^{-1}|(P_{\mathbf{X}})_{jj}|^{-1} \max(\Delta_j,1), \end{eqnarray*} there exists an $\alpha_n \to 0\ (n \to \infty)$ such that \begin{eqnarray*} \mathbb{P}[P_j \le \alpha_n] \to 1\ (n \to \infty), \end{eqnarray*} while we still have for $j \in S_0^c$: $\limsup_{n\to \infty} \mathbb{P}[P_j \le \alpha_n] - \alpha_n \le 0$ (see Corollary \ref{corr.pvalue}). \item When considering group hypotheses $H_{0,G}$ with $G = G_n$ and $|G_n| > 1$: if $H_{0,G}^c$ holds, with \begin{eqnarray*} \max_{j \in G_n} |a_{n,p;j}(\sigma)(P_{\mathbf{X}})_{jj}^{-1} \beta_j^0| \gg \max(\max_{j \in G_n} |\Delta_j|, \sqrt{\log(|G_n|)}), \end{eqnarray*} there exists an $\alpha_n \to 0\ (n \to \infty)$ such that \begin{eqnarray*} \mathbb{P}[P_{G_n} \le \alpha_n] \to 1\ (n \to \infty), \end{eqnarray*} while if $H_{0,G}$ holds, $\limsup_{n\to \infty} \mathbb{P}[P_{G_n}\le \alpha_n] - \alpha_n \le 0$ (see Corollary \ref{corr.pvalue}).
\item When considering multiple hypotheses $H_{0,j}$: if for all $j \in S_0$, \begin{eqnarray*} |\beta^0_j| \gg a_{n,p;j}(\sigma)^{-1}|(P_{\mathbf{X}})_{jj}|^{-1} \max(\Delta_j,\sqrt{\log(p_n)}), \end{eqnarray*} there exists an $\alpha_n \to 0\ (n \to \infty)$ such that \begin{eqnarray*} \mathbb{P}[P_{\mathrm{corr};j} \le \alpha_n] \to 1\ (n \to \infty)\ \mbox{for $j \in S_0$}, \end{eqnarray*} while we still have that $\limsup_{n \to \infty} \mathbb{P}[V_{\alpha_n} > 0] - \alpha_n \le 0$ (see Theorem \ref{th2}). \item If in addition, $a_{n,p;j}(\sigma) \to \infty$ for all $j$ appearing in the conditions on $\beta_j^0$, we can replace in all the statements 1-3 the ``$\gg$'' relation by ``$\ge C$'', where $0 < C < \infty$ is a sufficiently large constant. \end{enumerate} \end{theo} A proof is given in Section \ref{sec.proofs}. Under the additional assumptions of Lemma \ref{lemm.bound}, where the Lasso is used as initial estimator and the bounds in (\ref{bound1}) are used, we obtain the bound (for statement 1 in Theorem \ref{th.detection}): \begin{eqnarray}\label{detection2a} |\beta^0_j| \ge C \max \Big(\frac{\max_{k \neq j}|(P_{\mathbf{X}})_{jk}|}{|(P_{\mathbf{X}})_{jj}|} \big(\frac{\log(p_n)}{n}\big)^{1/2 - \xi}, \frac{1}{|(P_{\mathbf{X}})_{jj}|} a_{n,p;j}(\sigma)^{-1} \Big), \end{eqnarray} where $0 < \xi < 1/2$. This can be sharpened using the oracle bound, assuming known order of sparsity: \begin{eqnarray*} \Delta_{\mathrm{orac};j} = D s_{0,n} \max_{k \neq j} a_{n,p;j}(\sigma) |(P_{\mathbf{X}})_{jk}| \sqrt{\log(p_n)/n} \end{eqnarray*} for some $D>0$ sufficiently large (for example, assuming $s_{0,n}$ is bounded, replacing $s_{0,n}$ by $1$ and choosing $D >0$ sufficiently large). It then suffices to require \begin{eqnarray}\label{detection3} & &|\beta^0_j| \ge C\max \Big(\frac{\max_{k \neq j}|(P_{\mathbf{X}})_{jk}|}{|(P_{\mathbf{X}})_{jj}|} s_{0,n} \big(\frac{\log(p_n)}{n} \big)^{1/2},\frac{1}{|(P_{\mathbf{X}})_{jj}| a_{n,p;j}(\sigma)} \Big) \ \mbox{for 1. in Th. \ref{th.detection}},\nonumber\\ & &|\beta^0_j| \ge C \max \Big(\frac{\max_{k \neq j}|(P_{\mathbf{X}})_{jk}|}{|(P_{\mathbf{X}})_{jj}|} s_{0,n}\big(\frac{\log(p_n)}{n} \big)^{1/2},\frac{\sqrt{\log(p_n)}}{|(P_{\mathbf{X}})_{jj}| a_{n,p;j}(\sigma)} \Big) \ \mbox{for 3. in Th. \ref{th.detection}},\nonumber\\ & & \end{eqnarray} and analogously for the second statement in Theorem \ref{th.detection}. \subsection{Order of magnitude of normalizing factors}\label{subsec.anrate} The order of $a_{n,p;j}(\sigma)$ is typically much larger than $\sqrt{n}$ since in high dimensions, $\Omega_{jj}$ is very small. This means that the Ridge estimator $\hat{\beta}_j$ has a much faster convergence rate than $1/\sqrt{n}$ for estimating the projected parameter $\theta^0_j$. This looks counter-intuitive at first sight: the reason for the phenomenon is that $\|\theta^0\|_2$ can be much smaller than $\|\beta^0\|_2$ and hence, Ridge regression (which estimates the parameter $\theta^0$) is operating on a much smaller scale. This fact is essentially an implication of the first statement in Lemma \ref{lemm1} (without the ``$\min_j$'' part). We can write \begin{eqnarray*} \Omega_{jj} = \sum_{r=1}^n \frac{s_r^2}{(s_r^2 + \lambda)^2} V_{jr}^2 = \sum_{r=p-n+1}^p \frac{s_{r-p+n}^2}{(s_{r-p+n}^2 + \lambda)^2} U_{jr}^2, \end{eqnarray*} where the columns of $U = [U_{jr}]_{j,r=1,\ldots ,p}$ contain the $p$ eigenvectors of $\mathbf{X}^T\mathbf{X}$, satisfying $\sum_{j=1}^p U_{jr}^2 = 1$.
For $n \ll p$, only very few terms, namely $n$ of them, are left in the summation, while the normalization of the $U_{jr}^2$ is over all $p$ terms. For further discussion about the fast convergence rate $a_{n,p;j}(\sigma)^{-1}$, see Section \ref{subsec.outlookbound}. While $a_{n,p;j}(\sigma)^{-1}$ is usually small, there is compensation with $(P_{\mathbf{X}})_{jj}^{-1}$, which can be rather large. In the detection bound in e.g. the first part of (\ref{detection3}), both terms appearing in the maximum are often of the same order of magnitude; see also Figure \ref{fig-supp1} in Section \ref{subsec.outlookbound}. Assuming such a balance of terms, we obtain in e.g. the first part of (\ref{detection3}): \begin{eqnarray*} |\beta^0_j| \ge C \frac{\max_{k \neq j} |(P_{\mathbf{X}})_{jk}|}{|(P_{\mathbf{X}})_{jj}|} s_{0,n} \sqrt{\log(p_n)/n}. \end{eqnarray*} The value of $\kappa_j = \max_{k \neq j} |(P_{\mathbf{X}})_{jk}|/|(P_{\mathbf{X}})_{jj}|$ is often a rather small number between 0.05 and 4, see Table \ref{tab1} in Section \ref{sec.numeric}. For comparison, \citet{zhangzhang11} establish, under some conditions, detection for single hypotheses $H_{0,j}$ with $\beta_j^0$ in the $1/\sqrt{n}$ range. For the extreme case with $G_n = \{1,\ldots ,p_n\}$, we are in the setting of detection of the global hypothesis, see for example \citet{ingsteretal10} for a characterization of the detection boundary in the case of independent covariables. Here, our analysis of detection provides only sufficient conditions, for rather general (fixed) design matrices. \section{Numerical results}\label{sec.numeric} As initial estimator for $\hat{\beta}_{\mathrm{corr}}$ in (\ref{Ridgecorr}), we use the Scaled Lasso with scale-independent regularization parameter $\lambda_{\mathrm{Scaled-Lasso}} = 2 \sqrt{\log(p)/n}$: it provides an initial estimate $\hat{\beta}_{\mathrm{init}}$ as well as an estimate $\hat{\sigma}$ for the standard deviation $\sigma$. The parameter $\lambda$ for Ridge regression in (\ref{Ridgetheta}) is always chosen as $\lambda = 1/n$, reflecting the assumption in Theorem \ref{th1} that it should be small. For single testing, we construct p-values as in (\ref{pvalue1}) or (\ref{pvalue2}) with $\Delta_j$ from (\ref{bound1}) with $\xi = 0.05$. For multiple testing with familywise error control, we consider p-values as in (\ref{pcorr}) with $\zeta=0$ (and $\Delta_j$ as above). \subsection{Simulations}\label{subsec.simul} We simulate from the linear model as in (\ref{mod.lin}) with $\varepsilon \sim {\cal N}_n(0,I)$, $n = 100$ and the following configurations: \begin{description} \item[(M1)] For both $p \in \{500,2500\}$, the fixed design matrix is generated from a realization of $n$ i.i.d. rows from ${\cal N}_p(0,I)$. Regarding the regression coefficients, we consider active sets $S_0 = \{1,2,\ldots ,s_0\}$ with $s_0 \in \{3,15\}$ and three different strengths of regression coefficients, where $\beta^0_j \equiv b\ (j \in S_0)$ with $b \in \{0.25,0.5,1\}$. \item[(M2)] The same as in (M1), but for both $p \in \{500,2500\}$, the fixed design matrix is generated from a realization of $n$ i.i.d. rows from ${\cal N}_p(0,\Sigma)$ with $\Sigma_{jk} \equiv 0.8\ (j \neq k)$ and $\Sigma_{jj} = 1$.
\end{description} The resulting signal to noise ratios $\mathrm{SNR} = \|\mathbf{X} \beta^0\|_2/(\sqrt{n} \sigma)$ are rather small: \begin{center} \begin{tabular}{l|cccccc} $p\in \{500,2500\}$ & $(3,0.25)$ & $(3,0.5)$ & $(3,1)$ & $(15,0.25)$ & $(15,0.5)$ & $(15,1)$\\ \hline (M1) & 0.46 & 0.93 & 1.86 & 1.06 & 2.13 & 4.26\\ (M2) & 0.65 & 1.31 & 2.62 & 3.18 & 6.37 & 12.73 \end{tabular} \end{center} Here, a pair such as $(3,0.25)$ denotes the values $s_0=3,\ b=0.25$ (where $b$ is the value of the active regression coefficients). We consider the decision rule at significance level $\alpha = 0.05$ \begin{eqnarray}\label{desc-rule} \mbox{reject $H_{0,j}$ if $P_j \le 0.05$}, \end{eqnarray} for testing single hypotheses, where $P_j$ is as in (\ref{pvalue1}) with plugged-in estimate $\hat{\sigma}$. The considered type I error is the average over non-active variables: \begin{eqnarray}\label{avetypeI} (p - s_0)^{-1} \sum_{j \in S_0^c} \mathbb{P}[P_j \le 0.05], \end{eqnarray} and the average power is \begin{eqnarray}\label{avepower} s_0^{-1} \sum_{j \in S_0} \mathbb{P}[P_j \le 0.05]. \end{eqnarray} For multiple testing, we consider the adjusted p-value $P_{\mathrm{corr};j}$ from (\ref{pcorr}): the decision is as in (\ref{desc-rule}) but with $P_j$ replaced by $P_{\mathrm{corr};j}$. We report the familywise error rate (FWER) $\mathbb{P}[V_{0.05} > 0]$ and the average power as in (\ref{avepower}), the latter using $P_{\mathrm{corr};j}$. The results are displayed in Figure \ref{fig1}, based on 500 simulation runs per setting (with the same fixed design per setting). \begin{figure}[htb!] \centerline{ \subfigure[]{\includegraphics[scale=0.27]{RidgepvalM1}} \subfigure[]{\includegraphics[scale=0.27]{RidgepvalM2}}} \centerline{ \subfigure[]{\includegraphics[scale=0.27]{RidgepvalMTM1new}} \subfigure[]{\includegraphics[scale=0.27]{RidgepvalMTM2new}} } \caption{Simulated data as described in Section \ref{subsec.simul}. (a) and (b): Single testing with average type I error (\ref{avetypeI}) on the x-axis (log-scale) and average power (\ref{avepower}) on the y-axis. (c) and (d): Multiple testing with familywise error rate on the x-axis (log-scale) and average power (\ref{avepower}), but using $P_{\mathrm{corr};j}$, on the y-axis. The vertical dotted line is at abscissa $0.05$. Each point corresponds to a model configuration. (a) and (c): 12 model configurations generated from independent covariates (M1); (b) and (d): 12 model configurations generated from equi-dependent covariates (M2). When an error is zero, we plot it on the log-scale at abscissa $10^{-8}$.} \label{fig1} \end{figure} Subfigure (d) shows that the proposed method exhibits a too large familywise error rate in essentially four of the configurations for multiple testing: this happens in scenarios with strongly correlated variables (model (M2)) where the sparsity $s_0 = 15$ is large and the coefficients are of moderate or large size (scenario (M2) with $s_0=15$ and coefficient size $b=0.25$ is unproblematic). The corresponding numbers of false positives are reported in Table \ref{tab-supp.1} in Section \ref{sec.falsepos}. \subsection{Values of $P_{\mathbf{X}}$} The detection results in (\ref{detection2a}) and (\ref{detection3}) depend on the ratio $\kappa_j = \max_{k \neq j}|(P_{\mathbf{X}})_{jk}|/|(P_{\mathbf{X}})_{jj}|$. We report in Table \ref{tab1} summary statistics of $\{\kappa_j\}_j$ for various datasets.
\begin{table}[!htb] \begin{tabular}{l|ccccc} dataset, $(n,p)$ & $\min_j \kappa_j$ & $0.25$-q$\{\kappa_j\}_j$ & med$\{\kappa_j\}_j$ & $0.75$-q$\{\kappa_j\}_j$ & $\max_j \kappa_j$ \\ \hline (M1), $(100,500)$ & 0.21 & 0.27 & 0.29 & 0.31 & 0.44 \\ (M1), $(100,2500)$ & 0.27 & 0.34 & 0.36 & 0.39 & 0.54 \\ (M2), $(100,500)$ & 0.20 & 0.26 & 0.29 & 0.32 & 0.45 \\ (M2), $(100,2500)$ & 0.26 & 0.33 & 0.36 & 0.39 & 0.59 \\ Motif, $(143,195)$ & 0.05 & 0.10 & 0.13 & 0.18 & 0.47 \\ Riboflavin, $(71,4088)$ & 0.29 & 0.54 & 0.65 & 0.77 & 1.73\\ Leukemia, $(72,3571)$ & 0.32 & 0.44 & 0.50 & 0.58 & 1.57 \\ Colon, $(62,2000)$ & 0.28 & 0.50 & 0.57 & 0.67 & 1.36 \\ Lymphoma, $(62,4026)$ & 0.34 & 0.52 & 0.63 & 0.78 & 1.49 \\ Brain, $(34,5893)$ & 0.51 & 0.63 & 0.67 & 0.74 & 2.44 \\ Prostate, $(102,6033)$ & 0.26 & 0.45 & 0.57 & 0.74 & 3.67 \\ NCI, $(61,5244)$ & 0.37 & 0.52 & 0.61 & 0.79 & 1.76 \end{tabular} \caption{Minimum, maximum and the three quartiles of $\{\kappa_j\}_{j=1}^p$ for various designs $\mathbf{X}$ from different datasets. The first four are from the simulation models in Section \ref{subsec.simul}. Although not relevant for the table, ``Motif'' (see Section \ref{subsec.realdata}) and ``Riboflavin'' have a continuous response, while the last six have a class label \citep{dett04}.}\label{tab1} \end{table} We clearly see that the values of $\kappa_j$ are typically rather small, which implies good detection properties as discussed in Section \ref{subsec.detection}. Furthermore, the values $\max_{k \neq j}|(P_{\mathbf{X}})_{jk}|$ occurring in the construction of $\Delta_j$ in Section \ref{subsec.Delta} are typically very small (not shown here). \subsection{Real data application}\label{subsec.realdata} We consider a motif regression problem for finding binding sites of the HIF1$\alpha$ transcription factor in DNA sequences. The binding sites are also called motifs, and they are typically 6--15 base pairs long (with categorical values $\in \{A,C,G,T\}$). The data consists of a univariate response variable $Y$ from CHIP-chip experiments, measuring the logarithm of the binding intensity of the HIF1$\alpha$ transcription factor on coarse DNA segments. Furthermore, for each DNA segment, we have abundance scores for $p=195$ candidate motifs, based on DNA sequence data. Thus, for each DNA segment $i$ we have $Y_i \in \mathbb{R}$ and $X_i\in \mathbb{R}^p$, where $i=1,\ldots ,n_{\mathrm{tot}}=287$ and $p=195$. We consider a linear model as in (\ref{mod.lin}) and hypotheses $H_{0,j}$ for $j=1,\ldots ,p=195$: rejection of $H_{0,j}$ then corresponds to a significant motif. This dataset has been analyzed in \cite{memepb09}, who found one significant motif using their p-value method for a linear model based on multiple sample splitting (which assumes the unpleasant ``beta-min'' condition in (\ref{beta.min})). Since the dataset has $n_{\mathrm{tot}} > p$ observations, we take one random subsample of size $n = 143 < p=195$. Figure \ref{fig3} reports the single-testing as well as the adjusted p-values for controlling the FWER. There is one significant motif with corresponding FWER-adjusted p-value equal to 0.007, and the method in \citet{memepb09} based on the total sample with $n_{\mathrm{tot}}$ found the same significant variable with FWER-adjusted p-value equal to 0.006. Interestingly, the weakly significant motif with p-value 0.080 is known to be a true binding site for HIF1$\alpha$, thanks to biological validation experiments.
When compared to the Bonferroni-Holm procedure for controlling the FWER based on the raw p-values as shown in Figure \ref{fig3}(a), we have for the variables with smallest p-values: \begin{eqnarray*} \mbox{method as in (\ref{pcorr}):}& &\ 0.007,\ 0.080,\ 0.180,\\ \mbox{Bonferroni-Holm:}& &\ 0.011,\ 0.098,\ 0.242. \end{eqnarray*} Thus, for this example, the multiple testing correction as in Section \ref{sec.multtest} does not provide large improvements in power over the Bonferroni-Holm procedure; but our method is closely related to the Westfall-Young procedure, which has been shown to be asymptotically optimal for a broad class of high-dimensional problems \citep{memabu11}. \begin{figure}[htb!] \centerline{ \subfigure[]{\includegraphics[scale=0.35]{motif}} \subfigure[]{\includegraphics[scale=0.35]{motifMT}} } \caption{Motif regression with $n=143$ and $p=195$. (a) Single-testing p-values as in (\ref{pvalue1}); (b) Adjusted p-values as in (\ref{pcorr}) for FWER control. The p-values are plotted on the log-scale. The horizontal line is at $y=0.05$.} \label{fig3} \end{figure} \section{Finite sample results}\label{sec.finites} We present here finite sample analogues of Theorems \ref{th1} and \ref{th2}. Instead of assumption (A), we assume the following: \begin{description} \item[(A')] There are constants $\Delta_j > 0$ such that \begin{eqnarray*} \mathbb{P}[\cap_{j=1}^{p} \{|a_{n,p;j}(\sigma) \sum_{k \neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k)| \le \Delta_j\}] \ge 1 - \kappa \end{eqnarray*} for some (small) $0 < \kappa < 1$. \end{description} We then have the following result. \begin{prop}\label{prop2} Assume model (\ref{mod.lin}) with Gaussian errors. Consider the corrected Ridge regression estimator $\hat{\beta}_{\mathrm{corr}}$ in (\ref{Ridgecorr}) with regularization parameter $\lambda > 0$, and assume (\ref{minvar}) and condition (A'). Then, with probability at least $1 - \kappa$, for $j \in \{1,\ldots ,p\}$ and if $H_{0,j}$ holds: \begin{eqnarray*} & &a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| \le a_{n,p;j}(\sigma) |Z_j| + \Delta_j + \|a_{n,p} b(\lambda)\|_{\infty},\\ & &\|a_{n,p} b(\lambda)\|_{\infty} = \max_{j=1,\ldots ,p} a_{n,p;j}(\sigma) |b_j(\lambda)| \le \frac{\lambda}{\Omega_{\mathrm{min}}(\lambda)^{1/2}} n^{1/2} \sigma^{-1} \|\theta^0\|_2 \lambda_{\mathrm{min} \neq 0}(\hat{\Sigma})^{-1}. \end{eqnarray*} Similarly, with probability at least $1 - \kappa$, for any subset $G \subseteq \{1,\ldots ,p\}$ and if $H_{0,G}$ holds: \begin{eqnarray*} & &\max_{j \in G} a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| \le \max_{j \in G} \big(a_{n,p;j}(\sigma) |Z_j| + \Delta_j \big) + \|a_{n,p} b(\lambda)\|_{\infty}. \end{eqnarray*} \end{prop} A proof is given in Section \ref{sec.proofs}. Due to the third statement in Lemma \ref{lemm1}, $\Omega_{\mathrm{min}}(\lambda)^{-1/2}$ is bounded for a bounded range of $\lambda \in (0,C]$. Therefore, the bound for $\|a_{n,p} b(\lambda)\|_{\infty}$ can be made arbitrarily small by choosing $\lambda > 0$ sufficiently small. Theorem \ref{th2} is a consequence of the following finite sample result. \begin{prop}\label{prop3} Consider the event ${\cal E}$, with probability $\mathbb{P}[{\cal E}] \ge 1 - \kappa$, on which condition (A') holds.
Then, when using the corrected p-values from (\ref{pcorr}), with $\zeta \ge 0$ (allowing also $\zeta=0$), we obtain approximate strong control of the familywise error rate: \begin{eqnarray*} \mathbb{P}[V_{\alpha}>0] \le F_Z(F_Z^{-1}(\alpha) - \zeta + 2 (2\pi)^{-1/2} \|a_{n,p} b(\lambda)\|_{\infty}) + (1 - \mathbb{P}[{\cal E}]). \end{eqnarray*} A proof is given in Section \ref{sec.proofs}. We immediately get the following bound for $\zeta \ge 0$: \begin{eqnarray*} \mathbb{P}[V_{\alpha}>0] \le \alpha + \sup_u|F'_Z(u)| 2(2\pi)^{-1/2} \|a_{n,p} b(\lambda)\|_{\infty} + (1 - \mathbb{P}[{\cal E}]). \end{eqnarray*} \section{Conclusions} We have proposed a novel construction of p-values for individual and more general hypotheses in a high-dimensional linear model with fixed design and Gaussian errors. We have restricted ourselves to max-type statistics for general hypotheses, but modifications to e.g. weighted sums are straightforward using the representation in Proposition \ref{prop-repr}. A key idea is to use a linear estimator, namely the Ridge estimator, combined with a correction for the potentially substantial bias due to the fact that the Ridge estimator is estimating the regression parameter vector projected onto the row space of the design matrix. The finding that we can ``succeed'' with a corrected Ridge estimator in a high-dimensional context may come as a surprise, as it is well known that Ridge estimation can be very bad for, say, prediction. Nevertheless, our bias-corrected Ridge procedure might not be optimal in terms of power, as indicated in Section \ref{subsec.anrate}. The main assumptions we make are the compatibility condition for the design, i.e., an identifiability condition, and knowledge of an upper bound on the sparsity (see Lemma \ref{lemm.bound}). A related idea of using a linear estimator coupled with a bias correction for deriving confidence intervals has been proposed earlier by \cite{zhangzhang11}. \emph{No tuning parameter.} Our approach does not require the specification of a tuning parameter, except for the issue that we crudely bound the true sparsity as in (\ref{bound1}); we always used $\xi = 0.05$, and the Scaled Lasso initial estimator does not require the specification of a regularization parameter. All our numerical examples were run without tuning the method to a specific setting, and error control with our p-value approach is often conservative while the power seems reasonable. Furthermore, our method is generic, allowing to test any $H_{0,G}$ regardless of whether the size of $G$ is small or large: we present in Section \ref{sec.addsimul} an additional simulation where $|G|$ is large. For multiple testing correction or for general hypotheses with sets $G$ where $|G| > 1$, we rely on the power of simulation, since analytical formulae for max-type statistics under dependence seem not to exist: yet, our simulation is extremely simple, as we only need to generate dependent multivariate Gaussian random variables. \emph{Small variance of the Ridge estimator.} As indicated before, it is surprising that corrected Ridge estimation performs rather well for statistical testing. Although the bias due to the projection $P_{\mathbf{X}}$ can be substantial, it is compensated by the small variances $\sigma^2 n^{-1} \Omega_{jj}$ of the Ridge estimator. It is \emph{not} true that the $\Omega_{jj}$'s become large as $p$ increases: that is, the Ridge estimator has small variance for an individual component when $p$ is very large, see Section \ref{subsec.anrate}.
Therefore, the detection power of the method remains reasonably good, as discussed in Section \ref{subsec.detection}. Viewed from a different perspective, even though $|(P_{\mathbf{X}})_{jj} \beta^0_j|$ may be very small, the normalized version $a_{n,p;j}(\sigma) |(P_{\mathbf{X}})_{jj} \beta^0_j|$ can be sufficiently large for detection since $a_{n,p;j}(\sigma)$ may be very large (as the inverse of the square root of the variance). The values of $P_{\mathbf{X}}$ can be easily computed for a given problem: our analysis of sufficient conditions for detection in Section \ref{subsec.detection} could be made more complete by invoking random matrix theory for the projection $P_{\mathbf{X}}$ (assuming that $\mathbf{X}$ is a realization of i.i.d. row-vectors whose entries are potentially dependent). However, currently, most of the results on singular values and similar quantities of $\mathbf{X}$ are for the regime $p \le n$ \citep{versh12}, which leads in our context to the trivial projection $P_{\mathbf{X}} = I$, or for the regime $p/n \to C$ with $0\le C <\infty$ \citep{elkar08}. \emph{Extensions.} Obvious but partially non-trivial model extensions include random design, non-Gaussian errors or generalized linear models. From a practical point of view, the second and third issues would be most valuable. Relaxing the fixed design assumption makes part of the mathematical arguments more complicated, yet a random design is better posed in terms of identifiability. \section{Appendix}\label{sec.supplement} \subsection{Proofs}\label{sec.proofs} \emph{Proof of Proposition \ref{prop1}.}\\ The statement about the bias is given in \citet{shadeng11} (proof of their Theorem 1). The covariance matrix of $\hat{\beta}$ is \begin{eqnarray*} \sigma^2 n^{-1} \Omega = \sigma^2 n^{-1} (\hat{\Sigma} + \lambda I)^{-1} \hat{\Sigma} (\hat{\Sigma} + \lambda I)^{-1}. \end{eqnarray*} Then, for the variance we obtain $\mbox{Var}(\hat{\beta}_j) = n^{-1} \sigma^2 \Omega_{jj} \ge n^{-1} \sigma^2 \Omega_{\mathrm{min}}(\lambda)$.\hfill$\Box$ \bigskip\noindent \emph{Proof of Proposition \ref{prop-repr}.}\\ We write \begin{eqnarray*} \hat{\beta}_{\mathrm{corr};j} = (\hat{\beta}_j - \mathbb{E}[\hat{\beta}_j]) + \theta^0_j - \sum_{k \neq j} (P_{\mathbf{X}})_{jk} \hat{\beta}_{\mathrm{init};k} + (\mathbb{E}[\hat{\beta}_j] - \theta^0_j). \end{eqnarray*} The result then follows by defining $Z_j = \hat{\beta}_j - \mathbb{E}[\hat{\beta}_j]$ and using that $\theta^0_j = (P_{\mathbf{X}} \beta^0)_j = (P_{\mathbf{X}})_{jj} \beta^0_j + \sum_{k \neq j} (P_{\mathbf{X}})_{jk} \beta^0_k$.\hfill$\Box$ \bigskip\noindent \emph{Proof of Proposition \ref{prop2} (basis for proving Theorem \ref{th1}).}\\ The bound from Proposition \ref{prop1} for the estimation bias of the Ridge estimator leads to: \begin{eqnarray*} \|a_{n,p} b(\lambda)\|_{\infty} &=& \max_{j=1,\ldots ,p} a_{n,p;j}(\sigma) |\mathbb{E}[\hat{\beta}_j] - \theta^0_j|\\ &\le & \max_{j=1,\ldots ,p} \frac{\lambda \|\theta^0\|_2 \lambda_{\mathrm{min} \neq 0}(\hat{\Sigma})^{-1}}{\sigma n^{-1/2} \Omega_{jj}^{1/2}} \nonumber\\ &\le&\lambda \|\theta^0\|_2 \lambda_{\mathrm{min} \neq 0}(\hat{\Sigma})^{-1} \sigma^{-1} n^{1/2} \Omega_{\mathrm{min}}(\lambda) ^{-1/2} . \end{eqnarray*} By using the representation from Proposition \ref{prop-repr}, invoking assumption (A') and assuming that the null-hypothesis $H_{0,j}$ or $H_{0,G}$ holds, respectively, the proof is completed.\hfill$\Box$ \bigskip\noindent \emph{Proof of Theorem \ref{th1}.}\\ Due to the choice of $\lambda = \lambda_n$ we have that $\|a_{n,p} b(\lambda_n)\|_{\infty} = o(1)\ (n \to \infty)$.
The proof then follows from Proposition \ref{prop2} and invoking assumption (A) saying that the probabilities for the statements in Proposition \ref{prop2} converge to 1 as $n \to \infty$. \hfill$\Box$ \bigskip\noindent \emph{Proof of Proposition \ref{prop3} (basis for proving Theorem \ref{th2}).}\\ Consider the set ${\cal E}$ where assumption (A') holds (whose probability is at least $\mathbb{P}[{\cal E}] \ge 1- \kappa$). Without loss of generality, we consider $P_j = 2\big(1 - \Phi(a_{n,p;j}(\sigma)|\hat{\beta}_{\mathrm{corr};j}| - \Delta_j)\big)$ without the truncation at value 1 (implied by the positive part $(a_{n,p;j}(\sigma)|\hat{\beta}_{\mathrm{corr};j}| - \Delta_j)_+$); in terms of decisions (rejection or non-rejection of a hypothesis), both versions for the p-value are equivalent. Then, on ${\cal E}$ and for $j \in S_0^c$: \begin{eqnarray*} P_j &=& 2\big(1 - \Phi(a_{n,p;j}(\sigma)|\hat{\beta}_{\mathrm{corr};j}| - \Delta_j) \big)\\ &\ge& 2 \Big(1 - \Phi (a_{n,p;j}(\sigma) \big |\hat{\beta}_{\mathrm{corr};j} - \sum_{k \neq j} (P_{\mathbf{X}})_{jk} (\hat{\beta}_{\mathrm{init};k} - \beta^0_k) \big| ) \Big) \\ &\ge & 2 \big(1 - \Phi(a_{n,p;j}(\sigma)|Z_j|) \big) - 2 (2\pi)^{-1/2} \|a_{n,p} b(\lambda)\|_{\infty}, \end{eqnarray*} where in the last inequality we used Proposition \ref{prop-repr} and Taylor's expansion. Thus, on ${\cal E}$: \begin{eqnarray*} \min_{j \in S_0^c} P_j &\ge& \min_{j \in S_0^c} 2 \big(1 - \Phi(a_{n,p;j}(\sigma)|Z_j|)\big) - 2(2 \pi)^{-1/2} \|a_{n,p} b(\lambda)\|_{\infty} \\ &\ge & \min_{j=1,\ldots ,p} 2 \big(1 - \Phi(a_{n,p;j}(\sigma)|Z_j|) \big) - 2(2 \pi)^{-1/2} \|a_{n,p} b(\lambda)\|_{\infty}. \end{eqnarray*} Therefore, \begin{eqnarray*} & &\mathbb{P}[\min_{j \in S_0^c} P_j \le c] \le \mathbb{P}[{\cal E} \cap \{\min_{j \in S_0^c} P_j \le c\}] + \mathbb{P}[{\cal E}^c] \\ &\le & \mathbb{P}[\min_{j=1,\ldots ,p} 2 \big(1 - \Phi(a_{n,p;j}(\sigma)|Z_j|) \big) \le c + 2(2 \pi)^{-1/2} \|a_{n,p}b(\lambda)\|_{\infty}] + \mathbb{P}[{\cal E}^c]\\ &=& F_Z\big(c + 2(2 \pi)^{-1/2} \|a_{n,p}b(\lambda)\|_{\infty} \big) + \mathbb{P}[{\cal E}^c]. \end{eqnarray*} Using this we obtain: \begin{eqnarray*} \mathbb{P}[V_{\alpha}>0] &=& \mathbb{P}[\min_{j \in S_0^c} P_{\mathrm{corr};j} \le \alpha] = \mathbb{P}[\min_{j \in S_0^c} P_j \le F_Z^{-1}(\alpha) - \zeta] \\ &\le& F_Z\big(F_Z^{-1}(\alpha) - \zeta + 2 (2\pi)^{-1/2} \|a_{n,p} b(\lambda)\|_{\infty} \big) + \mathbb{P}[{\cal E}^c]. \end{eqnarray*} This completes the proof.\hfill$\Box$ \bigskip\noindent \emph{Proof of Theorem \ref{th2}.}\\ Due to the choice of $\lambda = \lambda_n$ we have that $\|a_{n,p} b(\lambda_n)\|_{\infty} = o(1)\ (n \to \infty)$. Furthermore, using the formulation in Proposition \ref{prop3}, assumption (A) translates to a sequence of sets ${\cal E}_n$ with $\mathbb{P}[{\cal E}_n] \to 1\ (n \to \infty)$. We then use Proposition \ref{prop3} and observe that for sufficiently large $n$: $F_Z(F_Z^{-1}(\alpha) - \zeta + 2(2 \pi)^{-1/2} \|a_{n,p} b(\lambda_n)\|_{\infty}) \le F_Z(F_Z^{-1}(\alpha)) \le \alpha$. The modification for the case with $\alpha_n \to 0$ sufficiently slowly follows analogously: note that the second last inequality in the proof above follows by monotonicity of $F_Z(\cdot)$ and $\zeta > 2 (2 \pi)^{-1/2} \|a_{n,p} b(\lambda_n)\|_{\infty}$ for $n$ sufficiently large.
This completes the proof.\hfill$\Box$ \bigskip\noindent \emph{Proof of Theorem \ref{th.detection}.}\\ Throughout the proof, $\alpha_n \to 0$ is converging sufficiently slowly, possibly depending on the context of the different statements we prove. Regarding statement 1: it is sufficient that for $j \in S_0$, \begin{eqnarray*} a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| \gg \max(\Delta_j,1). \end{eqnarray*} From Proposition \ref{prop-repr} we see that this can be enforced by requiring \begin{eqnarray*} a_{n,p;j}(\sigma) \big(|(P_{\mathbf{X}})_{jj} \beta^0_j| - |\sum_{k \neq j} (P_{\mathbf{X}})_{jk} (\hat{\beta}_{\mathrm{init};k} - \beta^0_k)| - |Z_j| - |b_j(\lambda)| \big) \gg \max(\Delta_j,1). \end{eqnarray*} Since $|a_{n,p;j}(\sigma)\sum_{k \neq j} (P_{\mathbf{X}})_{jk} (\hat{\beta}_{\mathrm{init};k} - \beta^0_k)| \le \Delta_j$, this holds if \begin{eqnarray}\label{add-detect1} |\beta^0_j| \gg \frac{1}{|(P_{\mathbf{X}})_{jj}| a_{n,p;j}(\sigma)} \max(\Delta_j,a_{n,p;j}(\sigma) |Z_j|, a_{n,p;j}(\sigma) |b_j(\lambda)|,1). \end{eqnarray} Due to the choice of $\lambda = \lambda_n$ (as in Theorem \ref{th1}) we have $a_{n,p;j}(\sigma) |b_j(\lambda)| \le \|a_{n,p} b(\lambda)\|_{\infty} = o(1)$. Hence (\ref{add-detect1}) holds with probability converging to one if \begin{eqnarray*} |\beta^0_j| \gg \frac{1}{|(P_{\mathbf{X}})_{jj}| a_{n,p;j}(\sigma)} \max(\Delta_j,1), \end{eqnarray*} completing the proof for statement 1. For proving the second statement, we recall that \begin{eqnarray*} 1 - J_G(c) = \mathbb{P}[\max_{j \in G} \big(a_{n,p;j}(\sigma) |Z_j| + \Delta_j \big) > c]. \end{eqnarray*} Denote $W = \max_{j \in G} (a_{n,p;j}(\sigma) |Z_j| + \Delta_j)$ and note that $W \le \tilde{W} = \max_{j \in G} a_{n,p;j}(\sigma) |Z_j| + \max_{j \in G} \Delta_j$. Thus, \begin{eqnarray*} \mathbb{P}[W > c] \le \mathbb{P}[\tilde{W} > c]. \end{eqnarray*} Therefore, the statement for the p-value $\mathbb{P}[P_G \le \alpha_n]$ is implied by \begin{eqnarray}\label{add-detect2} \mathbb{P}_{\tilde{W}}[\tilde{W} > \hat{\gamma}_G] \le \alpha_n. \end{eqnarray} Using the union bound and the fact that $a_{n,p;j}(\sigma) Z_j \sim {\cal N}(0,1)$ (but dependent over different values of $j$), we have that \begin{eqnarray*} \max_{j \in G} a_{n,p;j}(\sigma) |Z_j| = O_P(\sqrt{\log(|G|)}). \end{eqnarray*} Therefore, (\ref{add-detect2}) holds if \begin{eqnarray*} \hat{\gamma}_G = \max_{j \in G} a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| \gg \max(\max_{j \in G} \Delta_j,\sqrt{\log(|G|)}). \end{eqnarray*} The argument is now analogous to the proof of the first statement above, using the representation from Proposition \ref{prop-repr}. Regarding the third statement, we invoke the rough bound \begin{eqnarray*} P_{\mathrm{corr};j} \le p P_j, \end{eqnarray*} with the non-truncated Bonferroni corrected p-value at the right-hand side. Hence, \begin{eqnarray*} \max_{j \in S_0} P_{\mathrm{corr};j} \le \alpha_n \end{eqnarray*} is implied by \begin{eqnarray*} \max_{j \in S_0} pP_j = \max_{j \in S_0} 2p \big(1 - \Phi((a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| - \Delta_j)_+)\big) \le \alpha_n. \end{eqnarray*} Since this involves a standard Gaussian two-sided tail probability, the inequality can be enforced (for certain slowly converging $\alpha_n$) by \begin{eqnarray*} \max_{j \in S_0} 2\exp \big(\log(p) - (a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| - \Delta_j)_+^2/2 \big) = o_P(1). \end{eqnarray*} The argument is now analogous to the proof of the first statement above, using the representation from Proposition \ref{prop-repr}.
The fourth statement involves slight, obvious modifications of the arguments above.\hfill$\Box$ \subsection{P-values for $H_{0,G}$ with $|G|$ large}\label{sec.addsimul} We report here on a small simulation study for testing $H_{0,G}$ with $G = \{1,2,\ldots ,100\}$. We consider model (M2) from Section \ref{subsec.simul} with 4 different configurations, and we use the p-value from (\ref{pvalue2}) with the corresponding decision rule of rejecting $H_{0,G}$ if the p-value is smaller than or equal to the nominal level 0.05. Table \ref{tab-supp2} describes the result based on 500 independent simulations (where the fixed design remains the same). \begin{table}[!htb] \begin{tabular}{l|ccc} model & $\mathbb{P}[\mbox{false rejection}]$ & $\mathbb{P}[\mbox{true rejection}]$ & (power mult., power indiv.)\\ \hline (M2), $p=500$, $s=3$, $b=0.5$ & 0.00 & 0.10 & (0.01,1.00)\\ (M2), $p=500$, $s=3$, $b=1$ & 0.00 & 0.91 & (0.37,1.00)\\ (M2), $p=2500$, $s=3$, $b=0.5$ & 0.01 & 0.02 & (0.00,1.00)\\ (M2), $p=2500$, $s=3$, $b=1$ & 0.00 & 0.83 & (0.17,1.00) \end{tabular} \caption{Testing of general hypothesis $H_{0,G}$ with $|G| = 100$ using the p-value in (\ref{pvalue2}) with significance level $0.05$. Second column: type I error; Third column: power; Fourth column: comparison with power using multiple individual testing and average power using individual testing without multiplicity adjustment (both for all $p$ hypotheses $H_{0,j}\ (j=1,\ldots ,p)$).}\label{tab-supp2} \end{table} The method works well, with much better power than multiple testing of individual hypotheses, but with lower power than the average power for testing individual hypotheses without multiplicity adjustment (which is not a proper approach). This is largely in agreement with the theoretical results in Theorem \ref{th.detection}. Furthermore, the type I error control is good. \subsection{Number of false positives in simulated examples}\label{sec.falsepos} We show here the number of false positives $V = V_{0.05}$ in the simulated scenarios where the FWER (among individual hypotheses) was found too large. \begin{table}[!htb] \begin{tabular}{l|cccccc} model & $\mathbb{P}[V=0]$ & $\mathbb{P}[V=1]$ & $\mathbb{P}[V=2]$ & $\mathbb{P}[V = 3]$ & $\mathbb{P}[V =4]$ & $\mathbb{P}[V \ge 5]$ \\ \hline (M2), $p=500$, $s=15$, $b=1$ & 0.482 & 0.336 & 0.138 & 0.028 & 0.010 & 0.006 \\ (M2), $p=500$, $s=15$, $b=0.5$ & 0.746 & 0.218 & 0.034 & 0.000 & 0.002 & 0.000 \\ (M2), $p=2500$, $s=15$, $b=1$ & 0.012 & 0.044 & 0.098 & 0.126 & 0.172 & 0.548 \\ (M2), $p=2500$, $s=15$, $b=0.5$ & 0.504 & 0.328 & 0.132 & 0.032 & 0.004 & 0.000 \end{tabular} \caption{Probabilities for false positives for simulation models from Section \ref{subsec.simul} in scenarios where the FWER is clearly overshooting the nominal level $0.05$.}\label{tab-supp.1} \end{table} Although the FWER is larger than 0.05, the number of false positives is relatively small, except for the extreme model (M2), $p=2500$, $s=15$, $b=1$, whose sparsity is too large and whose signal strength is too strong. For the latter model, we would need to increase $\xi$ in (\ref{bound1}) to achieve better error control. \subsection{Further discussion about p-values and bounds $\Delta_j$ in assumption (A)}\label{subsec.outlookbound} The p-values in (\ref{pvalue1}) and (\ref{pvalue2}) are crucially based on the idea of correction with the bounds $\Delta_j$ in Section \ref{subsec.Delta}.
The essential idea is contained in Proposition \ref{prop-repr}: \begin{eqnarray*} & &a_{n,p;j}(\sigma)\hat{\beta}_{\mathrm{corr};j}\\ &=& a_{n,p;j}(\sigma)(P_{\mathbf{X}})_{jj} \beta^0_j - a_{n,p;j}(\sigma) \sum_{k\neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k) + a_{n,p;j}(\sigma) Z_j + \mbox{negligible term}. \end{eqnarray*} There are three cases. If \begin{eqnarray}\label{case1} a_{n,p;j}(\sigma) \sum_{k\neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k) = o_P(1), \end{eqnarray} a correction with the bound $\Delta_j$ would not be necessary, but of course, it does not hurt in terms of type I error control. If \begin{eqnarray}\label{case2} a_{n,p;j}(\sigma) \sum_{k\neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k) \asymp V, \end{eqnarray} for some non-degenerate random variable $V$, the correction with the bound $\Delta_j$ is necessary and, assuming that $\Delta_j$ is of the same order of magnitude as $V$, we have a balance between $\Delta_j$ and the stochastic term $a_{n,p;j}(\sigma) Z_j$. In the last case where \begin{eqnarray}\label{case3} a_{n,p;j}(\sigma) \sum_{k\neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k) \to \infty, \end{eqnarray} the bound $\Delta_j$ would be the dominating element in the p-value construction. We show in Figure \ref{fig-supp1} that there is empirical evidence that (\ref{case2}) applies most often. \begin{figure}[htb!] \centerline{ \includegraphics[scale=0.47]{projection3}} \caption{Histogram of projection bias $a_{n,p;j}(\sigma)\sum_{k\neq j} (P_{\mathbf{X}})_{jk}(\hat{\beta}_{\mathrm{init};k} - \beta^0_k)$ over all values $j=1,\ldots,p$ and over 100 independent simulation runs. Left: model (M2), $p=2500$, $s=3$, $b=1$; Right: model (M2), $p=2500$, $s=15$, $b=1$.}\label{fig-supp1} \end{figure} Case (\ref{case3}) is comparable to a crude procedure which makes a hard decision about relevance of the underlying coefficients: \begin{eqnarray*} \mbox{if}\ a_{n,p;j}(\sigma) |\hat{\beta}_{\mathrm{corr};j}| > \Delta_j\ \mbox{holds, then $H_{0,j}$ is rejected}, \end{eqnarray*} and the rejection would be ``certain'', corresponding to a p-value equal to $0$; in case of a ``$\le$'' relation, the corresponding p-value would be set to one. This is an analogue of the thresholding rule: \begin{eqnarray}\label{hard-rule} \mbox{if}\ |\hat{\beta}_{\mathrm{init};j}| > \Delta_{\mathrm{init}}\ \mbox{holds, then $H_{0,j}$ is rejected}, \end{eqnarray} where $\Delta_{\mathrm{init}} \ge \|\hat{\beta}_{\mathrm{init}} - \beta^0\|_{\infty}$, e.g. using the cruder bound $\Delta_{\mathrm{init}} \ge \|\hat{\beta}_{\mathrm{init}} - \beta^0\|_{1}$. For example, (\ref{hard-rule}) could be the variable selection estimator with the thresholded Lasso procedure \citep{geer11}. An accurate construction of $\Delta_{\mathrm{init}}$ for practical use is almost impossible: it depends on $\sigma$ and in a complicated way on the nature of the design through e.g. the compatibility constant, see (\ref{oracle-ineq}). Our proposed bound $\Delta_j$ in (\ref{bound1}) is very simple. In principle, its justification also depends on a bound for $\|\hat{\beta}_{\mathrm{init}} - \beta^0\|_{1}$, but with the advantage of ``robustness''. First, the bound $a_{n,p;j}(\sigma) \max_{k \neq j}|(P_{\mathbf{X}})_{jk}| \|\hat{\beta}_{\mathrm{init}} - \beta^0\|_1$ appearing in (\ref{crudebound}) does not depend on $\sigma$ anymore (since $\|\hat{\beta}_{\mathrm{init}} - \beta^0\|_1$ scales linearly with $\sigma$).
Secondly, the inequality in (\ref{crudebound}) is crude, implying that $\Delta_j$ in (\ref{bound1}) may still satisfy assumption (A) even if the bound on $\|\hat{\beta}_{\mathrm{init}} - \beta^0\|_1$ is misspecified and too small. The construction of p-values as in (\ref{pvalue1}) and (\ref{pvalue2}) is much better for practical purposes (and for simulated examples) than using a rule as in (\ref{hard-rule}). \section*{Acknowledgments} I would like to thank Cun-Hui Zhang for fruitful discussions and Stephanie Zhang for providing an R-program for the Scaled Lasso. \bibliographystyle{chicago}
\section{Introduction} Cloud computing enables industries to develop and deploy highly available and scalable applications to provide affordable and on-demand access to compute and storage resources. Server virtualization in the form of virtual machines (VMs) is an essential part of cloud computing technology for providing infrastructure-as-a-service (IaaS) with the use of a hypervisor or Virtual Machine Monitor (VMM)~\cite{6530588}. Users can then deploy their applications on these VMs with only the required resources. This allows the efficient usage of the physical hardware and reduces the overall cost. The virtualization layer, especially the hypervisors, is prone to temporary hardware errors caused by manufacturing defects, a sudden increase in CPU utilization caused by some task, or disconnection of externally mounted storage devices, etc. The VMs running on these VMMs are then susceptible to errors from the underlying stack and, as a result, the performance of the applications running on these VMs can be impacted~\cite{10.1145/1952682.1952692, Li2008UnderstandingTP}. Figure~\ref{fig_example_motivation} shows an example propagation of anomalies in a virtualization stack using a type-1 hypervisor to the VMs hosted on it. These anomalies may lead to the failure of all VMs and, ultimately, the applications hosted on them. \begin{figure}[t] \centerline{\includegraphics[width=0.5\linewidth]{figures/motivation.pdf}} \caption{An example showcasing the propagation of anomalies in a Type-1 hypervisor or VMM to the virtual machines (VMs) hosted on it.} \label{fig_example_motivation} \end{figure} In the development environment, these anomalous VMMs are relatively easily detectable by analyzing the logs from the hypervisor dumps. But in a production environment running on the cloud, anomalous VMM detection is a challenge since a cloud user does not have access to the VMM logs. Additionally, many anomalous VMM detection techniques have been proposed~\cite{6957243, 10.1145/339647.339652, Nikolai2014HypervisorbasedCI}. However, these works either require the monitoring data of the hypervisor or inject custom probes into the hypervisor. Therefore, the usage of such solutions becomes infeasible for cloud users. Furthermore, due to the low downtime requirements for the applications running on the cloud, the detection of such anomalous VMMs and their resolution must be done as quickly as possible. Therefore, this paper addresses the challenge of detecting anomalous VMMs \textit{solely by using the resource utilization data of the VMs hosted on those VMMs}, by creating a novel algorithm called \textbf{IAD}: \textbf{I}ndirect \textbf{A}nomalous VMMs \textbf{D}etection. We call the algorithm indirect since the detection must be done without any internal knowledge or data from the VMM; it should be based solely on the data of the virtual machines hosted on it. The key contributions are: \begin{itemize} \item We present a novel online machine-learning-based algorithm \textbf{IAD} for accurate and efficient detection of anomalous VMMs by solely using the resource utilization data of the VMs hosted on them as the main metric (\S\ref{sec:iad_algorithm}). \item We evaluate the performance of \textit{IAD} on two different aspects: 1) Anomalous VMMs finding accuracy (\S\ref{sec:est_time_accuracy}), and 2) Anomalous VMMs finding efficiency and scalability (\S\ref{sec:config_finding_efficency_scalability}), and compare it against five other popular algorithms which can also be applied to some extent to the described problem.
\item We evaluate the \textit{IAD} algorithm and the five other popular algorithms on synthetic and two real datasets. \end{itemize} \textit{Paper Organization:} Section~\ref{sec:problem_statement} describes the overall problem statement addressed in this paper along with an illustrative example. The design and details of the proposed \textit{IAD} algorithm are presented in Section~\ref{sec:iad_algorithm}. Section~\ref{sec:exp_settings} provides experimental configuration details along with the algorithms and the datasets used in this work for evaluation. In Section~\ref{sec:results}, the evaluation results are presented. Finally, Section~\ref{sec:conclusion} concludes the paper and presents an outlook. \section{Problem Definition} \label{sec:problem_statement} This section presents the overall problem definition of indirectly detecting anomalous VMMs in a cloud-based environment. Table~\ref{tab1:symbols} shows the symbols used in this paper. \begin{table}[t] \caption{Symbols and definitions.}\label{tab1:symbols} \begin{tabular*}{\textwidth}{l @{\hskip 0.1in} l} \hline \textbf{Symbol} & \textbf{Interpretation}\\ \hline $n$ & Number of time ticks in data \\ $d$ & Number of virtual machines hosted on a VMM \\ $X_t$ & The percentage utilization of a resource (for example, CPU \\ & or disk usage) by a VM at a time $t$ \\ $X_{t}^j$ & The percentage utilization of a resource at a time $t$ for the $j^{th}$ VM \\ $\{c_{t}^1, c_{t}^2, ..., c_{t}^m\}$ & A set of $m \leq d$ VMs with a change point at time tick $t$\\ $w$ & Window size\\ minPercentVMsFault & Minimum \% of the total number of VMs on a VMM which must\\ & have a change point for classifying the VMM as anomalous. \\ \hline \end{tabular*} \end{table} We are given an $n \times d$ dataset $X$, with $n$ representing the number of time ticks and $d$ the number of virtual machines hosted on a VMM. $X_{t}^j$ denotes the percentage utilization of a resource (for example, CPU or disk usage) at a time $t$ for the $j^{th}$ VM. Our goal is to detect whether the VMM on which the $d$ virtual machines are hosted is anomalous or not. Formally: \begin{problem}{ (Indirect Anomalous VMM Detection) } \begin{itemize} \item \textbf{Given} \textit{a multivariate dataset of $n$ time ticks, with $d$ virtual machines ($X_{t}^j$ for $j=\{1,\cdots,d\}$ and $t = \{1,\cdots , n\}$) representing the CPU utilization observations of VMs hosted on a VMM}. \item \textbf{Output} \textit{a subset of time ticks or a time tick where the behavior of the VMM is anomalous}. \end{itemize} \end{problem} One of the significant challenges in this problem is the online detection, in which we receive the data incrementally, one time tick for each VM at a time, i.e., $X_{1}^j, X_{2}^j, \cdots$, for the $j^{th}$ VM. As we receive the data, the algorithm should output the time ticks where the behavior of the VMM is observed as anomalous. However, without looking at the next few time ticks after time $t$, it would be impractical to determine whether the VMM is anomalous at time point $t$, since the time ticks ${t + 1, t + 2, \cdots}$ are essential in deciding whether an apparent detection at time $t$ was actual or simply noise. Hence, we introduce a window parameter $w$: upon receiving a time tick $t + w$, the algorithm outputs whether the VMM showed anomalous behavior at time $t$ or not.
Additionally, as the change points for the VMs hosted on a VMM could be spread over a specific duration, due to the actual fault propagating to the VMs and the granularity of the collected monitoring data, using an appropriate window size provides a way of capturing those change points. \subsection{Illustrative Example} \begin{figure}[t] \centerline{\includegraphics[width=0.55\linewidth]{figures/example.png}} \caption{Examples showing the CPU utilization of two virtual machines hosted on a VMM. The left sub-figure shows an application running only on VM 2, while the right sub-figure shows the application running on both VMs. We can see a significant decrement in the CPU utilization of the two VMs when an anomaly (high CPU load) is generated on the VMM (shown by dotted red lines).} \label{fig_example_problem} \end{figure} Here we illustrate the problem with two examples in Fig.~\ref{fig_example_problem} showcasing the CPU utilization of two virtual machines hosted on a VMM. In the left sub-figure, an application is running only on VM 2, while in the right, an application is running on both VMs. During the application run time, an anomaly, i.e., a high CPU load, was generated on the hypervisor for some time (shown by dotted red lines). During this time, we can observe a significant drop in the CPU utilization by the application (affecting the performance of the application) of the two VMs (especially when an application is running on the VM). The load on a VMM affects all or most of the VMs hosted on it, which ultimately can significantly affect the performance of the applications running on the two VMs; therefore, we call such a VMM anomalous when the load was generated on it. \section{Indirect Anomaly Detection (IAD) Algorithm} \label{sec:iad_algorithm} This section presents our proposed Indirect Anomaly Detection (IAD) algorithm along with the implemented system for evaluating it. The overall system workflow diagram is shown in Figure~\ref{fig_iad_overall_workflow} and mainly consists of two parts: the main \textit{IAD Algorithm} and the \textit{Test Module} for evaluating the algorithm. \begin{figure}[t] \centerline{\includegraphics[width=0.8\linewidth]{figures/iad.png}} \caption{High-level system workflow of the implemented system for evaluating the IAD algorithm and the interaction between its components in a general use case.} \label{fig_iad_overall_workflow} \end{figure} \subsection{IAD Algorithm} Our principal intuition behind the algorithm is that if a time tick $t$ represents a change point for some resource utilization (such as CPU utilization) in most VMs hosted on a VMM, then the VMM is also anomalous at that time tick. This is based on the fact that a fault in a VMM will affect most of the VMs hosted on it, and therefore those VMs would observe a change point at a similar point in time (in the chosen window $w$ (Table~\ref{tab1:symbols})) in their resource utilization. The IAD algorithm consists of two main parts, described below: \subsubsection{Change Points Detector}: We first explain how a change point, i.e., a time tick where the time series changes significantly, is calculated. Recall from §\ref{sec:problem_statement} that we have introduced a window parameter $w$; upon receiving the time tick $t+w$, the \textit{Change Points Detector} outputs whether the time tick $t$ is a change point or not. Given a dataset $X^j$ of size $w$ for the $j^{th}$ VM, this component is responsible for finding the change points in that VM.
This can be calculated in two ways: a Mean-based detector and a Z-score-based detector. \begin{itemize} \item \textbf{Mean-based Detector}: In this detector, a $windowed\_mean$, i.e., the mean of all the values in the window, and the $global\_mean$, i.e., the mean of all the values until the current time tick, are calculated. Since the IAD algorithm is designed to run in an online way, not all the values can be stored. Thus the $global\_mean$ is calculated using Knuth’s algorithm~\cite{10.5555/270146, doi:10.1080/01621459.1974.10480219}. We then calculate the absolute percentage difference between the two means: $windowed\_mean$ and $global\_mean$. If the percentage difference is greater than the specified threshold (by default 5\%), then the time tick $t$ for the $j^{th}$ VM is regarded as a change point. \item \textbf{Z-score-based Detector}: This detector is based on the calculation of Z-scores~\cite{zscore, 9071555}. Similar to the Mean-based detector, here also a $windowed\_mean$, i.e., the mean of all the values in the window, and the $global\_mean$, i.e., the mean of all the values until the current time tick, are calculated. We additionally calculate the $global\_stand\_deviation$, i.e., the standard deviation of all the values until the current time tick. Since the IAD algorithm is designed to run in an online way, the $global\_stand\_deviation$ is calculated using Welford's method~\cite{doi:10.1080/01621459.1974.10480219}. These statistics are then used for the calculation of the Z-scores for all the data points in the window using Equation \ref{eq:1}. \begin{equation} z\_scores = \frac{(windowed\_mean - global\_mean)}{\frac{global\_stand\_deviation}{\sqrt{w}}}\label{eq:1} \end{equation} If the Z-scores of all windowed observations are greater than the defined threshold (3 $\times$ $global\_stand\_deviation$), then the time tick $t$ for the $j^{th}$ VM is regarded as a change point. \end{itemize} In the main algorithm, only the \textit{Z-score-based Detector} is used, as it provides higher accuracy and has fewer false positives. \subsubsection{Anomaly Detector} This component receives the input resource utilization data $X$ of size $n \times d$, where $d$ is the number of VMs hosted on a VMM, along with the \texttt{minPercentVMsFault} (Table~\ref{tab1:symbols}) input parameter. We first perform two sanity checks on the input timeseries of length $w$: 1) that they are not zero-length, and 2) that the input timeseries of all VMs are of the same length. If either check fails, we quit and do not proceed further; we assume that all the VMs' resource utilization data are of the same length. After the initial checks, each VM's windowed timeseries belonging to the VMM is sent to the \textit{Change Points Detector} for the detection of whether the time tick $t$ is a change point or not. If the percentage of VMs ($\{c_{t}^1, c_{t}^2, ..., c_{t}^m\}$ out of $d$) having a change point at time tick $t$ is greater than the \texttt{minPercentVMsFault} input parameter, then the VMM is reported as anomalous at time tick $t$. The above procedure is repeated for all time ticks. Figure~\ref{fig_iad_workflow} shows the workflow sequence diagram of the IAD algorithm. Furthermore, the developed approach can be applied to multiple VMMs as well.
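To make the preceding description concrete, the following Python sketch is a minimal re-implementation of ours (it is not the authors' released code, and names such as \texttt{ZScoreDetector} and \texttt{iad} are ours). It follows Equation \ref{eq:1}, computing a single Z-score for the window mean, and applies the VMM-level vote with \texttt{minPercentVMsFault}:
\begin{verbatim}
import numpy as np

class ZScoreDetector:
    """Online Z-score change-point detector for one VM (Welford's method)."""
    def __init__(self, w, k=3.0):
        self.w, self.k = w, k
        self.n, self.mean, self.m2 = 0, 0.0, 0.0  # running count/mean/M2
        self.window = []

    def step(self, x):
        # Welford update of the global mean and (unnormalized) variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        self.window.append(x)
        if len(self.window) < self.w:
            return False                 # window not yet full
        g_std = np.sqrt(self.m2 / max(self.n - 1, 1))
        se = g_std / np.sqrt(self.w)
        z = (np.mean(self.window) - self.mean) / se if se > 0 else 0.0
        self.window.pop(0)               # slide window by one tick
        return abs(z) > self.k * g_std   # paper's threshold: 3 x global std

def iad(X, w=60, min_percent_vms_fault=0.9):
    """X: n x d matrix of CPU utilization; returns flagged time ticks."""
    n, d = X.shape
    detectors = [ZScoreDetector(w) for _ in range(d)]
    anomalous_ticks = []
    for t in range(n):
        votes = sum(det.step(X[t, j]) for j, det in enumerate(detectors))
        if votes / d >= min_percent_vms_fault:
            anomalous_ticks.append(t - w + 1)  # decision is for window start
    return anomalous_ticks
\end{verbatim}
With the default settings used later in Section \ref{sec:exp_settings} ($w = 60$ samples, \texttt{minPercentVMsFault} $= 90\%$), calling \texttt{iad(X)} on an $n \times d$ CPU-utilization matrix returns the time ticks at which the VMM is flagged as anomalous.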
\begin{figure}[t] \centerline{\includegraphics[width=0.6\linewidth]{figures/iad_flow.png}} \caption{Indirect Anomaly Detection (IAD) algorithm workflow sequence diagram.} \label{fig_iad_workflow} \end{figure} \subsection{Test Module} This component is responsible for generating the synthetic data and evaluating the algorithm's performance by calculating the F1-score on the results from the algorithm. It consists of the multiple sub-components described below: \begin{itemize} \item \textbf{Synthetic Data Generator}: It takes the number of VMMs, the number of VMs per VMM, and the percentage of the VMs with a fault as input for generating synthetic timeseries data. This synthetic data follows a Gaussian distribution based on the input parameters. This component also automatically divides the generated data into true positive and true negative labels based on the percentage-of-VMs-with-a-fault parameter. \item \textbf{Algorithm Tester}: It is responsible for invoking the algorithm with various parameters on the synthetic data and tuning the algorithm's hyperparameters. \item \textbf{Evaluation}: The results from the algorithm are passed as the input to this sub-component, where the results are compared with the actual labels, and the overall algorithm score in terms of F1-score is reported. \end{itemize} \section{Experimental Settings} \label{sec:exp_settings} We design our experiments to answer the following questions: \textbf{Q1. Indirect Anomaly Detection Accuracy}: how accurate is IAD in the detection of anomalous VMMs when compared to other popular algorithms? \textbf{Q2. Anomalous VMMs finding efficiency and scalability}: How does the algorithm scale with the increase in the number of data points and the number of VMs? \subsection{Datasets} \label{sec:evaluated_datasets} For evaluating the IAD algorithm, we considered the four types of datasets listed in Table~\ref{tab1:datasets}, which are described below: \textbf{Synthetic:} This is the artificially generated dataset using the \textit{Test Module} component described in \S\ref{sec:iad_algorithm}. \textbf{Experimental-Synthetic Merged:} This is a dataset with a combination of experimental data and synthetic data. We created two nested virtual machines on a VM in the Google Cloud Platform to collect the experimental dataset. The underlying VM instance type is n1-standard-4 with four vCPUs and 15 GB of memory, and Ubuntu 18.04 OS was installed on it. This VM instance acts as a host for the above VMs. The \textit{libvirt} toolkit is used to manage and create nested virtualization on top of the host machine. A Kernel-based Virtual Machine (KVM) is used as the VMM. The configurations of the two nested VMs are i) 2 vCPUs and 2GB memory, and ii) 1 vCPU and 1GB memory. Cloud-native web applications were run on these two VMs. Monitoring data from the two VMs and the underlying host is exported using the Prometheus agent deployed on each of them to an external virtual machine. \textit{stress-ng} is used for generating the load on the VMM. Based on this infrastructure, we collected a dataset for various scenarios and combined it with the synthetic data. \textbf{Azure Dataset:} This dataset is based on the publicly available cloud traces data from Azure~\cite{10.1145/3132747.3132772}. We used the VM data from it and created random groups of VMs, with each group representing the VMs hosted on a VMM.
Afterward, we feed these timeseries groups into our synthetic data generator, which randomly increases or decreases the CPU utilization of the VMs within a VMM based on the input parameters to create anomalous and non-anomalous VMMs. \textbf{Alibaba Dataset:} This dataset is based on the publicly available cloud traces and metrics data from Alibaba cloud~\cite{10.5555/3291168.3291175}. A method similar to that used for the \textit{Azure Dataset} was applied to form this dataset. Figure~\ref{datasets_examples} shows an example profile of an anomalous VMM for all the datasets. \begin{table}[t] \centering \caption{Datasets used in this work for evaluating the algorithms.}\label{tab1:datasets} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Dataset} & \textbf{Anomalous}& \textbf{Non-Anomalous}& \textbf{VMs} &\textbf{TimeTicks} \\ \textbf{Name} & \textbf{VMMs}& \textbf{VMMs}& \textbf{Per VMM} &\textbf{per VM} \\ \hline Synthetic & 5 & 5 & 10 & 1000 \\ Exp-Synthetic Merged & 42 & 17 & 2 (experimental) & 5400\\ & & & 8 (synthetic) & \\ Azure\textsuperscript{$\dagger$}~\cite{10.1145/3132747.3132772} & 16 & 10 & 10 & 5400 \\ Alibaba\textsuperscript{$\dagger$}~\cite{10.5555/3291168.3291175} & 10 & 10 & 10 & 5400 \\ \hline \end{tabular} {\raggedright \textsuperscript{$\dagger$}These are modified for our use case. \par} \end{table} \begin{figure*}[t] \begin{subfigure}{.245\textwidth} \centering \includegraphics[width=1\linewidth]{figures/synthetic-dataset.pdf} \captionof{figure}{Synthetic} \label{fig:synthetic_dataset} \end{subfigure}% \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{figures/exp-synthetic-dataset.pdf} \captionof{figure}{Exp-Synthetic} \label{fig:exp_synthetic_dataset} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{figures/microsoft_dataset.pdf} \captionof{figure}{Azure} \label{fig:azure_dataset} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{figures/alibaba_dataset.pdf} \captionof{figure}{Alibaba} \label{fig:alibaba_dataset} \end{subfigure} \caption{An example profile of an anomalous VMM having 10 VMs in all the datasets used in this work for evaluation.} \label{datasets_examples} \end{figure*} \subsection{Evaluated Algorithms} \label{sec:algos_evaluated} We compare IAD to the five other algorithms listed in Table~\ref{tab1:algos_used}, along with their input dimensions and parameters. ECP is a non-parametric change detection algorithm that uses the E-statistic, a non-parametric goodness-of-fit statistic, with hierarchical division and dynamic programming for finding the change points~\cite{james2013ecp}. BnB (Branch and Border) and its online version (BnBO) are also non-parametric change detection methods that can detect multiple changes in multivariate data by separating points before and after the change using an ensemble of random partitions~\cite{Hooi2019BranchAB}. Lastly, we use the popular anomaly detection algorithm isolation forest for detecting anomalous VMMs~\cite{4781136}. The primary isolation forest (IF) works on the input data directly, while we also created a modified version of it called isolation forest features (IFF), which first calculates several features such as the mean, standard deviation, etc., for all values within a window on the input dataset and then applies isolation forest on them. The downside of IF and IFF is that they require training.
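As a concrete illustration of how the synthetic traces above are constructed (and of the fault injection applied to the Azure and Alibaba groups), the following minimal Gaussian generator is a sketch of ours; the parameter names are hypothetical and the actual \textit{Test Module} implementation may differ:
\begin{verbatim}
import numpy as np

def generate_vmm_traces(n_ticks=1000, n_vms=10, frac_faulty=1.0,
                        t_fault=500, base=40.0, shift=15.0,
                        noise=3.0, seed=0):
    """Gaussian CPU-utilization traces (n_ticks x n_vms) for one VMM.
    A step change at t_fault is injected into a fraction frac_faulty
    of the VMs; frac_faulty=0 yields a non-anomalous VMM."""
    rng = np.random.default_rng(seed)
    X = base + noise * rng.standard_normal((n_ticks, n_vms))
    k = int(round(frac_faulty * n_vms))
    X[t_fault:, :k] += shift  # VMM-level fault propagated to k VMs
    label = k / n_vms >= 0.9  # anomalous under the 90% vote threshold
    return X, label
\end{verbatim}
Feeding such traces to the detector sketched in Section \ref{sec:iad_algorithm} should reproduce the qualitative behavior of the synthetic profile in Figure \ref{datasets_examples}.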
\subsection{Other Settings} We have used the F1-score (denoted as F1) to evaluate the performance of the algorithms. Evaluation tests have been executed on a 2.6 GHz 6-Core Intel Core i7 MacBook Pro, 32 GB RAM, running macOS Big Sur version 11. We implement our method in Python. For our experiments, hyper-parameters are set as follows. The window size $w$ is set as 1 minute (60 samples, with sampling done per second), the threshold $k$ as 5\%, and percentVMsFault $f$ as 90\%. However, we also show experiments on parameter sensitivity in this section. \begin{table}[t] \centering \caption{The details of the algorithms used in this work for evaluation, along with their input dimension and parameters.}\label{tab1:algos_used} \begin{tabular}{|l|c|l|} \hline \textbf{Algorithm} & \textbf{Input Dimension}& \textbf{Parameters}\\ \hline\hline IAD & n $\times$ d & w, minPercentVMsFault \\ ECP~\cite{james2013ecp} & n $\times$ d & change points, Min. points b/w change points \\ BNB~\cite{Hooi2019BranchAB} & n $\times$ d & w, number of trees, threshold for change points \\ BNBOnline~\cite{Hooi2019BranchAB} & n $\times$ d & w, number of trees, threshold for change points \\ IF~\cite{4781136} & n $\times$ d & contamination factor \textcolor{red}{(requires training)} \\ IFF~\cite{4781136} & n $\times$ features & contamination factor \textcolor{red}{(requires training)} \\ \hline \end{tabular} \end{table} \section{Results} \label{sec:results} Our initial experiments showed that 1) the CPU metric is the most affected and most visible parameter in the VMs when some load is generated on the VMM; and 2) all or most VMs are affected when a load is introduced on the VMM. \subsection{Q1. Indirect Anomaly Detection Accuracy} \label{sec:est_time_accuracy} Table~\ref{tab1:accuracy_score} shows the best F1-score corresponding to each algorithm evaluated in this work (\S\ref{sec:algos_evaluated}) on all the datasets (\S\ref{sec:evaluated_datasets}). We can observe that the \textit{IAD} algorithm outperforms the others on two of the four datasets; on the Experiment-Synthetic dataset BNB performed best (F1-score of \texttt{0.90}), and on the Alibaba dataset IFF performed best (F1-score of \texttt{0.66}). However, if one wants an algorithm that performs well on all the datasets (Average F1-score column in Table~\ref{tab1:accuracy_score}), then the \textit{IAD} algorithm outperforms all the others with an average F1-score of \texttt{0.837} across all datasets. \begin{table}[t] \centering \caption{F1-score corresponding to each algorithm evaluated in this work (\S\ref{sec:algos_evaluated}) on all the datasets (\S\ref{sec:evaluated_datasets}).}\label{tab1:accuracy_score} \begin{tabular}{|l|c|c|c|c|c|} \hline \textbf{Algorithm} & \textbf{Synthetic}& \textbf{Exp-Synthetic}& \textbf{Azure}& \textbf{Alibaba} & \textbf{Average F1-score}\\ \hline\hline IAD & \textbf{0.96} & 0.86 & \textbf{0.96} &0.57 & \textbf{0.837} \\ ECP & 0.67 & - & 0.76 &0.51 &0.64\\ BNB & 0.62 & \textbf{0.90} & 0.8 &0.33 &0.662\\ BNBOnline & 0.87 & 0.81 & 0.86 &0.4 &0.735\\ IF & 0.76 & 0.83 & 0.76 & 0.2 &0.637\\ IF Features (IFF) & 0.76 & 0.83 & 0.76 & \textbf{0.66} &0.75 \\ \hline \end{tabular} \end{table} Furthermore, we present the detailed results of the algorithms on all four datasets varying with the number of VMs, shown in Figure~\ref{algorithms_f1_results}. One can observe that \textit{IAD} performs well across all the datasets, and its accuracy increases with the increase in the number of VMs.
Additionally, after a certain number of VMs, the F1-score of \textit{IAD} becomes stable. This shows that, for the synthetic and Azure datasets, the best performance is achieved with at least \texttt{9} VMs, while the Exp-Synthetic dataset requires at least five VMs and the Alibaba dataset seven VMs for the algorithm to perform well. \begin{figure*}[t] \centering \begin{subfigure}{.35\textwidth} \centering \includegraphics[width=1\linewidth]{figures/synthetic-dataset-f1.pdf} \captionof{figure}{Synthetic} \label{fig:synthetic_f1} \end{subfigure}% \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=1\linewidth]{figures/exp_synthetic_dataset_f1.pdf} \captionof{figure}{Exp-Synthetic} \label{fig:exp_synthetic_f1} \end{subfigure} \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=1\linewidth]{figures/microsoft_dataset_f1.pdf} \captionof{figure}{Azure} \label{fig:azure_f1} \end{subfigure} \begin{subfigure}{0.35\textwidth} \centering \includegraphics[width=1\linewidth]{figures/alibaba_dataset_f1.pdf} \captionof{figure}{Alibaba} \label{fig:alibaba_f1} \end{subfigure} \caption{F1-score variation with the number of VMs corresponding to each algorithm evaluated in this work (\S\ref{sec:algos_evaluated}) on all the datasets (\S\ref{sec:evaluated_datasets}).} \label{algorithms_f1_results} \end{figure*} \subsection{Q2. Anomalous VMMs finding efficiency and scalability} \label{sec:config_finding_efficency_scalability} Next, we verify that our algorithm's detection method scales linearly and compare it against the other algorithms. This experiment is performed with the synthetic dataset, since we can increase the number of VMs per VMM in it. We linearly increased the number of VMs from 1 to 100 and repeatedly duplicated our dataset in time ticks by adding Gaussian noise. Figure~\ref{algorithms_time_results_scalability} shows the scalability of the various algorithms' detection methods with respect to different parameters. One can observe that \textit{IAD's} detection method scales linearly in both parameters. However, when the number of VMs is scaled to \texttt{100}, \textit{IAD} takes a longer time compared to the others, but it still provides results in under \texttt{2.5s}, which is modest considering the accuracy we get with the algorithm. On the time-ticks parameter, \textit{BNB}, \textit{BNBOnline} and \textit{IAD} performed similarly to each other, while \textit{IF} and \textit{IFF} provide results in under \texttt{1} second; however, their accuracy is worse compared to the others on all the datasets, and they have the extra overhead of training. The \textit{ECP} algorithm's results are not shown, since it requires more than an hour to perform the detection with \texttt{100} VMs and \texttt{100,000} time ticks.
\begin{figure*}[t] \centering \begin{subfigure}{.35\linewidth} \centering \includegraphics[width=1\linewidth]{figures/scalability_num_vms.pdf} \captionof{figure}{With number of VMs} \label{fig:train_time} \end{subfigure} \begin{subfigure}{0.35\linewidth} \centering \includegraphics[width=1\linewidth]{figures/scalability_num_time_ticks.pdf} \captionof{figure}{With number of time ticks} \label{fig:predict_time} \end{subfigure} \caption{The algorithms' detection method scalability with respect to different parameters.} \label{algorithms_time_results_scalability} \end{figure*} \section{Conclusion} \label{sec:conclusion} We propose the \textit{IAD} algorithm for the indirect detection of anomalous VMMs by solely using the resource utilization data of the VMs hosted on them as the primary metric. We compared it against popular change detection algorithms which could also be applied to the problem. We showcased that the \textit{IAD} algorithm outperforms all the others on average across the four datasets by \texttt{11\%}, with an average F1-score of \texttt{83.7\%}. We further showcased that the \textit{IAD} algorithm scales linearly with the number of VMs hosted on a VMM and the number of time ticks. It takes less than \texttt{2.5} seconds for the \textit{IAD} algorithm to analyze 100 VMs hosted on a VMM and detect whether that VMM is anomalous. This allows it to be easily usable in cloud environments where the fault-detection time requirement is low, and it can quickly help DevOps determine whether a problem originates from the hypervisor or not. Future directions include using other metrics like network and storage utilization to enhance the algorithm's accuracy further. \bibliographystyle{splncs04}
\section{Objective: Low mass dark matter searches} There is a shifting paradigm in the dark matter community to focus its efforts on particle masses outside of the WIMP mass range from 1GeV to 1TeV [1]. The low mass dark matter (LDM) range from 1keV to 1GeV is an interesting mass range for near-term experiments because of the rich population of theory targets and the ease with which existing experiments can adapt to this mass range. A 1MeV dark matter mass corresponds to 1eV energy thresholds in particle detectors, requiring sub-eV resolutions on energy deposits. Since these energies are below electronic excitation levels, phonon detectors are naturally suited for probing this range. A TES-based phonon detector from the SuperCDMS collaboration has demonstrated 3.86eV resolution [2]. Kinetic inductance detectors (KIDs) are already competitive with this resolution and are well-suited for bringing sub-eV energy resolutions to phonon detectors. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\linewidth, keepaspectratio]{LTD19_LDM_mass_range.png} \caption{Energy range of interest for kinetic inductance detectors, compared against existing dark matter direct detection experiments. (Color figure online.)} \end{center} \label{LDM} \end{figure} \section{KID design: optimizing for lower energy thresholds} There are two major design choices that were made to fabricate a device optimized for lower energy thresholds. First, there is only a single resonator designed to be the phonon-collecting resonator. The baseline energy resolution of a KID-based device scales with $\sqrt{\textrm{\# of KIDs}}$, so to minimize the threshold, we design for a single phonon-collecting resonator in the middle of the device. This resonator is made out of aluminum. There are 10 other resonators on the device, which are used for calibration purposes. Second, we minimize the amount of ``dead metal'', which is any metal on the chip that might absorb phonons and not contribute to signal. This is done by fabricating all other features on the device out of niobium. Niobium's $T_c$ is 10x greater than aluminum's, which means that a phonon must be 10x more energetic to break a Cooper pair in the niobium film. These other features include: the bonding pads, the feedline, the other resonators, and even the capacitor of the phonon-collecting resonator. \begin{figure}[htbp] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth, keepaspectratio]{LTD19_device_design.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth, keepaspectratio]{LTD19_device.png} \end{subfigure} \caption{{\it Left}: Device design; {\it red} is niobium and {\it green} is aluminum. {\it Right}: Device in its device box. {\it Both zoom-ins}: aluminum resonator; the top part is the interdigitated capacitor and the bottom is the meandering inductor. (Color figure online.)} \label{device} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth, keepaspectratio, trim= {0 4cm 0 4cm},clip]{LTD19_mattis_bardeen_fits_v2.pdf} \caption{ {\it Left}: Fractional change in resonator frequency versus temperature. {\it Blue}, {\it orange}, and {\it red} correspond to 65mK, 285mK, and 335mK. {\it Left inset}: Magnitude of $S_{21}$ at the entire range of temperatures, with three temperatures highlighted in their corresponding colors to show the shift in resonance frequency. {\it Right}: Resonator internal quality factor versus temperature. Color coding same as before.
{\it Right inset}: Complex $S_{21}$ at the entire range of temperatures, with three temperatures highlighted in their corresponding colors to show the changing quality factor. Fit values are: $\Delta$ = 0.184meV; $\alpha$ = 3.801\%; $f_0$ = 4.2401GHz; and $Q_{i0}$ = 405538. (Color figure online.)} \end{center} \label{mb_fits} \end{figure} \section{KID basics: resonator characterization} To measure the amount of energy that gets deposited in the resonator, we track the number of quasiparticles in the resonator that result from broken Cooper pairs. The conversion of our measurement from electronics units to quasiparticle units requires the superconducting bandgap $\Delta$, which is half of the phonon energy required to break a Cooper pair, and the kinetic inductance fraction $\alpha$. There is a straightforward procedure for measuring these values: \begin{enumerate} \item Measure $S_{21}$ versus frequency using a vector network analyzer. \item Fit for the resonator's resonance frequency $f_r$ and quality factor $Q_r$. \item Repeat steps 1 and 2 at a range of temperatures between $\sim$10\% of $T_c$ and $\sim$30\% of $T_c$. \item Fit for $\Delta$ and $\alpha$ using the Mattis-Bardeen equations [3]. \end{enumerate} The measured values for $\Delta$ and $\alpha$ are close to those seen in a device of similar design. \begin{figure} \centering \begin{subfigure}{0.55\textwidth} \centering \includegraphics[width=\textwidth]{LTD19_noise_and_vna.png} \end{subfigure} \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{LTD19_dissipation_and_frequency_directions.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{LTD19_k1_noise.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{LTD19_k2_noise.png} \end{subfigure} \caption{{\it Top}: $S_{21}$ noise traces are shown for seven different readout powers separated by 5dBm, corresponding to seven different colors, with {\it blue} being the lowest power and {\it pink} being the highest power. {\it Black} points are VNA scans. {\it Bottom}: Power spectral densities for the frequency and dissipation directions. (Color figure online.)} \label{k1 and k2 noise} \end{figure} \section{KID operation: $S_{21}$ readout and its noise} A kinetic inductance detector is read out by measuring $S_{21}(f_r)$ as a function of time: $S_{21}(f_r; t)$. The instrument we use to perform this measurement is an Ettus Research USRP Software Defined Radio device, which uses a 200MHz bandwidth signal that is then digitally mixed to recover $S_{21}(f_r; t)$. During data-taking, $S_{21}(f_{\mathrm{off}};t)$ of an off-resonance tone $f_{\mathrm{off}}$ is simultaneously measured in order to monitor and clean out noise in $S_{21}(f_r; t)$ that is not caused by the resonance, e.g. multiplicative gain and phase noise from the amplifier and USRP. Further low-pass filtering and decimation is done during analysis. \begin{figure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{LTD19_k2_noise_qp_cleaned.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{LTD19_TLS_fit_+_residuals.png} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{LTD19_TLS_power.png} \end{subfigure} \caption{{\it Top left}: frequency direction PSDs before and after cleaning, i.e. removal of noise correlated with the dissipation direction.
{\it Top right}: cleaned frequency direction PSD of the highest readout power, fit to a model composed of TLS noise and white noise. {\it Top right inset}: residuals of the fit for a subset of the frequency range, along with bounds given by the white noise. {\it Bottom}: TLS level in $\frac{df}{f}$ units versus readout power; power law fit results also shown. (Color figure online.)} \label{TLS figs} \end{figure} The $S_{21}$ versus frequency scan by the vector network analyzer is used to identify the frequency and dissipation directions for $S_{21}(f_r; t)$. This is the typical basis that is used for tracking quasiparticle production in the resonator; the two directions correspond to the directions of maximal change in $Q_r$ and $f_r$ [3]. The Mattis-Bardeen fits for $\Delta$ and $\alpha$ are used to convert $\delta S_{21}(f_r, t)$ into quasiparticle densities in the resonator: $\delta n_{qp,f}(t)$ and $\delta n_{qp,Q}(t)$. Power spectral densities (PSDs) of the quasiparticle density timestreams are calculated along these two directions and are shown for seven different powers in Fig.~\ref{k1 and k2 noise}. In the dissipation direction at the lowest powers ({\it blue} and {\it orange}), the noise is white, and the dependence on power is as expected: a 5dB increase in readout power corresponds to a half decade decrease in the power at all frequencies. This dependence on power persists up through the second highest power ({\it brown}) at frequencies above 10 kHz. At the highest readout powers, there is an unknown non-white noise source that dominates at frequencies below 10 kHz. This manifests itself as a tilt in the noise trace that is most prominently seen at the highest power ({\it pink}). In the frequency direction, the shapes of the power spectral density curves resemble two-level system (TLS) noise [4], but the tilted noise trace is an indication that the unknown noise in the dissipation direction is also present in the frequency direction. Thus, we use the dissipation direction to clean the frequency direction; specifically, this means removal from $\delta n_{qp,f}(t)$ of the components correlated with $\delta n_{qp,Q}(t)$ via the following computation: \begin{equation} \delta n_{qp,f,\mathrm{cleaned}}(t) = \delta n_{qp,f}(t) - A_{Q,f}\delta n_{qp,Q}(t), \quad\mathrm{ where }\quad A_{Q,f} = \frac{\mathrm{Cov}(\delta n_{qp,Q},\delta n_{qp,f})}{\mathrm{Var}(\delta n_{qp,Q})} \end{equation} After the correlated noise is removed from $\delta n_{qp,f}(t)$ and the noise PSDs are recalculated (Fig.~\ref{TLS figs} {\it Top left}), the cleaned noise PSDs are fit to a model that consists of white noise and TLS noise (Fig.~\ref{TLS figs} {\it Top right}). In a frequency band that is dominated by TLS noise, which coincides with the frequency band of the signal template, the fit agrees with the data to a level that is smaller than the white noise (Fig.~\ref{TLS figs} {\it Top right inset}). Furthermore, when the TLS level at 1kHz is plotted versus readout power, we find the dependence on readout power to be $P^{-0.43}$ (Fig.~\ref{TLS figs} {\it Bottom}), which is close to the usual power dependence of TLS: $P^{-0.5}$ [4].
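The cleaning equation above amounts to an ordinary least-squares projection of the frequency-direction timestream onto the dissipation-direction timestream, and is straightforward to implement; a minimal Python sketch of ours (the function name is hypothetical) is:
\begin{verbatim}
import numpy as np

def clean_frequency_direction(dn_f, dn_q):
    """Remove from the frequency-direction timestream dn_f the
    component correlated with the dissipation-direction timestream
    dn_q, following the cleaning equation above."""
    a_qf = np.cov(dn_q, dn_f)[0, 1] / np.var(dn_q, ddof=1)
    return dn_f - a_qf * dn_q

# The cleaned PSD is then recomputed from the cleaned timestream,
# e.g. with scipy.signal.welch(cleaned, fs=sampling_rate).
\end{verbatim}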
\section{Calculating energy resolutions} The baseline energy resolution on the quasiparticle density can be calculated from the following result of the optimal filter formalism [5]: \begin{equation} \sigma^2_A = \left[T\sum^{\frac{N}{2}-1}_{n=-\frac{N}{2}}\frac{|\tilde{s}_n|^2}{J(f_n)}\right]^{-1} \end{equation} where $A$ is taken to be the quasiparticle density in units of $\mu$m$^{-3}$, $T$ is the total duration of the timestream, $N$ is the number of frequency bins, $\tilde{s}_n$ are the Fourier coefficients of the signal template, and $J(f_n)$ is the noise power spectral density. In the provided calculations, we set $J(f_n)$ as the $\delta n_{qp,f,\mathrm{cleaned}}$ PSD (a numerical sketch of this computation is given below). To produce the signal template, we use a novel technique: we drive one of the niobium resonators at its resonance frequency with a large amount of readout power; this breaks Cooper pairs in the resonator, which then recombine and send phonons into the substrate. The phonons are then absorbed by the aluminum readout resonator, and a pulse shape is formed. Importantly, we note that $\sigma_A$ is in units of quasiparticle density. This can be translated into $\sigma_{E,\mathrm{res}}$, the resolution on the energy that is absorbed by the resonator, using $\Delta$ and the resonator's geometry. To compute $\sigma_E$, the resolution on the energy deposited in the substrate, we must assume some phonon collection efficiency $\eta_{ph}$. We assume a 30\% collection efficiency, based off literature values [6]. We emphasize that this is the largest source of uncertainty in our measurement of $\sigma_E$. \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-1em} \centering \textbf{Table of energy resolutions} \\ \begin{tabular}{ | m{2em} | m{6em} | m{6em} | } \hline & TLS-limited device & amplifier-limited device \\ \hline $\sigma_{E,\mathrm{res}}$ & 6 eV & 1.5 eV \\ \hline $\sigma_{E}$ & 20 eV & 5 eV \\ \hline \end{tabular} \vspace{-1em} \end{wrapfigure} Finally, we present a hypothesis on the TLS-limited noise that we see in this device: since we have not seen TLS noise in a similarly designed aluminum-only device [7] and we also see TLS noise in the accompanying niobium resonators, the TLS noise we observe is caused by the niobium cap on the aluminum resonator's capacitor. Thus, we propose a new device that replaces the niobium layer on the capacitor with stoichiometric TiNx, a material that has demonstrated low TLS noise. This material can be tuned to have a $T_c$ in between those of aluminum and niobium, so that we need not degrade our phonon collection efficiency. We can project the noise performance of such a device by using our dissipation direction measurement of the white noise at the lowest readout power and propagating the white noise to its expected level at the highest readout power. \section{Future work: next steps toward sub-eV energy thresholds} The immediate next step is to perform an absolute energy calibration using an LED laser. This has already been installed in the cryostat and testing is already underway [8]. Next, once we have demonstrated amplifier-limited noise on our $S_{21}$ measurement, we can improve that noise by a factor of 5 through the use of a kinetic inductance parametric amplifier. Lastly, there are also plans to replace the aluminum film with a lower-$T_c$ material such as aluminum manganese, which may provide up to a factor of 10 improvement in resolution.
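Returning to the optimal-filter formula for $\sigma_A$ above, a minimal numerical sketch of ours (with hypothetical names) follows. The FFT normalization and the single- versus double-sided PSD convention must, of course, match those used when estimating $J(f_n)$; this sketch assumes a double-sided PSD with $\tilde{s}_n$ taken as $\Delta t$ times the DFT of the template:
\begin{verbatim}
import numpy as np

def baseline_resolution(template, noise_psd, fs):
    """Optimal-filter baseline resolution sigma_A.
    template  : time-domain signal template s(t), N samples
    noise_psd : double-sided noise PSD J(f_n) on the same N FFT bins
    fs        : sampling rate [Hz]"""
    N = len(template)
    T = N / fs                           # total duration of the trace
    s_tilde = np.fft.fft(template) / fs  # ~ continuous-FT coefficients
    inv_var = T * np.sum(np.abs(s_tilde) ** 2 / noise_psd)
    return 1.0 / np.sqrt(inv_var)        # sigma_A (quasiparticle density)
\end{verbatim}
As described above, $\sigma_{E,\mathrm{res}}$ then follows by converting $\sigma_A$ with $\Delta$ and the resonator's geometry, and $\sigma_E$ by further dividing by the assumed phonon collection efficiency $\eta_{ph}$.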
\begin{acknowledgements} We acknowledge the support of the following institutions and grants: NASA, NSTGRO 80NSSC20K1223; Department of Energy, DE-SC0011925F; Fermilab, LDRD Subcontract 672112. \end{acknowledgements} \pagebreak
\section{Introduction} Our experimental knowledge about neutrinos is still relatively small. The results of terrestrial experiments agree with the prediction of the standard model (SM), where neutrinos are massless, left-handed current interacting particles. As a consequence we do not even know if neutrinos have Dirac or Majorana character. There are, however, astrophysical observations and cosmological estimations which, most probably, require massive neutrinos [1]. There is also a first terrestrial experiment with some indication that neutrinos oscillate [2], in which case at least one of them must be massive. The existence of such small-mass neutrinos is predicted by many extensions of the SM. Usually the light neutrinos are accompanied by neutrinos with large mass in such a way that the so-called see-saw mechanism [3] occurs. The production of heavy neutrinos in the future linear colliders depends on their masses and couplings to known leptons and bosons. The couplings of a neutrino below the $M_Z$ mass are strongly restricted by present LEP data [4], so we will concentrate on neutrinos with masses above the $Z_0$ mass. If the explanation of small neutrino masses is given by the see-saw mechanism, then the present experimental bounds for the light (eV-keV-MeV region) and the heavy neutrinos $M_N>M_Z$ give very small mixing angles. With such mixing angles the heavy neutrinos decouple from low energy physics, and the cross section for their production in the future linear colliders is too small to be of experimental interest. There are, however, models where light-heavy neutrino mixings are not connected with the see-saw mechanism. The general idea can be explained by an elementary example of the `light' $(\nu)$ and the `heavy' (N) neutrino. Let us assume that in the $\left( \nu, N \right)^T$ basis the neutrino mass matrix is \begin{equation} M=\left( \matrix{ a & b \cr b& c } \right), \end{equation} where for simplicity we assume that all elements $a,b,c$ are real numbers. The masses and the mixing angle are given by \begin{equation} m_{1,2}=\frac{1}{2}\left(a+c\mp\sqrt{(a-c)^2+4b^2} \right), \end{equation} and \begin{equation} \sin{2\xi}=\frac{2b}{\sqrt{(a-c)^2+4b^2}}. \end{equation} There are two ways of predicting the light-heavy spectrum of neutrino masses. One is the see-saw mechanism where $a=0$, $c>>b$, and then \begin{equation} \mid m_1 \mid \simeq \frac{b^2}{c},\;\;\; \mid m_2 \mid \simeq c>>m_1, \end{equation} and, unavoidably, \begin{equation} \xi \simeq \frac{b}{c} \simeq \sqrt{\frac{\mid m_1 \mid}{m_2}}<<1. \end{equation} The other one, in which we assume that $a \neq 0$ and, due to an internal symmetry, $ac=b^2$, gives \begin{eqnarray} m_1&=&0, \nonumber \\ m_2&=&a+c, \end{eqnarray} and \begin{equation} \sin{2\xi}=\frac{2\sqrt{ac}}{a+c}. \end{equation} If the symmetry, which at the tree level gives the relation $ac=b^2$, is broken, we obtain $$m_1 \neq 0,\;\; m_1 << m_2$$ in higher order (see e.g. [5]). In this class of models $\sin{2\xi}$ is not connected with the ratio $m_1/m_2$ and can be large $( \sin{2\xi} \simeq 1)$ for $a \simeq c$. Any model realizing this idea in a natural way is an alternative to the see-saw mechanism and helps to explain the spectrum of neutrino masses. Several kinds of such models have been considered in the literature [6]. In these scenarios the mixing angles are independent parameters not connected to the neutrino masses and are only bound by existing experimental data. In this paper we derive such a bound on the mixing parameters which is model independent.
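As a numerical check of the two scenarios above, the following sketch diagonalizes the $2\times2$ mass matrix of Eq. (1) and evaluates Eqs. (2), (3), and (7); the values of $a$, $b$, $c$ are illustrative only.
\begin{verbatim}
import numpy as np

def masses_and_mixing(a, b, c):
    # eigenvalues and mixing of M = [[a, b], [b, c]], Eqs. (2)-(3)
    disc = np.sqrt((a - c) ** 2 + 4 * b ** 2)
    m1 = 0.5 * (a + c - disc)
    m2 = 0.5 * (a + c + disc)
    sin2xi = 2 * b / disc
    return m1, m2, sin2xi

# see-saw: a = 0, c >> b -> |m1| ~ b^2/c and small mixing, Eqs. (4)-(5)
print(masses_and_mixing(0.0, 1.0, 1.0e3))

# internal symmetry ac = b^2 with a ~ c -> m1 = 0 and large mixing,
# Eqs. (6)-(7): sin(2 xi) = 2 sqrt(ac)/(a + c)
a, c = 1.0, 1.2
print(masses_and_mixing(a, np.sqrt(a * c), c))
\end{verbatim}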
We also assume that heavy neutrinos exist with such masses that they can be produced in future $e^+e^-$ colliders [7]. With these assumptions we determine the cross section for the production of heavy and light neutrinos in future $e^+e^-$ colliders. We also consider the decay of heavy neutrinos $N \rightarrow W^+l^-$ or $W^-l^+$ and the angular distribution of the charged leptons in the total CM system. The decay channel is easily distinguished from the charged lepton production in various background processes where $W^{\pm}$ pair production and decay are dominant. The effect of the lightest, SM Higgs particle on the process $e^+e^- \rightarrow \nu N (W^+e^-\; or\; W^-e^+)$ is also discussed. The process of production of heavy neutrinos in $e^+e^-$ colliders has already been considered in the literature [8]. However, to our knowledge, an analysis with all the details mentioned above has not been performed. In the next Chapter the bounds on mixing matrix elements using the full experimental information are given. In Chapter 3 the angular distribution for the final electron (positron) in the process \arraycolsep0.5mm \begin{equation} e^+e^- \rightarrow \begin{array}[t]{ll} \nu & N \qquad \\ & \hookrightarrow e^{\pm}W^{\mp} \end{array} \end{equation} is calculated. Conclusions are given in Chapter 4. \section{Mixing matrix elements.} The cross sections for production and decay of heavy neutrinos (Eq.~(8)) are given in the Appendix (Eqs.~A.1,A.2,A.3). The mixing matrix elements $K_{Nl}$ and $\Omega_{N\nu}$ of the lepton sector analog of the Kobayashi-Maskawa matrices [9] determine the magnitude of the cross section. More precisely, the helicity amplitudes are proportional to \begin{eqnarray} \left( K_{Ne} \right)^2 K_{\nu e} & \mbox{\rm in the t and u channels,} \nonumber \\ \mbox{\rm and} \hspace{7 cm} && \nonumber \\ K_{Ne}\Omega_{N\nu}\;\;\;\; \mbox{\rm where } \;\;\; \Omega_{N\nu}=\sum\limits_{l=e,\mu,\tau}K_{Nl} K_{\nu l}^{\ast}& \mbox{\rm in the s channel.} \end{eqnarray} From the present experimental data we are not able to determine all elements of the K matrix. Fortunately, to a good approximation only one mixing matrix element, $K_{Ne}$, between the electron and the lightest heavy neutrino N, decides about the size of the cross section, and it is possible to determine a bound on it from existing experimental data. Phases of the $K_{Nl}$'s have no influence (there is no t-u interference), and this means that no effects of CP violation are seen in the process [10]. In the previous paper [11] we have analyzed the existing experimental data which restrict the mixing matrix elements. Three different combinations of light and heavy neutrino masses and their mixings with leptons can be limited: \\ (i) from the lack of lepton number violating processes (e.g. $\mu \rightarrow e \gamma,\mu \rightarrow 3e, \mu \rightarrow e$ conversion in nuclei [12]) and from the number of light neutrino species $N_{\nu}$ it is possible to get \begin{equation} \sum_{N(heavy)} \mid K_{Ne} \mid^2 \leq \kappa^2, \end{equation} and the lack of a signal in neutrinoless double-$\beta$ decay $(\beta\beta)_{0\nu}$ gives two bounds: \\ (ii) for the light neutrinos \begin{equation} \mid \sum_{\nu (light)}K_{\nu e}^2 m_{\nu} \mid < \kappa_{light}^2, \end{equation} (iii) for the heavy neutrinos \begin{equation} \mid \sum_{N(heavy)}K_{Ne}^2\frac{1}{m_N} \mid < \omega^2.
\end{equation} The matrix K must be unitary, which means that \\ (iv) \begin{equation} \sum_{\nu (light)} \mid K_{\nu e} \mid^2+\sum_{N (heavy)} \mid K_{N e} \mid^2=1. \end{equation} In paper [11] we have also used the constraints which follow from the lack of Higgs triplets in the considered gauge models. As a result, in first order, the mass term for left-handed neutrinos does not appear. Here we omit this assumption. In this way the limits which we get are model independent. To find the inequalities (10)-(12) only one model assumption is made, i.e., the absence of right-handed currents; hence our considerations are valid for any model without right-handed charged currents. We know, however, that due to the large mass of the right-handed gauge boson(s) $W_R^{\pm}$, the influence of right-handed currents on the production of one light and one heavy neutrino is marginal [13]. Using the restrictions (i)-(iv), the upper bound on the $K_{Nl}$ mixing depends on 1) the number of heavy neutrinos ($n_R$) and 2) their CP parities ($\eta_{CP}$). $\bullet$ $n_R=1$ For a heavy neutrino with mass less than 1 TeV ($M<1$ TeV) we get from relation (12) \begin{equation} \mid K_{Ne} \mid^2 < \omega^2 M \end{equation} and the total cross section is bounded by the small value of $\omega$ (see next Chapter). $\bullet$ $n_R=2$ There are two heavy neutrinos with masses $M_1=M$ and $M_2=AM$ ($A \geq 1$). The couplings depend on the CP parities of both neutrinos. If they are the same, e.g. $\eta_{CP}(N_1)=\eta_{CP}(N_2)=+i$, then the mixing parameters can be treated as real, $K_{N_1e}=x_1$ and $K_{N_2e}=x_2$. The relations (10) and (12) give \begin{eqnarray} x_1^2+x_2^2 & \leq & \kappa^2, \nonumber \\ \mid x_1^2+\frac{x_2^2}{A} \mid & \leq & \omega^2 M, \end{eqnarray} and the situation is the same as in the case $n_R=1$ (Eq.~(14)): the coupling of the $N_1$ neutrino is small, $x_1^2 \leq \omega^2 M$. If, however, the heavy neutrinos have opposite CP parities, $\eta_{CP}(N_1)= -\eta_{CP}(N_2)=i$, then $K_{N_1e}=x_1,\;K_{N_2e}=ix_2$ and the relations (10) and (12) give \begin{eqnarray} x_2^2 & \leq & \kappa^2-x_1^2, \nonumber \\ x_2^2 & \geq & A (x_1^2-\omega^2M), \nonumber \\ {\rm and} \;\;\;\;\;\;\;\;\;\; && \nonumber \\ x_2^2 & \leq & A (x_1^2+\omega^2M). \end{eqnarray} A sketch of the region in the $x_1^2 \leftrightarrow x_2^2$ plane of still experimentally acceptable mixing parameters is shown in Fig.1. The maximum value of $K_{N_1e}^2$ equals \begin{equation} ( K_{N_1e}^2 )_{max} = \frac{\kappa^2+AM\omega^2}{A+1}. \end{equation} \vspace{6 cm} \begin{figure}[h] \special{psfile=9fig1.ps angle=0 hscale=75 vscale=75 hoffset=25 voffset=-265} \end{figure} {\footnotesize Fig.1 Sketch of the region in the $x_1^2 \leftrightarrow x_2^2$ plane of still experimentally acceptable mixing parameters for two heavy neutrinos. The maximum value of $x_1^2$ equals $(x_1^2)_{max}=\frac{\kappa^2+AM\omega^2}{A+1}$ and approaches $\frac{\kappa^2+M\omega^2}{2}$ for $A \rightarrow 1$.} \newpage $\bullet$ $n_R=3$ If the CP parities of all neutrinos are the same, $\eta_{CP}(N_1)=\eta_{CP}(N_2)=\eta_{CP}(N_3)=\pm i$, then all couplings are small and the same inequality (14) as in the $n_R=1,2$ cases restricts the $K_{N_1e}$ mixing. A more interesting situation arises if we assume that not all $\eta_{CP}$'s of the neutrinos are the same. Let us assume that $\eta_{CP}(N_1)=\eta_{CP}(N_2)=-\eta_{CP}(N_3)=+i$; then $K_{N_1e}=x_1,\; K_{N_2e}=x_2$ and $K_{N_3e}=ix_3$.
From relations (10) and (12) we obtain three inequalities ($M_1=M,\;M_2=AM,\;M_3=BM$) \begin{eqnarray} x_3^2 & \leq & -x_1^2-x_2^2+\kappa^2, \nonumber \\ x_3^2 & \geq & Bx_1^2+\frac{B}{A}x_2^2-BM\omega^2, \nonumber \\ {\rm and} \;\;\;\;\;\;\;\;\;\; && \nonumber \\ x_3^2 & \leq & Bx_1^2+\frac{B}{A}x_2^2+BM\omega^2. \end{eqnarray} The region in the $(x_1^2,x_2^2,x_3^2)$ space of still experimentally acceptable parameters is shown in Fig.2. The maximum value of $K_{N_1e}^2$ equals \begin{equation} (K_{N_1e}^2)_{max}=\frac{\kappa^2+BM\omega^2}{B+1} \end{equation} and can be as large as $(\kappa^2+M\omega^2)/2$ for $B \rightarrow 1$. \\ The other combinations of $\eta_{CP}$'s lead to a bound on $K_{N_1e}^2$ which is the same as in the case $n_R=1$ or to that given by Eq.(19). So finally we can state that, regardless of the number of heavy neutrinos, the most optimistic bound on $\mid K_{Ne} \mid^2$ is $\mid K_{Ne} \mid^2< \omega^2 M$ if there are no correlations between elements of the K matrix, or $\mid K_{Ne} \mid^2 < (\kappa^2+\omega^2M)/2$ if there are correlations and some $\eta_{CP}$'s of the heavy neutrinos are opposite. \newpage \ \\ \vspace{8 cm} \begin{figure}[ht] \special{psfile=9fig2.ps angle=0 hscale=80 vscale=80 hoffset=50 voffset=0} \vspace{0.5 cm} \end{figure} {\footnotesize Fig.2 Sketch of the region in the $(x_1^2,x_2^2,x_3^2)$ space of still experimentally acceptable mixing parameters for three heavy neutrinos ($n_R=3$). The region of acceptable parameters is bounded by the three reference frame planes ($x_1^2,x_2^2$),($x_1^2,x_3^2$),($x_2^2,x_3^2$) and the planes indicated in the Figure. The maximum value of $x_1^2$ equals $(x_1^2)_{max}=\frac{\kappa^2+BM\omega^2}{B+1}$ and approaches $\frac{\kappa^2+M\omega^2}{2}$ for $B \rightarrow 1$.} \section{Numerical results} \subsection{Production and decay of heavy neutrinos} The light neutrinos will not be detected in the process $e^+e^- \rightarrow \nu N$, so we can only measure the sum \begin{equation} \sigma_{tot}=\sum\limits_{i=e,\mu,\tau} \sigma( e^+e^- \rightarrow \nu_i N), \end{equation} over all light neutrinos. For N we take the lightest heavy neutrino, $N=N_1$. From Eq.(A.1) (neglecting charged lepton masses) \begin{eqnarray} \sigma_{tot} & \propto & \mid K_{Ne} \mid^2 \left( \mid K_{\nu_e e} \mid^2+\mid K_{\nu_{\mu} e} \mid^2 + \mid K_{\nu_{\tau} e} \mid^2 \right) \nonumber \\ &=& \mid K_{Ne} \mid^2 ( 1- \sum_N \mid K_{Ne} \mid^2 )^2 \simeq \mid K_{Ne} \mid^2. \end{eqnarray} To calculate the cross section $\frac{d\sigma}{d \cos{\Theta_e}}$ (Eq.(A.3)) we also need to know the total decay width $\Gamma_N$ of the heavy neutrino. From Eqs. (A.2) we can calculate the partial decay width for the $$N \rightarrow W^{\pm}l^{\mp}\;\;\;\; \mbox{\rm decay} $$ \begin{equation} \Gamma(N \rightarrow W^{\pm}l^{\mp})=\frac{ \mid K_{Nl} \mid^2}{8 \sqrt{2} \pi} \frac{G_F}{m_N^3}(m_N^2+2m_W^2)(m_N^2-m_W^2)^2, \end{equation} and the $$N \rightarrow Z \nu_l \;\;\;\;\; \mbox{\rm decay} $$ \begin{equation} \Gamma(N \rightarrow Z \nu_l)=\frac{ \mid \Omega_{N\nu_l} \mid^2}{8 \sqrt{2} \pi} \frac{G_F}{m_N^3}(m_N^2+2m_Z^2)(m_N^2-m_Z^2)^2. \end{equation} Whether the decay channels of N into the lightest Higgs particle H and light neutrinos $\nu_l$, $N \rightarrow \nu_l H$, are open depends on the relation between the masses $m_N$ and $m_H$; if $m_N>m_H$ the channels are open and (see e.g. [5]) \begin{equation} \Gamma(N \rightarrow H \nu_l)=\frac{ \mid \Omega_{N \nu_l} \mid^2}{8 \sqrt{2} \pi} \frac{G_F}{m_N}(m_N^2-m_H^2)^2.
\end{equation} We will consider both the situation where $m_N>m_H$ and that where $m_N<m_H$, in which the decay channel is closed. However, since we are looking for a relatively light $m_N$ ($\sim 100 \div 200$ GeV), the situation where $m_N<m_H$ (if $m_H \sim 300$ GeV) seems more plausible. We calculate the total decay width from \begin{equation} \Gamma_N=\sum_l \left( {2\Gamma(N \rightarrow l^+W^-)+ \Gamma(N \rightarrow \nu_l Z) +\Gamma(N \rightarrow \nu_l H)\Theta(m_N-m_H)} \right) \end{equation} where \begin{equation} \sum_l \Gamma(N \rightarrow l^+W^-) \propto \sum\limits_{l=e,\mu,\tau} \mid K_{Nl} \mid^2 \simeq \mid K_{Ne} \mid^2, \end{equation} \begin{equation} \sum_l \Gamma(N \rightarrow \nu_l H), \sum_l \Gamma(N \rightarrow \nu_l Z) \propto \sum_l \mid \Omega_{N\nu_l} \mid^2 \simeq \sum_l \mid K_{Nl} \mid^2 \simeq \mid K_{Ne} \mid^2. \end{equation} In the approximations made in Eqs. (21), (26), and (27) we assume that in each column of the K matrix ($l=e,\mu,\tau$) $$(K_{\nu_el}, K_{\nu_{\mu}l}, K_{\nu_{\tau}l}, K_{N_1l}, K_{N_2l}, ...)^T$$ one element $K_{\nu_ll} \simeq 1$ (lepton universality) and only one coupling between the heavy neutrinos and the lepton is sizeable, $K_{Nl} \simeq x$. All other couplings are very small and we neglect them. The calculated decay width $\Gamma_N$, normalised to the factor $\mid K_{Ne} \mid^2$, is given for various masses $m_N$ in Table 1. Now we have all the ingredients to calculate the electron angular distribution in the process $$ e^+e^- \rightarrow \nu N \rightarrow \nu e^-W^+.$$ In our approximation only one parameter, $\mid K_{Ne} \mid^2$, decides about the value of the cross section. For $n_R=1$, regardless of the $\eta_{CP}$ of the heavy neutrino, and for $n_R>1$ with the assumption that the $\eta_{CP}$'s of all neutrinos are the same, $\mid K_{Ne} \mid^2$ is bounded by the lack of neutrinoless double $\beta$ decay (Eq.(14)). There are problems with estimating the role of heavy neutrinos in the $(\beta\beta)_{0\nu}$ process, as the nuclear structure matrix elements are calculated with limited accuracy [14]. The best limit is found from the absence of neutrinoless double beta decay in $^{76}Ge$ by the Heidelberg-Moscow collaboration [15] $$\omega^2 < 2 \cdot 10^{-5}\; \mbox{\rm TeV}^{-1}.$$ There are also other estimations of $\omega^2$. In paper [16] it was found that $$\omega^2 < 2.8 \cdot 10^{-5}\; \mbox{\rm TeV}^{-1}.$$ In Table 2 we give the maximum values of $\sigma_{tot} (e^+e^- \rightarrow \nu N)$ (Eq.(20)) for various heavy neutrino masses $m_N$ and different total energies $\sqrt{s}$. The value of $\omega^2$ determines $\sigma_{tot}(max)$; since $\sigma_{tot} \propto \omega^2$, the values of the total cross section for other $\omega^2$ can be easily obtained from the Table. As the maximum value of $\mid K_{Ne} \mid^2$ is proportional to $m_N$ (see Eq.(14)), the cross section (Eq.(20)) increases with the heavy neutrino mass, with the exception of $m_N \rightarrow \sqrt{s}$ at the end of the phase space. For $n_R>1$ and for different values of the $\eta_{CP}$ of the heavy neutrinos, the bound from $(\beta\beta)_{0\nu}$ (Eq.(14)) is not so crucial and $\mid K_{Ne} \mid^2$ can be much larger (Eqs. (17) and (19)). In both considered cases, $n_R=2$ and $n_R=3$, the largest possible value is \begin{equation} \mid K_{Ne} \mid^2_{max} \rightarrow \frac{ \kappa^2+M[TeV]\omega^2}{2} \stackrel{M \leq 1\;TeV}{\longrightarrow} \frac{\kappa^2}{2} \end{equation} for almost degenerate heavy neutrinos ($A \rightarrow 1$ for $n_R=2$; $A >> B$, $B \rightarrow 1$ for $n_R=3$).
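For orientation, the following minimal sketch evaluates the partial widths, Eqs. (22)-(24), the total width, Eq. (25), in the approximation of Eqs. (26)-(27) (everything normalized to $\mid K_{Ne} \mid^2$), and the maximal coupling of Eqs. (17)/(19). All masses are in GeV, and the printed width can be compared with Table 1.
\begin{verbatim}
import numpy as np

GF, MW, MZ = 1.16637e-5, 80.4, 91.19     # GeV^-2 and GeV

def gamma_NW(mN):
    # Eq. (22) per lepton, in units of |K_Nl|^2
    if mN <= MW:
        return 0.0
    return GF / (8 * np.sqrt(2) * np.pi * mN**3) \
        * (mN**2 + 2 * MW**2) * (mN**2 - MW**2)**2

def gamma_NZ(mN):
    # Eq. (23) per light neutrino, in units of |Omega_{N nu}|^2
    if mN <= MZ:
        return 0.0
    return GF / (8 * np.sqrt(2) * np.pi * mN**3) \
        * (mN**2 + 2 * MZ**2) * (mN**2 - MZ**2)**2

def gamma_NH(mN, mH):
    # Eq. (24) per light neutrino, open only for mN > mH
    if mN <= mH:
        return 0.0
    return GF / (8 * np.sqrt(2) * np.pi * mN) * (mN**2 - mH**2)**2

def gamma_total(mN, mH):
    # Eq. (25) with the approximation (26)-(27): one dominant coupling
    return 2 * gamma_NW(mN) + gamma_NZ(mN) + gamma_NH(mN, mH)

def K2_max(kappa2, omega2_per_TeV, M_TeV, A=1.0):
    # Eqs. (17)/(19): maximal |K_Ne|^2 for opposite CP parities,
    # approaching kappa^2/2 for degenerate heavy neutrinos, Eq. (28)
    return (kappa2 + A * M_TeV * omega2_per_TeV) / (A + 1.0)

print(gamma_total(100.0, 100.0))   # ~0.21 GeV, cf. Table 1 (0.22)
print(K2_max(0.0054, 2e-5, 0.1))   # ~kappa^2/2 for M = 100 GeV
\end{verbatim}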
In the case B=1 there are two Majorana neutrinos with the same mass and opposite CP parities, which together form a Dirac neutrino. In our studies, however, the calculation of the cross section for Dirac neutrino production is not performed. Different values of $\kappa^2$ have been found for the model with singlet neutrinos: $\kappa^2 < 0.015$ [17] and the more recent $\kappa^2<0.0054$ [18]. If we use the recent LEP result for the number of light neutrino species, $N_{\nu}=2.989 \pm 0.012$ [19], we obtain $\kappa^2< 0.0055$, a value very close to the global fit given in [18]. In Table 3 the total cross section $\sigma( e^+e^- \rightarrow \nu N)$ for various $m_N$ and $\sqrt{s}$ is presented. Results are given for $\kappa^2=0.0054$. Since $\sigma_{tot} \propto \mid K_{Ne} \mid^2$, values of $\sigma_{tot}$ for other $K_{Ne}$ can be easily obtained from this Table. In Fig. 3 we present the angular distribution of the final electron in $e^-e^+ \rightarrow \nu (N \rightarrow e^-W^+)$ for various heavy neutrino masses, $m_N=100,150$ and 200 GeV, calculated for the maximum possible value of $\mid K_{Ne} \mid^2 \simeq \frac{\kappa^2}{2}$. For $\kappa^2$ we take the value $\kappa^2=0.0054$. Results are given for the Next Linear Collider with CM energy $\sqrt{s}=500$ GeV. This distribution has forward-backward symmetry. To show the influence of the Higgs particle, we present results for $m_H=100$ GeV on the left side of the Figure $(-1 \leq \cos{\Theta_e} \leq 0)$, while on the right side $(0 \leq \cos{\Theta_e} \leq 1)$ the Higgs decay channels are excluded. For a higher Higgs mass the total width $\Gamma_N$ is smaller and, due to the greater value of the branching ratio for the $N \rightarrow lW$ decay, the cross section $\frac{d\sigma}{d \cos{\Theta_e}}$ is larger. Numerically, the Higgs boson has no influence on the cross section for $m_N=100$ GeV (for $m_H \geq 100$ GeV the $N \rightarrow \nu H$ decay channel is closed), and the influence of the Higgs particle ($m_H=100$ GeV) is approximately 10\% and 15\% for $m_N=150$ and 200 GeV, respectively. For higher energies the final electron distribution is more peaked in the forward-backward direction $( \cos{\Theta_e}=\pm1)$. This is the result of $W^{\pm}$ exchange in the t and u channels and the small contribution of the s-channel $Z^0$ exchange. For $\sqrt{s}=0.5$ TeV the $Z^0$ exchange mechanism gives only a 2\% contribution to the total cross section [12] and is smaller for higher energies. As an example, we compare the final electron distributions produced by the decay of a heavy neutrino with mass $M_N=100$ GeV for $\sqrt{s}=500$ and 1000 GeV (Fig. 4). \newpage \ \\ \vspace{8 cm} \begin{figure}[h] \special{psfile=9fig3.ps angle=270 hscale=80 vscale=80 hoffset=0 voffset=400} \vspace{0.5 cm} \end{figure} \newline {\footnotesize Fig.3 Distribution of the final electron from a heavy neutrino decay at a $\sqrt{s}=500$ GeV collider with $M_N=100$ GeV (solid line), $M_N=150$ GeV (long-dashed line) and $M_N=200$ GeV (short-dashed line). The left half of the Figure gives results for $m_H=100$ GeV. On the right-hand side the Higgs decay channels are excluded.} \newpage \ \\ \vspace{8 cm} \begin{figure}[h] \special{psfile=9fig4.ps angle=270 hscale=80 vscale=80 hoffset=-50 voffset=350} \vspace{0.5 cm} \end{figure} {\footnotesize Fig.4 Backward distribution of the final electron coming from a heavy neutrino decay ($M_N=100$ GeV) for two different energies: $\sqrt{s}=500$ GeV (dashed line) and $\sqrt{s}=1000$ GeV (solid line).
Forward distribution is the same.} \ \\ Finally, in Fig.5 we present the angular distribution $\frac{d\sigma} {d\cos{\Theta_e}}$ for various heavy neutrino masses, $M_N=100,300$ and 500 GeV (for $m_H=100$ GeV). The cross section becomes higher and more peaked in the forward-backward direction for smaller heavy neutrino masses. The growth of $\frac{d\sigma}{d \cos{\Theta_e}}$ is the result of the increasing $BR(N \rightarrow lW)$ and the increasing $\sigma_{tot} (e^+e^- \rightarrow \nu N)$ (Table 3) for smaller $m_N$. The reduction of the slope in the forward-backward direction with increasing $m_N$ is also kinematically understandable. \newpage \ \\ \vspace{8 cm} \begin{figure}[h] \special{psfile=9fig5.ps angle=270 hscale=80 vscale=80 hoffset=-50 voffset=350} \vspace{0.5 cm} \end{figure} {\footnotesize Fig.5 Backward distribution of the final electron coming from a heavy neutrino decay with mass $M_N=100$ GeV (solid line), $M_N=300$ GeV (short-dashed line) and $M_N=500$ GeV (long-dashed line) for $\sqrt{s}=1$ TeV. Forward distribution is the same.} \ \\ The main background process is the production of a $W^+W^-$ pair followed by the $W^{\pm} \rightarrow e^{\pm} \nu$ decay. The distributions of the charged lepton coming from the heavy neutrino decay considered in this paper and from the $W$ decays in the $e^+e^- \rightarrow W^+W^-$ process differ very much in the forward-backward direction. For high energy ($\sqrt{s}>0.5$ TeV) the angular distribution of electrons coming from the $W^-$ decay is peaked in the forward direction and has a decreasing slope in the backward direction. On the contrary, the $e^-$ coming from the N decay travels equally well in both the forward and the backward direction, with an increasing slope of the angular distribution for $| \cos{\Theta_e} | \rightarrow 1$ (Figs.3-5). \section{Conclusions} We have found the cross section for heavy and light neutrino production in future electron-positron colliders with energy $\sqrt{s} \geq 0.5$ TeV. The bounds on the mixing matrix element $K_{Ne}$ between the heavy neutrino and the electron are found from existing experimental data in models without right-handed currents. The maximum possible value of $K_{Ne}$ is very small if there is only one heavy neutrino ($n_R=1$) or, in the case of a larger number of heavy neutrinos ($n_R>1$), if their CP eigenvalues are the same. This small bound results from the lack of a signal in neutrinoless double-$\beta$ decay. In this case the cross section for the production of light and heavy neutrinos ($e^+e^- \rightarrow \nu N$) is very small, from 0.16 fb for $\sqrt{s}=0.5$ TeV and $m_N=100$ GeV up to 1.6 fb for $\sqrt{s}=2$ TeV and $m_N=1$ TeV. The lack of any signal in neutrinoless double-$\beta$ decay does not give such a restrictive bound if the CP eigenvalues of two or more heavy neutrinos are not the same. In that case the $e^+e^- \rightarrow \nu N$ cross section can be larger and equals $\sigma=240(287)$ fb for $\sqrt{s}=0.5(2)$ TeV and $m_N=100$ GeV. We have also found the angular distribution of the final charged lepton in the total CM frame resulting from the heavy neutrino decay. The angular distribution has forward-backward symmetry, contrary to background processes, e.g. $e^+e^- \rightarrow W^+W^-(\rightarrow e^-\nu_e)$. This property could point to the existence of a heavy neutrino. The charged lepton angular distribution depends on the CM energy, the mass of the heavy neutrino, and the mass of the lightest Higgs boson.
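To put the cross sections quoted above in perspective, the following trivial sketch converts them into expected event counts; the integrated luminosity is an illustrative assumption, not a value taken from this paper.
\begin{verbatim}
def n_events(sigma_fb, lumi_fb_inv):
    # expected events = cross section [fb] x integrated luminosity [fb^-1]
    return sigma_fb * lumi_fb_inv

# sigma values quoted in the Conclusions; assumed 100 fb^-1 of data
for sigma in (0.16, 1.6, 240.0, 287.0):
    print(sigma, "fb ->", n_events(sigma, 100.0), "events")
\end{verbatim}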
\section*{Appendix} \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} We would like to present the cross sections for the production ($e^+e^- \rightarrow \nu N$) and decay of a Majorana neutrino, $N \rightarrow l^{\pm}W^{\mp},\nu'Z$, which are very useful in practical applications. We consider the $Ne^-W^+$ interaction without the right-handed coupling and neglect the electron mass ($m_e=0$). \\ \underline{Production process $e^+e^- \rightarrow \nu N$} The production process $e^-(\sigma)+e^+(\bar{\sigma}) \rightarrow \nu(\lambda)+ N(\bar{\lambda})$ is described by 8 helicity amplitudes ($\Delta\sigma= \sigma-\bar{\sigma},\Delta\lambda=\bar{\lambda}-\lambda$) \begin{equation} M(\Delta\sigma;\lambda,\bar{\lambda})=\left( \sqrt{2} \right)^{1+ \mid \Delta\lambda \mid } \left\{ \frac{A_t}{t-M_W^2}-\frac{A_u}{u-M_W^2}+ \frac{A_s}{s-M_Z^2+iM_Z\Gamma_Z} \right\} D_{\Delta\sigma,\Delta\lambda}^ {1\;\;\ast} \left( \phi, \Theta, 0 \right), \end{equation} where the $A_{t,u,s}$ are functions of the fermion helicities \begin{eqnarray*} A_t(\Delta\sigma,\lambda, \bar{\lambda})&=&K_{Ne}^{\ast}K_{\nu e}\sqrt{1+2 \bar{\lambda}\beta}\delta_{\lambda=-1/2}\delta_{\Delta\sigma=-1}, \\ A_u(\Delta\sigma,\lambda, \bar{\lambda})&=&K_{Ne}K_{\nu e}^{\ast}\sqrt{1-2 \bar{\lambda}\beta}\delta_{\lambda=+1/2}\delta_{\Delta\sigma=-1}, \\ A_s(\Delta\sigma,\lambda, \bar{\lambda})&=&\left[ \frac{1}{2}(-1+2\tan^2{\Theta_W}) \delta_{\Delta\sigma=-1}+ \tan^2{\Theta_W}\delta_{\Delta\sigma=+1} \right] \\ &\times& \left[ \Omega_{N\nu}\sqrt{1+2\bar{\lambda}\beta}\delta_{\lambda=-1/2}- \Omega_{N\nu}^{\ast}\sqrt{1-2\bar{\lambda}\beta}\delta_{\lambda=+1/2} \right], \end{eqnarray*} and $$\beta=\frac{s-s_N}{s+s_N}.$$ Here $s,t,u$ are the ordinary Mandelstam variables; $\Theta$ and $\phi$ are the CM polar and azimuthal angles of the heavy Majorana neutrino N with respect to the initial electron, $\sqrt{s_N}$ is the invariant mass of the heavy neutrino, and $\Theta_W$ is the Weinberg angle. \\ \underline{Decay process $N \rightarrow l^{\pm}W^{\mp},\nu Z$} In the helicity rest frame of N ($\Theta_e^{\ast}$ and $\phi_e^{\ast}$ are the $l^{\pm}$'s or $\nu$'s polar and azimuthal angles, respectively) the decay process $N(\bar{\lambda}) \rightarrow V(\lambda_V)+f(\lambda_f)$ is described by 4 helicity amplitudes (the final fermion mass is neglected, $M_V$ is the gauge boson mass) \begin{eqnarray} T(\bar{\lambda};\lambda_V;\lambda_f)=\sqrt{s_N-M_V^2}F_{\lambda_V \lambda_f}{D_{\bar{\lambda},\lambda_f-\lambda_V}^{1/2\;\; \ast}} (\phi_f^{\ast}, \Theta_f^{\ast},0) \end{eqnarray} where \begin{eqnarray*} F_{++}&=&\sqrt{2}X,\;\;F_{--}=\sqrt{2}Y,\;\;F_{0+}=\frac{\sqrt{s_N}}{M_V}X, \\ F_{0-}&=&\frac{\sqrt{s_N}}{M_V}Y,\;\;F_{+-}=F_{-+}=0 \end{eqnarray*} and $$ \left\{ \begin{array}{lll} X=-\frac{e}{\sqrt{2}\sin{\Theta_W}}K_{Ne},\;& Y=0\;\; & \mbox{\rm for}\; N \rightarrow W^-l^+, \cr X=0,\;\; & Y=\frac{e}{\sqrt{2}\sin{\Theta_W}}K_{Ne}^{\ast},\;\;\; & \mbox{\rm for}\; N \rightarrow W^+l^-, \cr X=-\frac{g}{2\sin{\Theta_W}\cos{\Theta_W}}\Omega_{N\nu},\; & Y=\frac{g}{2\sin{\Theta_W}\cos{\Theta_W}}\Omega_{N\nu}^{\ast},\; & \mbox{\rm for}\; N \rightarrow \nu Z. \end{array} \right.
$$ \\ \underline{Full cross section} The angular distribution of the final lepton in the $e^+e^-$ CM frame in the process $e^+e^- \rightarrow \nu N(\rightarrow e^{\pm}W^{\mp})$ is given by ($\Theta_e$ and $\phi_e$ are the CM polar and azimuthal angles of the final $e^{\pm}$ with respect to the initial electron $e^-$) \begin{eqnarray} \frac{d\sigma}{d\cos{\Theta_e}}&=& \frac{G_F^2M_W^2}{ 2^{14}s^2\pi^5} \int_0^{2\pi}d\phi \int_{-1}^1 d\cos{\Theta}\int_0^{2\pi}d\phi_e \int_{M_W^2}^sds_N \nonumber \\ &&{\bf J} \frac{(s_N-M_W^2)(s-s_N)}{s_N \left[ (s_N-m_N^2)^2+M_N^2 \Gamma_N^2 \right] } \nonumber \\ &&\sum\limits_{\Delta\sigma;\lambda,\lambda_V,\lambda_f} \mid \sum_{\bar{\lambda}}M\left( \Delta\sigma;\lambda,\bar{\lambda} \right) T\left( \bar{\lambda};\lambda_V,\lambda_f \right) \mid^2 \end{eqnarray} where ${\bf J}$ is the Jacobian of the transformation between the $e^{\pm}$ angles in the rest frame of the decaying neutrino and the CM frame of the initial colliding leptons \begin{eqnarray} {\bf J}&=&\frac{1-\beta^2}{(1-\beta z)^2w} \left\{ \sin^2{\Theta}\sin^2{\left( {\phi}_e+\phi \right)} \right. \nonumber \\ &+&\left. \left( \cos{\Theta}\sin{{\Theta}_e}- \sin{\Theta}\cos{{\Theta}_e}\cos{\left( {\phi}_e+\phi \right) } \right)^2 \right\} \end{eqnarray} where \begin{eqnarray} w &=&\sin^2{\Theta_e}\sin^2{\left( \phi_e+\phi \right)} \nonumber \\ &+&\left(\cos{\Theta}\sin{\Theta_e} \cos{ \left( \phi_e+\phi \right)}- \sin{\Theta}\cos{\Theta_e} \right)^2, \\ \mbox{\rm and} && \nonumber \\ z&=&\sin{\Theta}\sin{\Theta_e} \cos{ \left( \phi_e+\phi \right) }+ \cos{\Theta}\cos{\Theta_e} . \end{eqnarray} The amplitude $T(\bar{\lambda},\lambda_V,\lambda_f)$ in Eq.(A.2) is defined in the rest frame of the decaying neutrino. We therefore need the exact dependence between the $\Theta_e,\phi_e$ and $\Theta_e^{\ast},\phi_e^{\ast}$ variables. It is given by the relations \begin{equation} \cos{\Theta_e^{\ast}}=\frac{-\beta+z}{1-\beta z}, \end{equation} \begin{equation} \tan{\phi_e^{\ast}}=\frac{\sin{\Theta_e}\sin{\left(\phi_e-\phi \right)}}{ \cos{\Theta}\sin{\Theta_e} \cos{( \phi_e-\phi )}- \sin{\Theta}\cos{\Theta_e}} \end{equation} and \begin{equation} sign(\sin{\phi_e^{\ast}})=sign(\sin{(\phi_e+\phi)}), \end{equation} ($\tan{\phi_e^{\ast}}$ and $sign(\sin{\phi_e^{\ast}})$ determine $\phi_e^{\ast}$ uniquely in the region $0<\phi_e^{\ast}<2 \pi$). \section*{Acknowledgements} This work was partly supported by the Polish Committee for Scientific Research under Grant No.~PB 659/P03/95/08 and by the Curie Sk\l odowska grant MEN/NSF 93-145. \section*{References} \newcounter{bban} \begin{list} {$[{\ \arabic {bban}\ }]$}{\usecounter{bban}\setlength{\rightmargin}{ \leftmargin}} \item G.~Gelmini, E.~Roulet, Rep. Prog. Phys. {\bf 58} (1995) 1207. \item The LSND Collaboration, C.~Athanassopoulos et al., \newline Phys. Rev. Lett. {\bf 75} (1995) 2650; ibid. {\bf 77} (1996) 3082. \item T.~Yanagida, Prog.~Theor.~Phys. {\bf B135} (1978) 66; M.~Gell-Mann, P.~Ramond and R.~Slansky, in `Supergravity', eds. P.~Nieuwenhuizen and D.~Freedman (North-Holland, Amsterdam, 1979) p.315. \item L3 Collaboration, O.~Adriani et al., Phys. Lett. {\bf B295} (1992) 371 and {\bf B316} (1993) 427. \item R.N.~Mohapatra, P.B.~Pal, "Massive neutrinos in physics and astrophysics", World Scientific, 1991. \item D.~Wyler and L.~Wolfenstein, Nucl. Phys. {\bf B218} (1983) 205; R.N.~Mohapatra and J.W.F.~Valle, Phys. Rev. {\bf D34} (1986) 1642; E.~Witten, Nucl. Phys. {\bf B268} (1986) 79; J.~Bernabeu et al., Phys.
Lett. {\bf B187} (1987) 303; J.L.~Hewett and T.G.~Rizzo, Phys. Rep. {\bf 183} (1989) 193; P.~Langacker and D.~London, Phys. Rev. {\bf D38} (1988) 907; E.~Nardi, Phys. Rev. {\bf D48} (1993) 3277; D.~Tommasini, G.~Barenboim, J.~Bernabeu and C.~Jarlskog, Nucl. Phys. {\bf B444} (1995) 451. \item R.~Palmer, "Future accelerators", plenary talk at ICHEP, Warsaw 1996. \item {\bf (a):} For LEPI energy and below, the process $e^+e^- \rightarrow \nu N$ was studied before, see e.g. A.~Ali, Phys. Rev. {\bf D10} (1974) 2801; M.~Gourdin, X.Y.~Pham, Nucl. Phys. {\bf B164} (1980) 387; J.L.~Rosner, Nucl. Phys. {\bf B248} (1984) 503; M.~Ditmar, A.~Santamaria, M.C.~Gonzales-Garcia and J.W.F.~Valle, Nucl. Phys. {\bf B332} (1990) 1; M.C.~Gonzales-Garcia, A.~Santamaria and J.W.F.~Valle, Nucl. Phys. {\bf B342} (1990) 108; J.~Kugo and S.Y.~Tsai, Prog. Theor. Phys. {\bf 86} (1991) 183; J.W.F.~Valle, Nucl. Phys. Proc. Suppl. {\bf 48} (1996) 137 and hep-ph/9603307; A.~Hoefer and L.M.~Sehgal, Phys. Rev. {\bf D54} (1996) 1944;\newline {\bf (b):} above LEPI energy, see e.g. F.~del~Aguila, E.~Laermann and P.~Zerwas, Nucl. Phys. {\bf B297} (1988) 1; E.~Ma and J.~Pantaleone, Phys. Rev. {\bf D40} (1989) 2172; W.~Buchm{\"u}ller and C.~Greub, Nucl. Phys. {\bf B363} (1991) 349 and {\bf B381} (1992) 109; J.~Maalampi, K.~Mursula and R.~Vuopionper{\"a}, Nucl. Phys. {\bf B372} (1992) 23; M.C.~Gonzales-Garcia, O.J.P.~Eboli, F.~Halzen and S.F.~Noaves, Phys. Lett. {\bf B280} (1992) 313; R.~Vuopionper{\"a}, Z. Phys. {\bf C65} (1995) 311. \item see e.g. J.~Gluza and M.~Zra\l ek, Phys. Lett. {\bf B362} (1995) 148. \item J.~Gluza and M.~Zra\l ek, Phys. Rev. {\bf D51} (1995) 4707. \item J.~Gluza and M.~Zra\l ek, Phys. Lett. {\bf B372} (1996) 259. \item B.W.~Lee, R.~Shrock, Phys. Rev. {\bf D16} (1977) 1444; B.W.~Lee, S.~Pakvasa, R.~Shrock, H.~Sugawara, Phys. Rev. Lett. {\bf 38} (1977) 937; W.~Marciano, A.I.~Sanda, Phys. Lett. {\bf B37} (1977) 303; T.P.~Cheng, L.F.~Li, Phys. Rev. {\bf D44} (1991) 1502. \item J.~Gluza and M.~Zra\l ek, Phys. Rev. {\bf D48} (1993) 5093. \item C.A.~Heusch and P.~Minkowski, hep-ph/9611353. \item A.~Balysh et~al., Phys. Lett. {\bf B356} (1995) 450. \item T.~Bernatowicz et~al., Phys. Rev. Lett. {\bf 69} (1992) 2341. \item E.~Nardi, E.~Roulet and D.~Tommasini, Nucl. Phys. {\bf B386} (1992) 239; A.~Ilakovac and A.~Pilaftsis, Nucl. Phys. {\bf B437} (1995) 491. \item A.~Djoudi, J.~Ng and T.G.~Rizzo, hep-ph/9504210. \item A.~Blondel, "Status of the electroweak interactions", plenary talk at ICHEP, Warsaw 1996.
\end{list} \newpage \begin{table}[h] \begin{center} \vspace{ 1cm} \begin{tabular}{|c| c| c|} \cline{1-3} \cline{1-3} & \multicolumn{2}{|c|}{ } \\ $M_N$ [GeV] & \multicolumn{2}{|c|}{ $\Gamma_N^{total}/\mid K_{Ne} \mid^2$ [GeV] } \\ & \multicolumn{2}{|c|}{ } \\ && \\ & $m_H=100$ GeV & $m_H \geq m_N$ \\ && \\ \cline{1-3} \cline{1-3} && \\ 100 & 0.22 & 0.22 \\ && \\ 150 & 2.9 & 2.6 \\ && \\ 200 & 8.7 & 7.2 \\ && \\ 300 & 33.1 & 26.1 \\ && \\ 500 & 160.2 & 143 \\ && \\ 700 & 445.5 & 337.5 \\ && \\ 1000 & 1306 & 984 \\ \cline{1-3} \cline{1-3} \end{tabular} \end{center} \end{table} {\footnotesize {\bf Table 1.} The total width for a heavy neutrino decay, divided by the mixing matrix element $\mid K_{Ne} \mid^2$, with the decay channels $\Gamma \left( N \rightarrow \nu_l H \right) $ (second column) and without these channels (third column) for various heavy neutrino masses $m_N$.} \newpage \begin{table} \begin{center} \vspace{ 0.5cm} \begin{tabular}{|c| c| c| c|} \cline{1-4} \cline{1-4} & \multicolumn{3}{|c|}{ } \\ $M_N$ [GeV] & \multicolumn{3}{|c|}{ $\sigma^{tot}_{max}$ [fb], $n_R=1$ } \\ & \multicolumn{3}{|c|}{ } \\ &&& \\ & $\sqrt{s}=0.5$ TeV & $\sqrt{s}=1$ TeV & $\sqrt{s}=2$ TeV \\ &&& \\ \cline{1-4} \cline{1-4} &&& \\ 100 & 0.18 & 0.2 & 0.2 \\ &&& \\ 150 & 0.25 & 0.3 & 0.3 \\ &&& \\ 200 & 0.31 & 0.4 & 0.4 \\ &&& \\ 300 & 0.34 & 0.6 & 0.6 \\ &&& \\ 500 & - & 0.8 & 1.0 \\ &&& \\ 700 & - & 0.7 & 1.3 \\ &&& \\ 1000 & - & - & 1.6 \\ \cline{1-4} \cline{1-4} \end{tabular} \end{center} \end{table} {\footnotesize {\bf Table 2.} Total cross section $\sigma_{tot} \left( e^+e^- \rightarrow \nu N \right) $ in the $n_R=1$ case (see Eq.(14) with $\omega^2=2 \cdot 10^{-5}\;TeV^{-1}$) for various heavy neutrino masses and three different total energies $\sqrt{s}= 0.5,1,2$ TeV. If $\omega^2 \simeq 80\cdot 10^{-5}\;TeV^{-1}$ [13], all numbers in the Table should be multiplied by 40.} \newpage \begin{table} \begin{center} \vspace{ 0.5cm} \begin{tabular}{|c| c| c| c|} \cline{1-4} \cline{1-4} & \multicolumn{3}{|c|}{ } \\ $M_N$ [GeV] & \multicolumn{3}{|c|}{ $\sigma^{tot}_{max}$ [fb], $n_R>1$} \\ & \multicolumn{3}{|c|}{ } \\ &&& \\ & $\sqrt{s}=0.5$ TeV & $\sqrt{s}=1$ TeV & $\sqrt{s}=2$ TeV \\ &&& \\ \cline{1-4} \cline{1-4} &&& \\ 100 & 240 & 275 & 287 \\ &&& \\ 150 & 227 & 271 & 286 \\ &&& \\ 200 & 209 & 267 & 285 \\ &&& \\ 300 & 155 & 252 & 281 \\ &&& \\ 500 & - & 207 & 270 \\ &&& \\ 700 & - & 138 & 252 \\ &&& \\ 1000 & - & - & 216 \\ \cline{1-4} \cline{1-4} \end{tabular} \end{center} \end{table} {\footnotesize {\bf Table 3.} Total cross section $\sigma_{tot} \left( e^+e^- \rightarrow \nu N \right)$ for various heavy neutrino masses and total energies $\sqrt{s}$, calculated with the largest possible value of $\mid K_{Ne} \mid^2$ ($n_R>1$ case, see Eq.(28)). Results are given for $\kappa^2=0.0054$.} \end{document}
\section{Conclusions} \label{sec:conclusion} We propose ST2Vec, a representation learning based architecture for spatio-temporal trajectory similarity learning in road networks that supports a range of trajectory measures. Extensive experiments using three real data sets confirm that ST2Vec achieves higher effectiveness, efficiency, and scalability than state-of-the-art methods. Also, similarity-based case studies of top-$k$ querying and clustering demonstrate the potential of ST2Vec for downstream analytics. In the future, it is of interest to integrate ST2Vec into spatial database management, thus enabling more types of trajectory analyses. \section{Experimental Study} \label{sec:exe} We first describe the experimental settings and then compare the effectiveness of ST2Vec with popular and state-of-the-art baselines. Next, we evaluate model efficiency and scalability. Further, we provide detailed insight into parameter sensitivity to characterize the robustness of ST2Vec. In addition, we include ablation analyses. Moreover, we report on the acceleration capability of ST2Vec over traditional non-learning based measures. Last but not least, we perform two case studies to examine ST2Vec intuitively. \subsection{Experimental Settings} \noindent \textbf{Datasets.} In the experiments, three public real-life trajectory data sets are adopted for the experimental evaluations, including T-Drive$\footnote{\footnotesize https://www.microsoft.com/en-us/research/publication/t-drive-trajectory-data-sample/}$, Rome$\footnote{\footnotesize https://crawdad.org/roma/taxi/20140717/}$, and Xi'an$\footnote{\footnotesize https://outreach.didichuxing.com/research/opendata/}$. \begin{itemize}\setlength{\itemsep}{-\itemsep} \item \textbf{T-Drive} contains 15 million taxi trajectory points from Beijing, China, collected from Feb. 2 to Feb. 8, 2008. \item \textbf{Rome} includes 367,052 trajectories from taxis in Rome, Italy, covering 30+ days. \item \textbf{Xi'an} contains 806,482 trajectories from Xi'an, China, collected during one week by the DiDi company. \end{itemize} Since we target trajectory similarity analytics in road networks, we map-match~\cite{BrakatsoulasPSW05} all trajectories to the corresponding road networks from OpenStreetMap. This way, the raw GPS trajectory data is transformed into time-ordered vertex sequences, in accordance with Definition~\ref{defn:trajectory}. Further, we acquire trajectories from urban areas and remove trajectories with fewer than 10 sampling points. This preprocessing yields 348,210 trajectories in T-Drive, 45,157 trajectories in Rome, and 553,016 trajectories in Xi'an. \noindent \textbf{Evaluation Metrics and Ground-truth.} Following existing trajectory similarity learning studies~\cite{seed, subsimilar, YangW0Q0021, HanWYS021}, we utilize top-$k$ similarity search as the validation method, adopting HR@10, HR@50, and R10@50 as evaluation metrics. The ground-truth results of top-$k$ similarity search are the exact top-$k$ similarity search results obtained when using traditional non-learning based distance measures, including TP~\cite{shang2017trajectory}, DITA~\cite{shang2018dita}, LCRS~\cite{Yuan019}, and NetERP~\cite{KoideXI20}. The basic idea when evaluating the effectiveness of similarity learning is thus to compare the top-$k$ results returned by the learning-based methods with the top-$k$ results produced by the non-learning methods.
Specifically, HR@$k$ denotes the top-$k$ hitting ratio that captures the degree of overlap between a top-$k$ result and the corresponding ground-truth result; and R$k$@$t$ is the top-$t$ recall for the top-$k$ ground truth that captures the fraction of the top-$k$ ground truth in the corresponding top-$t$ result. The closer HR@10, HR@50, and R10@50 are to 1, the higher the model effectiveness (i.e., similarity learning performance). \begin{table*}[tb] \vspace{-5mm} \caption{The Comparison of Similarity Learning on TP, DITA, LCRS, and NetERP Distances using Rome Dataset} \vspace{-2.5mm} \hspace{-5mm} \begin{tabular}{p{1.4cm}<{\centering}|p{1.6cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}} \hline \multirow{2}{*}{Category} & \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{TP [22]} & \multicolumn{3}{c|}{DITA [26]} & \multicolumn{3}{c|}{LCRS [44]} & \multicolumn{3}{c}{NetERP [14]} \\ \cline{3-14} & & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Window\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{w}$ & 0.0976 &0.1499 &0.1775 &0.0898 &0.1417 &0.1756 &0.0405 &0.1552 &0.2488 &0.0053 &0.0397 &0.0723 \\ & Traj2SimVec$^\textit{w}$ & 0.0552 &0.089 &0.0973 &0.0363 &0.0391 &0.0753 &0.0057 &0.026 &0.03138 &0.1191 &0.2235 &0.2728\\ & T3S$^\textit{w}$ & 0.1098 &0.1863 &0.2228 &0.0893 &0.1426 &0.1823 &0.0669 &0.1766 &0.2910 &0.0123 &0.0512 &0.0871 \\ & GTS$^\textit{w}$ & 0.1738 &0.3775 &0.4952 &0.0872 &0.1612 &0.2636 &0.1915 &0.2677 &0.4798 &0.0697 &0.1508 &0.1869 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}LSTM\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{l}$ &0.1225 &0.2177 &0.2613 &0.0932 &0.1499 &0.1950 &0.0864 &0.1981 &0.3308 &0.0172 &0.0608 &0.1004 \\ & Traj2SimVec$^\textit{l}$ & 0.1108 &0.2287 &0.2712 &0.0544 &0.0772 &0.1336 &0.0992 &0.1350 &0.23309 &0.1151 &0.2205 &0.2787 \\ & T3S$^\textit{l}$ &0.1195 &0.2092 &0.2508 &0.0931 &0.1494 &0.1931 &0.0805 &0.1930 &0.3209 &0.0156 &0.0582 &0.0969 \\ & GTS$^\textit{l}$ & 0.1891 &0.4188 &0.5405 &0.0896 &0.1644 &0.2678 &0.2217 &0.2985 &0.5361 &0.0732 &0.1566 &0.2001 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Our TMM\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{t}$ & 0.2092 & 0.4725 & 0.5986 & 0.0931 & 0.1692 & 0.2743 & 0.2606 & 0.3372 & 0.6088 & 0.0606 & 0.1254 & 0.2763 \\ & Traj2SimVec$^\textit{t}$ & 0.2065 & 0.4654 & 0.5821 & 0.0891 & 0.1573 & 0.2477 & 0.2383 & 0.2899 & 0.5299 & 0.2067 & 0.2921 & 0.4711 \\ & T3S$^\textit{t}$ & 0.2473 & 0.4994 & 0.5171 & 0.1876 & 0.2652 & 0.4729 & 0.2278 & 0.3098 & 0.4711 & 0.1217 & 0.2458 & 0.4608 \\ & GTS$^\textit{t}$ & 0.3191 & 0.4229 & 0.6467 & 0.2148 & 0.3538 & 0.5226 & 0.2878 & 0.3185 & 0.5562 & 0.1935 & 0.2746 & 0.4177 \\ \hline \multirow{1}{*}{Our Methods} & ST2Vec & \textbf{0.3834} & \textbf{0.5051} &\textbf{0.7221} &\textbf{0.2421} &\textbf{0.3689} &\textbf{0.5614} &\textbf{0.3178} &\textbf{0.3942} &\textbf{0.7244} &\textbf{0.2117} &\textbf{0.2967} &\textbf{0.5117} \\ \hline \end{tabular} \label{tab:comparisonRome} \end{table*} \begin{table*}[tb] \vspace{-3mm} \caption{The Comparison of Similarity Learning on TP, DITA, LCRS, and NetERP Distances using Xi'an Dataset} \vspace{-2.5mm} \hspace{-5mm} 
\begin{tabular}{p{1.4cm}<{\centering}|p{1.6cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}} \hline \multirow{2}{*}{Category} & \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{TP [22]} & \multicolumn{3}{c|}{DITA [26]} & \multicolumn{3}{c|}{LCRS [44]} & \multicolumn{3}{c}{NetERP [14]} \\ \cline{3-14} & & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Window\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{w}$ & 0.1353 &0.1946 &0.2369 &0.1326 &0.1953 &0.2397 &0.0742 &0.0996 &0.1240 &0.0401 &0.1856 &0.1878 \\ & Traj2SimVec$^\textit{w}$ & 0.0689 &0.1154 &0.1446 &0.0247 &0.0628 &0.0749 &0.0103 &0.0166 &0.0157 &0.1148 &0.2185 &0.2299 \\ & T3S$^\textit{w}$ & 0.1398 &0.2086 &0.2617 &0.1321 &0.1987 &0.2525 &0.0705 &0.0995 &0.1258 &0.0384 &0.1725 &0.1751 \\ & GTS$^\textit{w}$ & 0.1640 &0.2679 &0.3748 &0.0763 &0.1470 &0.2444 &0.0253 &0.0569 &0.0928 &0.1496 &0.2277 &0.2801 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}LSTM\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{l}$ & 0.1763 &0.2352 &0.2879 &0.1331 &0.1854 &0.2384 &0.0746 &0.1324 &0.1637 &0.0573 &0.1861 &0.2106 \\ & Traj2SimVec$^\textit{l}$ &0.1136 &0.1648 &0.2080 &0.0337 &0.0756 &0.1072 &0.0176 &0.0222 &0.0349 &0.1969 &0.3264 &0.3765 \\ & T3S$^\textit{l}$ & 0.1908 &0.2582 &0.3222 &0.1355 &0.1899 &0.2529 &0.0734 &0.1409 &0.1755 &0.0603 &0.1803 &0.2099 \\ & GTS$^\textit{l}$ &0.2995 &0.3941 &0.5125 &0.1254 &0.1727 &0.2818 &0.0607 &0.1912 &0.2474 &0.1627 &0.2465 &0.3456 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Our TMM\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{t}$ &0.2169 &0.4892 &0.6222 &0.2197 &0.2612 &0.4228 &0.1143 &0.1928 &0.4869 &0.1820 &0.2742 &0.4368 \\ & Traj2SimVec$^\textit{t}$ &0.2310 &0.4288 &0.7799 &0.2035 &0.2329 &0.3735 &0.1158 &0.3783 &0.4708 &0.2232 &0.3336 &0.6268 \\ & T3S$^\textit{t}$ &0.2545 &0.3870 &0.5341 &0.2319 &0.4049 &0.5397 &0.1286 &0.1663 &0.3197 &0.1630 &0.2943 &0.4461 \\ & GTS$^\textit{t}$ &0.4190 &0.5363 &0.7937 &0.4086 &0.4011 &0.7778 &0.1049 &0.2375 &0.5235 &0.2318 &0.2939 &0.5112\\ \hline \multirow{1}{*}{Our Methods} & ST2Vec & \textbf{0.4628} &\textbf{0.6014} &\textbf{0.8646} &\textbf{0.4128} &\textbf{0.5367} &\textbf{0.8132} &\textbf{0.1412} &\textbf{0.2893} &\textbf{0.6105} &\textbf{0.3684} &\textbf{0.4247} &\textbf{0.7231} \\ \hline \end{tabular} \label{tab:comparisonXian} \vspace{0mm} \end{table*} \noindent \textbf{Competitors/Baselines.} We compare ST2Vec with all existing similarity learning methods, including NEUTRAJ~\cite{seed}, Traj2SimVec~\cite{subsimilar}, T3S~\cite{YangW0Q0021}, and GTS~\cite{HanWYS021}. Only the code for NEUTRAJ is available; the code for the others is not. Note that GTS offers the state-of-the-art performance. Hence, we first carefully implemented Traj2SimVec, T3S, and GTS according to their descriptions. As our work is the first deep learning based method for spatio-temporal trajectory similarity learning, for fairness of comparison, we extend these competitors with time control, resulting in 12 baselines in three categories. The symbols $\textit{w}$, $\textit{l}$, and $\textit{t}$ are used to indicate the categories of baselines.
\begin{itemize}\setlength{\itemsep}{-\itemsep} \item \textbf{Window-guided baselines ($*^\textit{w}$):} In this category, we distribute trajectories across discrete time slots and perform top-$k$ similarity queries in each slot, resulting in NEUTRAJ$^\textit{w}$, Traj2SimVec$^\textit{w}$, T3S$^\textit{w}$, and GTS$^\textit{w}$. \item \textbf{LSTM-guided baselines ($*^\textit{l}$):} In this category, we feed temporal trajectories directly into an LSTM model, resulting in NEUTRAJ$^\textit{l}$, Traj2SimVec$^\textit{l}$, T3S$^\textit{l}$, and GTS$^\textit{l}$. \item \textbf{Our TMM-guided baselines ($*^\textit{t}$):} In this category, we integrate our temporal trajectory embedding module (i.e., TMM) into the competitors, resulting in NEUTRAJ$^\textit{t}$, Traj2SimVec$^\textit{t}$, T3S$^\textit{t}$, and GTS$^\textit{t}$. \end{itemize} \noindent \textbf{Hyperparameters.} For ST2Vec, we use the UF strategy as the default; for all comparison methods, we use the SF strategy as the default. We split each data set into training, validation, and test sets in the ratio 3:1:6. The default value of $\lambda$ is set to 0.5. We set the spatial and temporal embedding dimensionalities to 128. The number of hidden LSTM units is 128. We set the batch size to 50. We tune the parameters of all methods to obtain their best performance. Moreover, we train the model using Adam~\cite{KingmaB14} with an initial learning rate of 0.001. Finally, we implemented ST2Vec in Python and PyTorch. All experiments were conducted on a server with an Intel Silver 4210R, 2.40GHz CPU, 64-GB RAM, and a GeForce GTX-2080 Ti 11G GPU. All implementation code and the corresponding datasets have been released online$\footnote{\footnotesize Code and data available at https://github.com/ZJU-DBL/ST2Vec}$ for further studies. \subsection{Model Effectiveness Study} To demonstrate the model (i.e., similarity learning) effectiveness, we conduct top-$k$ similarity queries and compare the performance of ST2Vec with all 12 baseline approaches. Tables~\ref{tab:comparisonTdrive},~\ref{tab:comparisonRome}, and~\ref{tab:comparisonXian} list the results on the three datasets. From these results, we provide observations and analyses as follows. We first observe that our TMM-guided baselines significantly outperform the window-guided and LSTM-guided baselines, indicating that the proposed temporal trajectory embedding module is effective. This is because, although the window-based and LSTM-based methods might capture the temporal information to some extent, they ignore the continuous nature of time and periodic patterns, restricting their effectiveness. The second observation is that, within the same category, GTS and ST2Vec outperform the other methods on all metrics. The main reason is that GTS and ST2Vec consider road network topology in spatial correlation modeling, while the other methods only capture the sequence features in free space and cannot embed the structural dependencies in road networks. The third observation is that ST2Vec achieves substantially better accuracy than GTS on all distance measures and all datasets. This reflects the fact that GTS targets POI-based trajectory similarity computation, which disregards the actual travel paths between adjacent POIs. Given a target trajectory, trajectories with the same neighboring POIs are returned as its top-$k$ similarity query results, although the movement paths of such trajectories can be very different from that of the target trajectory.
In contrast, ST2Vec is designed for fine-grained trajectory similarity learning and considers both locations and travel paths between adjacent sample locations. Consequently, ST2Vec is capable of better similarity learning performance. \begin{table*}[] \vspace{-5mm} \caption{Model Scalability Evaluation with Varying Number of Trajectories to Perform Top-$k$ Similarity Computation} \vspace{-2.5mm} \hspace{-4mm} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{c|c|cccc|cccc|cccc|cccc} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{TP Distance} & \multicolumn{4}{c|}{DITA Distance} & \multicolumn{4}{c|}{LCRS Distance} & \multicolumn{4}{c}{NetERP Distance} \\ \cline{3-18} & & \multicolumn{1}{c|}{10k} & \multicolumn{1}{c|}{50k} & \multicolumn{1}{c|}{100k} & 200k & \multicolumn{1}{c|}{10k} & \multicolumn{1}{c|}{50k} & \multicolumn{1}{c|}{100k} & 200k & \multicolumn{1}{c|}{10k} & \multicolumn{1}{c|}{50k} & \multicolumn{1}{c|}{100k} & 200k & \multicolumn{1}{c|}{10k} & \multicolumn{1}{c|}{50k} & \multicolumn{1}{c|}{100k} & 200k \\ \hline \multirow{5}{*}{T-Drive} & NEUTRAJ$^l$ & \multicolumn{1}{c|}{27.81} & \multicolumn{1}{c|}{131.39} & \multicolumn{1}{c|}{261.16} & 534.24 & \multicolumn{1}{c|}{23.75} & \multicolumn{1}{c|}{135.18} & \multicolumn{1}{c|}{258.04} & 537.96 & \multicolumn{1}{c|}{31.10} & \multicolumn{1}{c|}{127.03} & \multicolumn{1}{c|}{261.04} & 529.72 & \multicolumn{1}{c|}{25.13} & \multicolumn{1}{c|}{127.22} & \multicolumn{1}{c|}{257.48} & 538.23 \\ \cline{2-18} & Traj2SimVec$^l$ & \multicolumn{1}{c|}{93.66} & \multicolumn{1}{c|}{458.90} & \multicolumn{1}{c|}{927.87} & 1862.11 & \multicolumn{1}{c|}{92.10} & \multicolumn{1}{c|}{454.28} & \multicolumn{1}{c|}{926.25} & 1865.86 & \multicolumn{1}{c|}{98.55} & \multicolumn{1}{c|}{461.15} & \multicolumn{1}{c|}{924.49} & 1866.50 & \multicolumn{1}{c|}{93.34} & \multicolumn{1}{c|}{456.74} & \multicolumn{1}{c|}{928.39} & 1858.86 \\ \cline{2-18} & T3S$^l$ & \multicolumn{1}{c|}{30.52} & \multicolumn{1}{c|}{146.94} & \multicolumn{1}{c|}{275.91} & 542.53 & \multicolumn{1}{c|}{33.80} & \multicolumn{1}{c|}{142.58} & \multicolumn{1}{c|}{276.96} & 541.31 & \multicolumn{1}{c|}{34.99} & \multicolumn{1}{c|}{147.42} & \multicolumn{1}{c|}{279.50} & 542.94 & \multicolumn{1}{c|}{29.77} & \multicolumn{1}{c|}{148.35} & \multicolumn{1}{c|}{279.40} & 540.53 \\ \cline{2-18} & GTS$^l$ & \multicolumn{1}{c|}{34.65} & \multicolumn{1}{c|}{159.52} & \multicolumn{1}{c|}{299.19} & 602.52 & \multicolumn{1}{c|}{37.67} & \multicolumn{1}{c|}{156.25} & \multicolumn{1}{c|}{297.24} & 597.64 & \multicolumn{1}{c|}{35.63} & \multicolumn{1}{c|}{158.60} & \multicolumn{1}{c|}{300.50} & 607.09 & \multicolumn{1}{c|}{37.68} & \multicolumn{1}{c|}{159.61} & \multicolumn{1}{c|}{296.45} & 606.36 \\ \cline{2-18} & ST2Vec & \multicolumn{1}{c|}{\textbf{30.32}} & \multicolumn{1}{c|}{\textbf{145.94}} & \multicolumn{1}{c|}{\textbf{293.35}} & \textbf{597.71} & \multicolumn{1}{c|}{\textbf{25.81}} & \multicolumn{1}{c|}{\textbf{146.65}} & \multicolumn{1}{c|}{\textbf{293.03}} & \textbf{596.56} & \multicolumn{1}{c|}{\textbf{29.38}} & \multicolumn{1}{c|}{\textbf{143.89}} & \multicolumn{1}{c|}{\textbf{297.92}} & \textbf{593.77} & \multicolumn{1}{c|}{\textbf{28.17}} & \multicolumn{1}{c|}{\textbf{147.26}} & \multicolumn{1}{c|}{\textbf{290.46}} & \textbf{598.88} \\ \hline \multirow{5}{*}{Rome} & NEUTRAJ$^l$ & \multicolumn{1}{c|}{22.44} & \multicolumn{1}{c|}{97.01} & \multicolumn{1}{c|}{191.55} & 388.27 & \multicolumn{1}{c|}{21.39} & \multicolumn{1}{c|}{101.70} & 
\multicolumn{1}{c|}{192.50} & 387.27 & \multicolumn{1}{c|}{25.46} & \multicolumn{1}{c|}{94.97} & \multicolumn{1}{c|}{196.04} & 390.83 & \multicolumn{1}{c|}{22.54} & \multicolumn{1}{c|}{98.15} & \multicolumn{1}{c|}{192.83} & 386.96 \\ \cline{2-18} & Traj2SimVec$^l$ & \multicolumn{1}{c|}{81.22} & \multicolumn{1}{c|}{421.22} & \multicolumn{1}{c|}{877.58} & 1801.12 & \multicolumn{1}{c|}{81.52} & \multicolumn{1}{c|}{425.17} & \multicolumn{1}{c|}{882.12} & 1800.89 & \multicolumn{1}{c|}{77.31} & \multicolumn{1}{c|}{422.88} & \multicolumn{1}{c|}{882.22} & 1802.69 & \multicolumn{1}{c|}{76.34} & \multicolumn{1}{c|}{418.81} & \multicolumn{1}{c|}{873.00} & 1796.19 \\ \cline{2-18} & T3S$^l$ & \multicolumn{1}{c|}{24.54} & \multicolumn{1}{c|}{100.31} & \multicolumn{1}{c|}{199.38} & 395.73 & \multicolumn{1}{c|}{23.51} & \multicolumn{1}{c|}{97.26} & \multicolumn{1}{c|}{200.27} & 394.36 & \multicolumn{1}{c|}{27.74} & \multicolumn{1}{c|}{96.67} & \multicolumn{1}{c|}{195.80} & 392.66 & \multicolumn{1}{c|}{20.94} & \multicolumn{1}{c|}{97.26} & \multicolumn{1}{c|}{195.04} & 394.57 \\ \cline{2-18} & GTS$^l$ & \multicolumn{1}{c|}{23.54} & \multicolumn{1}{c|}{104.27} & \multicolumn{1}{c|}{196.52} & 395.62 & \multicolumn{1}{c|}{24.19} & \multicolumn{1}{c|}{106.97} & \multicolumn{1}{c|}{198.63} & 394.01 & \multicolumn{1}{c|}{27.51} & \multicolumn{1}{c|}{108.87} & \multicolumn{1}{c|}{199.64} & 398.01 & \multicolumn{1}{c|}{27.21} & \multicolumn{1}{c|}{101.19} & \multicolumn{1}{c|}{193.42} & 393.58 \\ \cline{2-18} & ST2Vec & \multicolumn{1}{c|}{\textbf{21.66}} & \multicolumn{1}{c|}{\textbf{99.34}} & \multicolumn{1}{c|}{\textbf{194.10}} & \textbf{392.46} & \multicolumn{1}{c|}{\textbf{23.64}} & \multicolumn{1}{c|}{\textbf{102.53}} & \multicolumn{1}{c|}{\textbf{198.55}} & \textbf{393.24} & \multicolumn{1}{c|}{\textbf{16.89}} & \multicolumn{1}{c|}{\textbf{95.83}} & \multicolumn{1}{c|}{\textbf{198.12}} & \textbf{394.65} & \multicolumn{1}{c|}{\textbf{16.73}} & \multicolumn{1}{c|}{\textbf{99.37}} & \multicolumn{1}{c|}{\textbf{198.69}} & \textbf{392.71} \\ \hline \end{tabular}} \label{tab:scalability} \end{table*} \begin{figure*} [tb] \centering \includegraphics[width=0.6\textwidth]{Efficiency_Legend.eps}\\ \vspace{-1.5mm} \subfigure[T-drive/TP]{ \includegraphics[width=4.3cm,height=2.5cm]{Efficiency_TP_T.eps}} \subfigure[T-drive/DITA]{ \includegraphics[width=4.3cm,height=2.5cm]{Efficiency_DITA_T.eps}} \subfigure[T-drive/LCRS]{ \includegraphics[width=4.3cm,height=2.5cm]{Efficiency_LCRS_T.eps}} \subfigure[T-drive/NetERP]{ \includegraphics[width=4.3cm,height=2.5cm]{Efficiency_NetERP_T.eps}} \\ \centering \vspace{-2mm} \subfigure[Rome/TP]{ \includegraphics[width=4.3cm,height=2.5cm]{Efficiency_TP_R.eps}} \subfigure[Rome/DITA]{ \includegraphics[width=4.3cm,height=2.5cm]{Efficiency_DITA_R.eps}} \subfigure[Rome/LCRS]{ \includegraphics[width=4.3cm,height=2.5cm]{Efficiency_LCRS_R.eps}} \subfigure[Rome/NetERP]{ \includegraphics[width=4.3cm,height=2.5cm]{Efficiency_NetERP_R.eps}} \\ \vspace{-2mm} \caption{Model Efficiency Evaluation on Offline Model Training and Online Computing Phases} \label{fig:efficiency} \vspace{-2mm} \end{figure*} \subsection{Model Efficiency Study} Next, we study the model efficiency in terms of both offline model training (denoted as training, with the unit seconds/epoch) and online computing (denoted as computing, with the unit seconds/4k trajectories). Fig.~\ref{fig:efficiency} shows the results on T-Drive and Rome. Note that the scale of the y-axis is logarithmic due to the significant performance differences. 
The results on Xi'an are similar and are omitted for brevity. We only compare ST2Vec with the LSTM-guided baselines because they outperform the window-guided baselines and because the TMM-guided baselines are essentially based on our TMM module. As can be seen, ST2Vec performs well for both training and computing. Consider the results for T-drive as an example. During the training phase, ST2Vec finishes each epoch within 40 seconds and runs two times faster than NEUTRAJ, T3S, and GTS, and five times faster than Traj2SimVec. In terms of similarity computation (i.e., testing), we measure the total running time of each method on the test data. Here, ST2Vec also exhibits superior performance (i.e., within 1 second) and is 20 times faster than NEUTRAJ and T3S and two times faster than Traj2SimVec and GTS. \subsection{Model Scalability Study} Next, we explore model scalability when varying the number of trajectories from 10k to 200k. Table~\ref{tab:scalability} shows the results when learning four distance measures on T-Drive and Rome. The results on Xi'an are omitted because they yield similar observations. As can be observed, ST2Vec offers the best scalability for learning-based trajectory similarity computation, as evidenced by three observations. First, the running time increases with the cardinality. Second, ST2Vec offers substantial performance improvements over the existing methods. Third, the performance of ST2Vec is affected less by an increase in cardinality than are the four baselines. Consequently, ST2Vec is capable of large-scale trajectory similarity computation. \subsection{Parameter Sensitivity Study} Further, we evaluate the sensitivity of ST2Vec to assess its robustness. Specifically, we consider the effects on the model performance of the training data size, the number of triplets $N$ constructed for each trajectory, and the spatio-temporal weight $\lambda$. We report results for T-drive only; Rome and Xi'an yield similar observations. \noindent \textbf{Sensitivity to $datasize$.} First, we investigate the effect of the number of training trajectories on the performance of ST2Vec. Fig.~\ref{fig:ModelAnalysisofsize} shows the similarity learning performance (i.e., HR@10, HR@50, R10@50) for the four measures when varying the training data size from 10k to 200k. As can be observed, ST2Vec exhibits stable performance. \noindent \textbf{Sensitivity to $N$.} Second, we investigate model robustness when varying the number of triplets $N$ constructed for each trajectory. Here, we randomly sample 10k trajectories from T-Drive. Then, for each trajectory, we select its 1, 3, 6, 15, and 30 most similar/dissimilar trajectories to construct similarity triplets. Fig.~\ref{fig:ModelAnalysisofk} plots the results using the four distance measures. As can be observed, HR@10, HR@50, and R10@50 all increase slightly as $N$ grows, which offers evidence that ST2Vec is capable of achieving good performance even with limited training samples. \noindent \textbf{Sensitivity to $\lambda$.} Finally, we perform a sensitivity analysis of the spatio-temporal weight $\lambda$ used in Eq.~1. When $\lambda = 1$, only the spatial domain is considered, and when $\lambda = 0$, the similarity computation considers the temporal domain only. Fig.~\ref{fig:ModelAnalysisoflambda} shows that HR@10, HR@50, and R10@50 remain stable across different settings of $\lambda$, indicating that ST2Vec works well with different $\lambda$ preferences.
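To make the role of $\lambda$ concrete, the following is a minimal Python sketch of the weighted combination in Eq.~1 that this sensitivity study varies; the function name and the assumption of precomputed component distances are ours, not part of ST2Vec itself.

\begin{verbatim}
def spatio_temporal_distance(d_spatial: float, d_temporal: float,
                             lam: float = 0.5) -> float:
    """Combine spatial and temporal distances with weight lam in [0, 1].

    lam = 1.0 considers only the spatial domain; lam = 0.0 considers
    only the temporal domain, matching the sensitivity study above.
    """
    assert 0.0 <= lam <= 1.0
    return lam * d_spatial + (1.0 - lam) * d_temporal

# Example: equal weighting of a spatial and a temporal distance.
print(spatio_temporal_distance(0.8, 0.2, lam=0.5))  # 0.5
\end{verbatim}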
\begin{figure*} [tb] \centering \hspace{-0.25cm} \subfigure[T-drive/TP]{ \includegraphics[width=4.3cm,height=3cm]{ACC_Number_TP.eps}} \subfigure[T-drive/DITA]{ \includegraphics[width=4.3cm,height=3cm]{ACC_Number_DITA.eps}} \subfigure[T-drive/LCRS]{ \includegraphics[width=4.3cm,height=3cm]{ACC_Number_LCRS.eps}} \subfigure[T-drive/NetERP]{ \includegraphics[width=4.3cm,height=3cm]{ACC_Number_NetERP.eps}} \\ \vspace{-2mm} \caption{Performance of ST2Vec under Varying Training Data Size} \label{fig:ModelAnalysisofsize} \vspace{-3mm} \end{figure*} \begin{figure*} [tb] \centering \hspace{-0.25cm} \subfigure[T-drive/TP]{ \includegraphics[width=4.3cm,height=3cm]{ACC_N_TP.eps}} \subfigure[T-drive/DITA]{ \includegraphics[width=4.3cm,height=3cm]{ACC_N_DITA.eps}} \subfigure[T-drive/LCRS]{ \includegraphics[width=4.3cm,height=3cm]{ACC_N_LCRS.eps}} \subfigure[T-drive/NetERP]{ \includegraphics[width=4.3cm,height=3cm]{ACC_N_NetERP.eps}} \\ \vspace{-2mm} \caption{Performance of ST2Vec under Varying Number of Triplets $N$ for Each Trajectory} \label{fig:ModelAnalysisofk} \vspace{-3mm} \end{figure*} \begin{figure*} [tb] \centering \hspace{-0.25cm} \subfigure[T-drive/TP]{ \includegraphics[width=4.3cm,height=3cm]{ACC_l_TP.eps}} \subfigure[T-drive/DITA]{ \includegraphics[width=4.3cm,height=3cm]{ACC_l_DITA.eps}} \subfigure[T-drive/LCRS]{ \includegraphics[width=4.3cm,height=3cm]{ACC_l_LCRS.eps}} \subfigure[T-drive/NetERP]{ \includegraphics[width=4.3cm,height=3cm]{ACC_l_NetERP.eps}} \\ \vspace{-2mm} \caption{Performance of ST2Vec under Varying Spatio-Temporal Weight $\lambda$} \label{fig:ModelAnalysisoflambda} \vspace{-3mm} \end{figure*} \begin{figure}[tb] \centering \subfigure[T-Drive]{ \includegraphics[width=0.23\textwidth]{Attention_T.eps}} \hspace{-0.25mm} \subfigure[Rome]{ \includegraphics[width=0.23\textwidth]{Attention_R.eps}} \vspace{-5mm} \caption{ST2Vec Performance vs. with/without Attention} \vspace{-2mm} \label{fig:ModelAnalysisofattention} \end{figure} \begin{figure}[tb] \centering \subfigure[T-Drive]{ \includegraphics[width=0.23\textwidth]{Fusion_T.eps}} \hspace{-0.25mm} \subfigure[Rome]{ \includegraphics[width=0.23\textwidth]{Fusion_R.eps}} \vspace{-5mm} \caption{ST2Vec Performance vs. Fusion Manners} \label{fig:ModelAnalysisoffusion} \vspace{-3mm} \end{figure} \begin{figure}[tb] \centering \subfigure[T-Drive]{ \includegraphics[width=0.23\textwidth]{Epoch_TP.eps}} \hspace{-0.25mm} \subfigure[Rome]{ \includegraphics[width=0.23\textwidth]{Epoch_DITA.eps}} \vspace{-5mm} \caption{Convergence Curves of ST2Vec over 20 Epochs} \label{fig:ModelAnalysisofEpoch} \vspace{-3mm} \end{figure} \subsection{Ablation Study} \noindent \textbf{ST2Vec Performance vs. with/without Attention.} To study the effect of the attention mechanism on performance, we remove it from ST2Vec and call the resulting model ST2Vec-No-Att. The HR@50 results on T-Drive, shown in Fig.~\ref{fig:ModelAnalysisofattention}, indicate that the spatial and temporal attention mechanisms are effective. Taking TP as an example, ST2Vec improves HR@50 over ST2Vec-No-Att from 0.51 to 0.58. \noindent \textbf{ST2Vec Performance vs. Fusion Approach.} Second, to evaluate the effect of the fusion approach on model performance, we train ST2Vec using separate fusion (SF) and unified fusion (UF). Fig.~\ref{fig:ModelAnalysisoffusion} shows that ST2Vec using unified fusion achieves similar effectiveness to that using separate fusion.
However, ST2Vec-UF achieves faster model convergence than ST2Vec-SF, as SF features two separate LSTM models, resulting in twice as many parameters to tune as UF. \noindent \textbf{ST2Vec Performance vs. Curriculum/Random.} Finally, to evaluate the effect of curriculum learning on model performance, we consider all four distances. Fig.~\ref{fig:ModelAnalysisofEpoch} shows that the learning process guided by curriculum learning achieves faster convergence and higher computational quality (i.e., HR@50) than does random batch learning. This is because the curriculum strategy trains the model directionally by feeding it training samples that vary from easy to hard (as discussed in Section~\ref{sec:method}-D). \subsection{Efficiency Acceleration Study} As a follow-up on the analysis in Section~\ref{sec:method}-E, we compare similarity computation using our ST2Vec-based method and a traditional pairwise method. Table~\ref{table:time} reports the average time cost of performing a top-50 spatio-temporal similarity search for each query trajectory at different data sizes, comparing the query efficiency of ST2Vec and the non-learning method. As can be observed, ST2Vec achieves 200--400x speedups over the non-learning-based method. \begin{figure*}[tb] \vspace{-3mm} \centering \includegraphics[width=1\textwidth]{Case.eps} \vspace{-9mm} \caption{Case Studies: Top-$k$ Querying and Clustering} \label{fig:CaseStudies} \vspace{-3mm} \end{figure*} \subsection{Case Study} We proceed to perform trajectory top-$k$ querying and clustering using T-Drive to examine the capabilities of ST2Vec intuitively. For top-$k$ querying, we randomly choose one trajectory as the query trajectory. Then we plot its top-2 ground-truth trajectories according to TP as well as its top-2 similar trajectories returned by ST2Vec. The left part of Fig.~\ref{fig:CaseStudies} plots the different trajectories and shows that the trajectories returned by ST2Vec match the ground-truth trajectories very well. Next, we explore the effectiveness of ST2Vec using DBSCAN clustering when fixing the parameter \textit{minPts} at 10. Here, we also use TP. We compare the clustering results generated by the ground-truth and embedding-based distances. As shown in the right part of Fig.~\ref{fig:CaseStudies}, the numbers of clusters in the two results share similar trends as $\epsilon$ grows, meaning that ST2Vec also works well for clustering analyses. \begin{table}[t] \caption{Time Cost of Online Similarity Search on T-Drive} \vspace{-2.5mm} \begin{tabular}{p{1.2cm}<{\centering}|p{2cm}<{\centering}|p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}} \hline Measures & Methods & 1k & 5k & 10k & 200k \\ \hline \multirow{2}{*}{TP} & Non-learning & 1.492s & 3.127s & 5.893s & 117.832s \\ & \textbf{ST2Vec} & 0.004s & 0.014s & 0.028s & 0.521s \\ \hline \multirow{2}{*}{DITA} & Non-learning & 0.921s & 3.301s & 6.291s & 125.826s \\ & \textbf{ST2Vec} & 0.004s & 0.015s & 0.028s & 0.522s \\ \hline \multirow{2}{*}{LCRS} & Non-learning & 1.292s & 4.614s & 8.784s & 175.824s \\ & \textbf{ST2Vec} & 0.004s & 0.014s & 0.028s & 0.525s \\ \hline \multirow{2}{*}{NetERP} & Non-learning & 1.535s & 6.246s & 12.674s & 253.481s \\ & \textbf{ST2Vec} & 0.004s & 0.015s & 0.028s & 0.522s \\ \hline \end{tabular} \label{table:time} \vspace{-5mm} \end{table} \section{The ST2Vec Approach} \label{sec:method} We first detail the three modules. Then, we describe the training process of representation-based trajectory similarity learning.
Finally, we provide an analysis of the ST2Vec approach. \begin{figure}[t] \centering \hspace{-2mm} \includegraphics[width=0.5\textwidth]{Training.eps} \vspace{-4mm} \caption{Architecture and Training Scheme of ST2Vec} \label{fig:architecture} \vspace{-4mm} \end{figure} \begin{figure*}[tb] \centering \vspace{-3mm} \includegraphics[width=0.98\textwidth]{Framework.eps} \vspace{-3mm} \caption{An Overview of the ST2Vec with \textit{Unified Embedding}} \label{fig:overview} \vspace{-4mm} \end{figure*} \subsection{Temporal Modeling Module (TMM)} To capture the correlations between a pair of temporal trajectories ($T_i^{(t)}$, $T_j^{(t)}$), it is natural to use state-of-the-art sequence models such as RNNs, LSTMs, or their variants to embed temporal trajectories into vectors. However, this alone fails to handle the periodic and non-periodic temporal patterns of time. \noindent \textbf{Basic idea.} To achieve fine-grained temporal representation learning, we integrate \textit{time embedding} with \textit{temporal sequence embedding} to construct a trajectory-aware temporal sequence modeling module. Further, we notice that different time points may have different importance, e.g., rush hour vs. late night. Thus, we further introduce an attention function to enhance the representation of temporal irregularity. \subsubsection{\textbf{Time Embedding}} Inspired by position embedding in BERT~\cite{WangSLJYLS21}, for each time point $t$ in a temporal trajectory, we learn its time embedding $t'$, which is a vector of size $q + 1$. \begin{equation} \label{eq:time2vec} t'[i]= \begin{cases}\omega_{i} t+\varphi_{i}, & \text { if } i=0 \\ \cos \left(\omega_{i} t+\varphi_{i}\right), & \text { if } 1 \leq i \leq q\end{cases} \end{equation} Here, $t'[i]$ denotes the $i$-th element of $t'$, $\omega_0, ..., \omega_q$ and $\varphi_0, ..., \varphi_q$ are learnable parameters, and $\cos(\cdot)$ serves as a periodic activation function that helps capture periodic behaviors without the need for feature engineering. For $1 \leq i \leq q$, $\omega_i$ and $\varphi_i$ are the frequency and the phase shift of the cosine function; thus, the period of the cosine function is $\frac{2\pi}{\omega_i}$, i.e., it has the same value at $t$ and $t+\frac{2\pi}{\omega_i}$. The linear term represents the progression of time and can be used for capturing non-periodic patterns in the input that depend on time. Based on this, we can embed a temporal trajectory $T^{(t)}$ into a sequence of time vectors, i.e., $\langle t_1, t_2, ..., t_m \rangle \rightarrow \langle t_1', t_2', ..., t_m' \rangle$. \subsubsection{\textbf{Temporal Sequence Embedding}} As illustrated in Fig.~\ref{fig:overview}, if we remove the spatio-temporal co-attention fusion module, after embedding each time point in a trajectory, we could feed $\langle t_1', t_2', ..., t_m' \rangle$ to an LSTM to model its temporal dependence. The recurrent step of an LSTM is performed as follows. At each step $i$, an LSTM cell takes as input the current input vector $x_i$ and the state of the previous step $h_{i-1}$, and it outputs the state vector of the current step $h_i$. \begin{equation} h_{i}=\operatorname{LSTM}\left(t_{i}^\prime, h_{i-1}, i_i, f_i, o_i, m_i\right), \end{equation} \noindent where $i_i$, $f_i$, $o_i$, and $m_i$ represent an input gate, a forget gate, an output gate, and a memory cell, respectively. More details on LSTMs are available elsewhere~\cite{BreuerEJBHF19}.
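To make the time-embedding step concrete, the following is a minimal PyTorch sketch of Eq.~\ref{eq:time2vec}; the class name, the random parameter initialization, and the choice $q = 63$ are illustrative assumptions rather than part of ST2Vec.

\begin{verbatim}
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    """Time embedding of Eq. (time2vec): one linear (non-periodic)
    component plus q cosine (periodic) components with learnable
    frequencies omega and phase shifts phi."""

    def __init__(self, q: int):
        super().__init__()
        self.omega = nn.Parameter(torch.randn(q + 1))  # frequencies
        self.phi = nn.Parameter(torch.randn(q + 1))    # phase shifts

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (...,) scalar timestamps; output: (..., q + 1)
        x = t.unsqueeze(-1) * self.omega + self.phi
        linear = x[..., :1]               # i = 0: omega_0 * t + phi_0
        periodic = torch.cos(x[..., 1:])  # 1 <= i <= q
        return torch.cat([linear, periodic], dim=-1)

# Embed a temporal trajectory <t_1, ..., t_m> into time vectors.
emb = TimeEmbedding(q=63)                    # vectors of size 64
traj_times = torch.tensor([0.0, 30.0, 95.0])
print(emb(traj_times).shape)                 # torch.Size([3, 64])
\end{verbatim}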
In the context of our LSTM layer, $t_i^\prime$ is the learned time embedding that corresponds to the original time $t_i$. The LSTM unit exploits the embedded time, the hidden state, and the cell state from the previous step to compute the new hidden state and to update the cell state. Eventually, we treat the last hidden state $h_{m}$ as the deep temporal trajectory representation because it contains all temporal information of the trajectory. Overall, a temporal information preserving representation is learned by the recurrent procedure that processes the time points and captures the correlations among them. \subsubsection{\textbf{Decoupled Attention}} Different time points in a trajectory have different weights in computations. To contend with this, we employ attention mechanisms to capture the correlations between trajectory points to improve model effectiveness, to be verified experimentally. Specifically, we propose a self-attention mechanism to compute the attention score between time points in the same trajectory as follows. \begin{equation} \tilde{h}_{i}^{(p)}=\sum_{k=1}^{i} \operatorname{att}\left(h_{i}^{(p)}, h_{k}^{(p)}\right) \cdot h_{k}^{(p)} \end{equation} Here, $\tilde{h}_{i}^{(p)}$ denotes the improved state representation, and att$(\cdot,\cdot)$ is an attention function: \begin{equation} \operatorname{att}\left(h_{i}^{(p)}, h_{k}^{(p)}\right)=\frac{\exp \left(\alpha_{i, k}\right)}{\sum_{k^{\prime}=1}^{i} \exp \left(\alpha_{i, k^{\prime}}\right)} \end{equation} where $\alpha_{i, k}= w_{1}^{\top} \cdot \tanh \left(W_{1} \cdot h_{k}^{(p)}+ W_{2} \cdot h_{i}^{(p)}\right)$, $w_1$ is a parameter vector, and $W_1$ and $W_2$ are parameter matrices to learn. By incorporating the attention mechanism into the temporal sequence embedding, we can discover more important time points, in turn improving model performance, to be confirmed experimentally. Note that we also use the hidden representation of the last step to encode the full temporal trajectory. \subsection{Spatial Modeling Module (SMM)} \noindent \textbf{SMM vs. Previous Studies.} Since several studies exist on spatial trajectory modeling, we first detail the main difference between them and ST2Vec. Most of the previous studies~\cite{seed, subsimilar, YangW0Q0021} measure trajectory similarities in free space. In this setting, RNN-type models are adopted widely to capture the sequence information for spatial similarity representation learning. However, moving objects such as people and vehicles move in road networks~\cite{ShangCWJZK18}, in which case these studies do not reflect the real distances between trajectories due to the movement restrictions imposed by road networks. Further, such restrictions cannot be learned by single RNN models. To this end, the state-of-the-art study~\cite{HanWYS021} combines GNNs with LSTMs for road-network constrained trajectory representation learning and achieves state-of-the-art similarity learning performance. However, it is designed specifically for POI (Points of Interest) based similarity computation. That is, the study~\cite{HanWYS021} treats two trajectories $T_i$ and $T_j$ as similar if they share the same POIs. This approach gives more significance to POIs while ignoring detailed travel paths, which might yield inaccuracies when measuring the similarity between trajectories that share the same POIs but have different moving paths.
In contrast to all of the above studies, we target fine-grained spatial similarity learning in road networks, which considers both the locations (i.e., sampling points) and paths when evaluating the similarity between two trajectories. \noindent \textbf{Basic idea.} Given a spatial trajectory $T^{(s)} = \langle l_1, l_2, ..., l_m \rangle$ (where the $l_i$ are vertices in $G$), we aim to embed $T^{(s)}$ as a vector $v_{T^{(s)}}$ in low-dimensional space that captures its road-network constrained spatial information. Due to the spatial dependencies in the underlying road network, it is natural to utilize GNNs to take into account the structure of $G$, as GNNs have been used successfully in road-network settings like region classification~\cite{YangWWCW19} and traffic prediction~\cite{ZhengFW020}. Hence, to achieve spatial similarity oriented representation learning, we develop a spatial modeling module (SMM), which also encompasses three phases, i.e., location embedding, spatial sequence embedding, and spatial attention. \subsubsection{\textbf{Location Embedding}} Trajectories of objects (e.g., people and vehicles) moving in road networks are constrained by the topology of the road network. Thus, the distance between two spatially close sampling points can still be large if the points are not well connected in the road network. To capture topological, or structural, information, we first utilize the Node2Vec~\cite{node2vec-kdd2016} method, which aims to capture the co-occurrence of adjacent locations in road networks. Specifically, given a vertex $l_i$, we adopt Node2Vec to approximate the spatial conditional probability of vertices in its neighborhood, i.e., we perform the mapping $l_i \rightarrow n_i$, where $l_i$ and $n_i$ denote the original and embedded locations, respectively. Then, locations sharing similar neighborhoods tend to have similar embeddings. Next, we feed the embedded locations (i.e., the $n_i$) to a GNN step by step to obtain locally smoothed location embeddings, where spatially adjacent locations tend to be close in the latent space. Given a road network $G$ and a low-dimensional representation $n_i$ of location $l_i \in G$, we define the GCN function as follows. \begin{equation} \label{eq:GCN} l_{i}^\prime=\operatorname{GCN}\left( n_i\right)=\sigma\left(\left(\sum_{j \in \mathcal{N}_{i}} c_{i j} W _{s} n_j\right) \| n_i\right) \end{equation} Here, $l_i^\prime$ is a vertex/location representation, $\sigma$ is a non-linear activation function, $c_{ij}$ is an adjacency weight, $W_{s} \in \mathcal{R}^{d \times d}$ is a learnable matrix shared by all vertices in $G$, $||$ denotes the concatenation operation, and $\mathcal{N}_i$ is the set of neighbor vertices of $n_i$ in $G$. Based on Node2Vec and Eq.~\ref{eq:GCN}, we obtain a fine-grained representation of each spatial trajectory, i.e., $\langle l_1, l_2, ..., l_m \rangle \rightarrow \langle l_1^\prime, l_2^\prime, ..., l_m^\prime \rangle$. \subsubsection{\textbf{Spatial Sequence Embedding $\&$ Attention}} As illustrated in Fig.~\ref{fig:overview}, if we remove the spatio-temporal co-attention fusion module, we can obtain a sequence of location vectors as input for the LSTM model. The spatial sequence embedding here is similar to the temporal sequence embedding in Eq.~4. Given a spatial trajectory $T^{(s)}$, based on Node2Vec and Eq.~\ref{eq:GCN}, we first obtain its initialized location sequence representation and feed it to an LSTM model to encode the spatial information.
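For illustration, the following is a minimal PyTorch sketch of the location-embedding step in Eq.~\ref{eq:GCN} followed by the LSTM encoding just described. The dense-adjacency formulation, the class and variable names, and the toy dimensions are our assumptions, and the Node2Vec initialization is assumed to be precomputed.

\begin{verbatim}
import torch
import torch.nn as nn

class LocationGCNLayer(nn.Module):
    """One GCN step: aggregate Node2Vec-initialized neighbor embeddings
    n_j with adjacency weights c_ij through a shared matrix W_s,
    concatenate with n_i, and apply a non-linearity."""

    def __init__(self, d: int):
        super().__init__()
        self.W_s = nn.Linear(d, d, bias=False)  # shared by all vertices
        self.act = nn.ReLU()

    def forward(self, n: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # n: (V, d) Node2Vec embeddings; adj: (V, V) weights c_ij
        aggregated = adj @ self.W_s(n)          # sum_j c_ij * W_s n_j
        return self.act(torch.cat([aggregated, n], dim=-1))  # (V, 2d)

d = 32
gcn = LocationGCNLayer(d)
lstm = nn.LSTM(input_size=2 * d, hidden_size=64, batch_first=True)

n = torch.randn(100, d)                      # 100 embedded road vertices
adj = torch.rand(100, 100) * (torch.rand(100, 100) < 0.05)  # toy weights
loc = gcn(n, adj)                            # smoothed location embeddings
spatial_traj = loc[[3, 17, 42, 58]].unsqueeze(0)  # one 4-vertex trajectory
_, (h_last, _) = lstm(spatial_traj)
v_spatial = h_last[-1]                       # trajectory's spatial embedding
\end{verbatim}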
Further, a self-attention mechanism is applied to capture the different contributions of the different locations in the learning process. We do this because different location points in a trajectory contribute differently to the similarity computation. For instance, noisy location points with obvious deviations from other points typically have a high influence on the similarity computation. Finally, we use the hidden state of the last step of the LSTM model as the spatial embedding. \subsection{Spatio-Temporal Co-attention Fusion (STCF)} Next, we propose to fuse the hidden spatial and temporal information of trajectories to generate spatio-temporally oriented embeddings. We propose a spatio-temporal co-attention fusion module that uses two fusion strategies. \subsubsection{Separate Fusion (SF)} Based on \textit{temporal sequence embedding} (Section~\ref{sec:method}-A) and \textit{spatial sequence embedding} (Section~\ref{sec:method}-B), we could embed temporal and spatial trajectories separately. Thus, a straightforward approach is to first generate spatial and temporal embeddings of trajectories with two separate LSTM models and then combine the two types of embeddings. Given a trajectory $T$ with its initial temporal embedding $(t_1^\prime, t_2^\prime, ..., t_m^\prime)$ and initial spatial embedding $(l_1^\prime, l_2^\prime, ..., l_m^\prime)$, we define the spatio-temporal trajectory embedding based on the separate fusion as: \begin{equation} v_T =\textit{LSTM}_t(t_1^\prime, t_2^\prime, ..., t_m^\prime) + \textit{LSTM}_s(l_1^\prime, l_2^\prime, ..., l_m^\prime) \end{equation} Although this approach is simple and effective, it requires two LSTM models to separately capture the temporal information and spatial information, doubling the number of parameters that need to be learned in LSTMs. To improve model convergence/efficiency, we propose another fusion strategy. \subsubsection{Unified Fusion (UF)} Given a trajectory, based on the aforementioned procedure of \textit{time embedding} and \textit{location embedding}, we could obtain its initial temporal sequence embedding, denoted by $\tau^{(t)} = \langle t_1^\prime, t_2^\prime, ..., t_m^\prime \rangle$, and its initial spatial sequence embedding, denoted by $\tau^{(s)} = \langle l_1^\prime, l_2^\prime, ..., l_m^\prime \rangle$. Since these representations capture different dimensions of trajectory properties, we design a co-attention fusion module to enhance them by letting them interact with each other, as depicted in Fig.~\ref{fig:overview}. Specifically, we first transform the temporal and spatial features via a matrix $W_F$.
\begin{equation} z_{\tau}^1=W_{F} {\tau}^{(t)}, \quad z_{\tau}^2=W_{F} {\tau}^{(s)} \end{equation} The interaction between the two representations is calculated by \begin{equation} \begin{gathered} \beta_{i, j}=\frac{\exp \left(W_{Q}^{\prime} z_{\tau}^{i} \cdot \left(W_{K}^{\prime} z_{\tau}^{j}\right)^{T}\right)}{\sum_{j^{\prime} \in\{1,2\}} \exp \left(W_{Q}^{\prime} z_{\tau}^{i} \cdot \left(W_{K}^{\prime} z_{\tau}^{j^{\prime}}\right)^{T}\right)}, \\ \hat{\tau}^{(t)}=\operatorname{Norm}\left(\textit{FFN}^{\prime}\left(\beta_{1,1} z_{\tau}^{1}+\beta_{1,2} z_{\tau}^{2}\right)+\tau^{(t)}\right), \\ \hat{\tau}^{(s)}=\operatorname{Norm}\left(\textit{FFN}^{\prime}\left(\beta_{2,1} z_{\tau}^{1}+\beta_{2,2} z_{\tau}^{2}\right)+\tau^{(s)}\right) \end{gathered} \end{equation} Here, $W_Q^{\prime}$ and $W_K^{\prime}$ are matrices with the same shape as $W_F$, and $\hat{\tau}^{(t)}$ and $\hat{\tau}^{(s)}$ are the enhanced representations of ${\tau}^{(t)}$ and ${\tau}^{(s)}$. As shown in Fig.~\ref{fig:overview}, we then feed the enhanced initial temporal and spatial sequence embeddings into the same, single LSTM architecture for unified spatio-temporal trajectory embedding. This fusion manner is formally defined as follows. \begin{equation} v_T = \textit{LSTM}(\hat{\tau}^{(t)}, \hat{\tau}^{(s)}) \end{equation} \subsection{Training and Model Optimization}\label{data} \subsubsection{Training Data Selection} Training sample selection is essential to similarity learning~\cite{FaghriFKF18}. Recall that we aim to minimize the difference between the learned similarity $\mathcal{G}(v_{T_i}, v_{T_j})$ and the ground truth similarity $\mathcal{D}(T_i, T_j)$, where $\mathcal{G}$ denotes the target neural network and $\mathcal{D}$ represents some chosen distance measure. Hence, training samples $(T_i, T_j)$ are required. Guided by the similarities generated from training samples, ST2Vec trains a neural network to yield embeddings that approximate the chosen similarity function as defined in Eq.~1. A simple approach is to use all pairs of trajectories as training samples, but this incurs excessive training costs and causes overfitting. Thus, given a trajectory dataset, how to select samples to supervise the training process is important. \noindent \textbf{Selection Strategy.} Given a trajectory dataset, we randomly select one trajectory as an anchor $T_a$ and sample a similar (resp. dissimilar) trajectory as its positive $T_p$ (resp. its negative $T_n$) trajectory. Such a triple of an anchor, a positive, and a negative trajectory forms a similarity triplet $(T_a, T_p, T_n)$. The triplets provide trajectory samples in terms of similarities and dissimilarities, making the trained model effective and robust. This type of sampling is used widely in image classification~\cite{WangWZJGZL018} and text clustering~\cite{XieGF16}. Specifically, as depicted in Fig.~\ref{fig:architecture}, when we select an anchor trajectory, we find its $N$ most similar trajectories as \textbf{similar ones}. Disregarding the similar trajectories, we randomly select $N$ other trajectories as \textbf{dissimilar ones}. Such a sampling strategy provides a trade-off between robustness and efficiency (see the sketch below). \subsubsection{Training Process} In the training data selection phase, we obtain representative similarity triplets $(T_a, T_p, T_n)$.
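As an illustration, the following is a minimal sketch of the selection strategy described above, assuming a precomputed ground-truth distance matrix over the training trajectories (e.g., under TP or DITA); all function and variable names are ours.

\begin{verbatim}
import random
import numpy as np

def build_triplets(dist: np.ndarray, num_anchors: int, N: int, seed: int = 0):
    """Construct (anchor, positive, negative) index triplets from a
    precomputed ground-truth distance matrix `dist`."""
    rng = random.Random(seed)
    n = dist.shape[0]
    triplets = []
    for _ in range(num_anchors):
        a = rng.randrange(n)
        order = np.argsort(dist[a])       # ascending distance from anchor
        positives = [int(j) for j in order[:N + 1] if j != a][:N]
        pool = [j for j in range(n) if j != a and j not in set(positives)]
        negatives = rng.sample(pool, N)   # N randomly chosen dissimilar ones
        triplets += list(zip([a] * N, positives, negatives))
    return triplets

# Toy usage: 100 trajectories, 10 anchors, N = 3 triplets per anchor.
toy_dist = np.random.rand(100, 100)
print(len(build_triplets(toy_dist, num_anchors=10, N=3)))  # 30
\end{verbatim}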
In the modeling phase, ST2Vec embeds the triples of trajectories considering both temporal and spatial aspects, i.e., $f_\theta^{(t, s)}: (T_a^{(t, s)}, T_p^{(t, s)}, T_n^{(t, s)}) \rightarrow (v_a^{(t, s)}, v_p^{(t, s)}, v_n^{(t, s)})$, where superscripts $t$ and $s$ denote the temporal and spatial aspects, respectively, and $f_\theta^{(t, s)}$ provides the functionality of $\mathcal{G}$. Also, the spatio-temporal similarities can be computed by the $L_2$ norm based on $\|v_{a}-v_{p}\|_{2}$ and $\|v_{a}-v_{n}\|_{2}$. The ground truth similarity $\mathcal{D}_{{a}, {p}}$ can be normalized as $\mathcal{D}_{a, p}^{\prime}=\exp (-\alpha \cdot \mathcal{D}_{a, p}) \in[0,1]$, and the dissimilarity $\mathcal{D}_{{a}, {n}}$ can be normalized as $\mathcal{D}_{a, n}^{\prime}=\exp (-\alpha \cdot \mathcal{D}_{a, n}) \in[0,1]$. Note that $\alpha$ is a tunable parameter, making it possible to control the scale of similarity values. \noindent \textbf{Loss Function.} We define a space- and time-aware loss function $\mathcal{L}$ that measures the weighted sum of squared errors of similarity triplets. \begin{equation} \begin{aligned} \mathcal{L} = & \mathcal{D}_{a, p}^{\prime}\left(\mathcal{D}_{a, p}^{\prime}-\exp \left(-\left\|v_{a}-v_{p}\right\|_{2}\right)\right)^{2} \\ &+\mathcal{D}_{a, n}^{\prime}\left(\mathcal{D}_{a, n}^{\prime}-\exp \left(-\left\|v_{a}-v_{n}\right\|_{2}\right)\right)^{2} \end{aligned} \end{equation} As before, subscripts $a$, $p$, and $n$ indicate anchor, positive, and negative, respectively, and $\mathcal{D}$ is the spatio-temporal similarity function defined in Eq.~1. It is worth mentioning that this design combines the spatial similarity and the temporal similarity into a unified measure, which enables ST2Vec to adapt to varying spatial and temporal weights according to different preferences, regardless of what $\lambda$ is. \subsubsection{Training Optimization} We observe that the existing trajectory similarity learning methods typically use random training instances for learning and often converge slowly. Recent studies in text generation~\cite{abs-2102-03554}, translation~\cite{LiuLWC20}, and object detection~\cite{SovianyIRS21} suggest that using training samples from easy to hard, i.e., first training easy ones and then hard ones, benefits the learning process. Such an organization, inspired by human learning, is referred to as curriculum learning. In view of this, given a trajectory anchor $T_a$ and its $N$ similar (dissimilar) ones, we can get $N$ training triplets. As shown in Fig.~\ref{fig:architecture}, we can order the triplets with the easy ones first (i.e., the most dissimilar to $T_a$), followed by the hard ones (i.e., the most similar to $T_a$). Then, we feed those triplets to the model from easy to hard. This way, ST2Vec achieves faster convergence and higher accuracy, to be validated experimentally.
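For concreteness, the following is a minimal PyTorch sketch of the loss $\mathcal{L}$ above; the function name and the default value of $\alpha$ are illustrative assumptions.

\begin{verbatim}
import torch

def st2vec_triplet_loss(v_a, v_p, v_n, d_ap, d_an, alpha: float = 1.0):
    """Weighted squared-error loss over a batch of similarity triplets.

    v_a, v_p, v_n: (batch, d) anchor/positive/negative embeddings.
    d_ap, d_an:    (batch,) ground-truth distances D_{a,p} and D_{a,n}.
    """
    d_ap_n = torch.exp(-alpha * d_ap)   # normalized similarity D'_{a,p}
    d_an_n = torch.exp(-alpha * d_an)   # normalized similarity D'_{a,n}
    g_ap = torch.exp(-torch.norm(v_a - v_p, dim=-1))  # learned similarity
    g_an = torch.exp(-torch.norm(v_a - v_n, dim=-1))
    loss = d_ap_n * (d_ap_n - g_ap) ** 2 + d_an_n * (d_an_n - g_an) ** 2
    return loss.mean()

# Toy usage with random embeddings and distances.
v_a, v_p, v_n = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
print(float(st2vec_triplet_loss(v_a, v_p, v_n,
                                d_ap=torch.rand(8), d_an=torch.rand(8) + 1.0)))
\end{verbatim}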
\begin{table*}[tb] \vspace{-5mm} \caption{Comparison of Similarity Learning on TP, DITA, LCRS, and NetERP Distances Using the T-Drive Dataset} \vspace{-2.5mm} \hspace{-5mm} \begin{tabular}{p{1.4cm}<{\centering}|p{1.6cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}|p{0.82cm}<{\centering}p{0.82cm}<{\centering}p{0.86cm}<{\centering}} \hline \multirow{2}{*}{Category} & \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{TP [22]} & \multicolumn{3}{c|}{DITA [26]} & \multicolumn{3}{c|}{LCRS [44]} & \multicolumn{3}{c}{NetERP [14]} \\ \cline{3-14} & & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 & HR@10 & HR@50 & R10@50 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Window\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{w}$ & 0.0978 &0.1373 &0.1582 &0.0805 &0.1243 &0.1442 &0.0357 &0.0419 &0.0861 &0.0054 &0.0173 &0.0198 \\ & Traj2SimVec$^\textit{w}$ & 0.0827 &0.1261 &0.1397 &0.053 &0.0682 &0.1151 &0.016 &0.098 &0.1861 &0.0209 &0.0986 &0.1010 \\ & T3S$^\textit{w}$ &0.1295 &0.1733 &0.2045 &0.0838 &0.1266 &0.1489 &0.0435 &0.0678 &0.1187 &0.01253 &0.0292 &0.0388 \\ & GTS$^\textit{w}$ & 0.3034 &0.3980 &0.6975 &0.1178 &0.2223 &0.3991 &0.0188 &0.0538 &0.0652 &0.0252 &0.0408 &0.0505 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}LSTM\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{l}$ & 0.1765 &0.2221 &0.2703 &0.0767 &0.1103 &0.1340 &0.0533 &0.1126 &0.1694 &0.0259 &0.0502 &0.0736 \\ & Traj2SimVec$^\textit{l}$ & 0.1446 &0.1902 &0.2263 &0.05261 &0.0642 &0.1071 &0.0329 &0.1397 &0.2257 &0.0328 &0.1050 &0.1244 \\ & T3S$^\textit{l}$ & 0.1535 &0.1984 &0.2382 &0.0806 &0.1191 &0.1422 &0.0486 &0.0904 &0.1445 &0.0193 &0.0398 &0.0563 \\ & GTS$^\textit{l}$ &0.3709 &0.4756 &0.7965 &0.1277 &0.2321 &0.4143 &0.0360 &0.1074 &0.1342 &0.03984 &0.0655 &0.0894 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Our TMM\\ Guided \\ Baselines\end{tabular}} & NEUTRAJ$^\textit{t}$ & 0.3371 & 0.4091 & 0.7001 & 0.1412 & 0.2719 & 0.4892 & 0.0924 & 0.2848 & 0.3632 & 0.1086 & 0.1832 & 0.2841 \\ & Traj2SimVec$^\textit{t}$ & 0.3987 & 0.5364 & 0.6593 & 0.1321 & 0.3072 & 0.3643 & 0.0968 & 0.2826 & 0.3741 & 0.2128 & 0.3212 & 0.5553 \\ & T3S$^\textit{t}$ & 0.3944 & 0.5011 & 0.7917 & 0.1284 & 0.2288 & 0.4073 & 0.1442 & 0.4331 & 0.5672 & 0.1464 & 0.2767 & 0.4077 \\ & GTS$^\textit{t}$ & 0.4243 & 0.5640 & 0.8026 & 0.3244 & 0.4370 & 0.6381 & 0.1643 & 0.4427 & 0.6242 & 0.2154 & 0.3477 & 0.5343 \\ \hline \multirow{1}{*}{Our Methods} & ST2Vec & \textbf{0.4624} & \textbf{0.5868} & \textbf{0.8361} & \textbf{0.3773} & \textbf{0.5037} & \textbf{0.7031} &\textbf{0.1806} & \textbf{0.5469} & \textbf{0.7293} & \textbf{0.2386} & \textbf{0.3493} & \textbf{0.6133} \\ \hline \end{tabular} \label{tab:comparisonTdrive} \vspace{0mm} \end{table*} \subsection{Approach Analysis} Let us return to the essence of similarity learning: it aims to reduce the time complexity of traditional measures, which perform pair-wise computations on original GPS trajectories, by computing similarities based on embedding vectors. To compute the similarity between a pair of trajectories in a road network, the time complexity of pair-wise based methods is $O((|E| + |L|\lg |L|) \cdot m^{2})$, where $O(|E| + |L|\lg |L|)$ is the cost of finding a shortest path between two vertices and $m$ is the average trajectory length.
Consequently, traditional methods cannot be applied efficiently in downstream tasks such as clustering, where the distances between all trajectory pairs must be computed. In contrast, the time complexity of ST2Vec for trajectory similarity computation is $O(d)$, where $d$ is the constant embedding dimensionality. Thus, ST2Vec is more efficient for large-scale trajectory data analysis, as verified in Table~\ref{table:time}. Once $\mathcal{G}$ is well-trained, it enables computing the inter-trajectory spatio-temporal similarity in linear time, since $v_{T_i}$ and $v_{T_j}$ are low-dimensional vectors. \section{Introduction} \label{sec:intro} With the proliferation of GPS-equipped devices and online map-based services (e.g., Uber and DiDi), massive volumes of spatio-temporal trajectories of moving objects such as people and vehicles are collected, which motivates various studies of trajectory analytics~\cite{Zheng15, SousaBL20}. A GPS trajectory $T$ is represented as a time-ordered sequence of discrete spatio-temporal points, i.e., $T=\langle(g_1, t_1), (g_2, t_2), ..., (g_n, t_n)\rangle$, where $g$ denotes an observed geo-location and $t$ denotes the corresponding time. A key form of trajectory analytics is trajectory similarity computation, which evaluates the similarity (distance) between two trajectories and benefits a wide range of real-world applications such as ridesharing~\cite{ShangCWJZK18}, traffic analysis~\cite{Zheng15}, and social recommendation~\cite{LyeCTHC20}, as illustrated in Example 1. Example 1. \textit{Given the capability of evaluating the similarity between a pair of trajectories, (i) drivers can be assigned potential ridesharing partners to share rides with; (ii) traffic authorities can predict traffic congestion by aggregating similar trajectories and counting the travel frequencies of roads; and (iii) social apps can identify users with similar living trajectories for friend recommendation. Further, trajectory similarity computation is a fundamental component of downstream similarity-based trajectory analyses, including top-$k$ similarity querying~\cite{shang2017trajectory} and clustering~\cite{YuLCC19}.} To measure the similarity between two trajectories, a variety of handcrafted distance measures exist, including free-space based measures such as DTW~\cite{YiJF98}, LCSS~\cite{vlachos2002discovering}, Hausdorff~\cite{AtevMP10}, and ERP~\cite{ChenN04}, and road-network based measures such as TP~\cite{shang2017trajectory}, DITA~\cite{shang2018dita}, LCRS~\cite{Yuan019}, and NetERP~\cite{KoideXI20}. However, these measures are associated with high computation costs. Specifically, they rely on pointwise matching computation~\cite{deeprepresentation}, meaning that they need to scan all point pairs from two trajectories to calculate the similarity scores, which incurs quadratic time complexity $O(\hat{n}^{2})$, where $\hat{n}$ is the average trajectory length. The high computation costs also limit the scalability of a series of downstream similarity-based trajectory analyses. To address the above issues, inspired by the success of metric learning in natural language processing~\cite{ChenWZ18, ChenYPCSPQ20} and computer vision~\cite{GeHDS18, Kordopatis-Zilos19}, a new line of studies~\cite{seed, subsimilar, YangW0Q0021, HanWYS021} aims to utilize neural networks to learn trajectory similarities. The core task is to obtain trajectory representations (embeddings) by means of neural networks so that the similarity relations between trajectories are well preserved in the embedding space.
This way, the similarity relations between GPS trajectories can be reflected by the similarity relations between the embeddings of the trajectories. Thus, given a pair of trajectories, trajectory similarity learning methods first map trajectories to $d$-dimensional vectors and then calculate the similarities between trajectories based on their embedding vectors, which reduces the time complexity from $O(\hat{n}^{2})$ to $O(d)$, representing a substantial speedup over techniques that operate directly on the GPS trajectories. \begin{figure}[t] \centering \vspace{1mm} \includegraphics[width=0.47\textwidth]{Demonstration.eps} \vspace{-4mm} \caption{An Illustration of Spatio-Temporal Trajectory Similarity} \label{fig:demo} \vspace{-5mm} \end{figure} While the existing similarity-learning-based trajectory similarity computation approaches~\cite{seed, subsimilar, YangW0Q0021, HanWYS021} successfully reduce the high time complexity of traditional similarity computation, they still come with significant limitations. In particular, all of the above approaches discard the temporal dimension of spatio-temporal trajectories. That is, they learn and generate spatial-similarity-oriented trajectory embeddings that consider only the spatial dimension of trajectories, i.e., $T^{(s)}=\langle g_1, g_2, ..., g_n\rangle$. As a result, they can only retrieve spatially similar trajectories, making them ineffective for time-aware scenarios, to be detailed below. \noindent \textbf{Why spatio-temporal similarity?} Unlike existing studies that target only spatially aware trajectory similarity learning and computation, we argue that a general similarity measure should consider both the spatial and the temporal aspects of trajectories. One motivating application is ridesharing. As shown in Fig.~\ref{fig:demo}, $T_1$ denotes the trip planned by a driver, and $T_2$ and $T_3$ belong to two people looking for a ride. The similarities between $T_1$ and $T_2$ and between $T_1$ and $T_3$ determine which person to recommend to the driver. Existing spatial-proximity oriented methods typically recommend $T_2$, since $T_1$ and $T_2$ are spatially closer to each other. However, the resulting recommendation is of no use, as $T_1$ and $T_2$ have very different departure times. In spatio-temporal terms, $T_1$ and $T_3$ are most similar, and the person with $T_3$ should get the ride. Overall, taking into account both spatial and temporal similarity is important in time-aware applications such as transportation planning~\cite{TranMLYHS20} and monitoring~\cite{YuLCZ19}. In addition, time is an essential dimension of spatio-temporal trajectory data and deserves attention on par with the spatial aspect. In this paper, we follow an orthogonal but complementary approach to existing space-driven similarity learning studies: we address the problem of spatio-temporal trajectory similarity learning in road networks. To achieve this, a straightforward approach is to cut time into discrete time slots and then perform spatio-temporal similarity computation in each slot using existing spatial similarity learning techniques. However, this approach treats space and time separately and also cannot fully utilize the temporal information due to the coarse-grained discretization of the time dimension. Instead, a more promising direction is to learn unified spatio-temporal embeddings that capture the intricate spatio-temporal similarities between trajectories.
Although existing studies~\cite{seed, subsimilar, YangW0Q0021, HanWYS021} offer guidance for spatial embedding, three non-trivial challenges remain to be addressed, including temporal embedding, spatio-temporal fusion, and model optimization. \textit{Challenge I: How to capture the temporal correlations between trajectories for temporal similarity learning?} The core task is to generate time-oriented embeddings where the temporal similarity relations (i.e., close or distant) between trajectories are preserved. To achieve this, a natural idea is to feed time sequences of trajectories, i.e., $T^{(t)}=\langle t_1, t_2, ..., t_n\rangle$, into recurrent neural network (RNN) models to capture the time sequence information, similarly to how spatial similarity learning feeds spatial sequences into RNNs. However, temporal modeling is more challenging than spatial modeling. This is because, unlike spatial locations, which are discrete and enable the evaluation of spatial relations by specific measures, time information exhibits strong continuous and periodic patterns. Specifically, time is continuous and can be measured at different granularities, e.g., seconds, hours, and days; thus, the time representation must be invariant to time rescaling. Second, trajectories show strong periodicity, which also affects temporal similarity computation. Thus, directly feeding time information into RNNs for temporal embedding is ineffective since it does not contend with the above problems. Instead, we design a \underline{t}emporal \underline{m}odeling \underline{m}odule, termed TMM, to achieve effective temporal trajectory similarity representation learning. This module is flexible and generic, in that it can be integrated with any existing spatial trajectory similarity learning proposal~\cite{seed, subsimilar, YangW0Q0021, HanWYS021} for spatio-temporal similarity learning. \textit{Challenge II: How to fuse spatial and temporal trajectory embeddings to achieve unified spatio-temporal similarity learning?} Once the spatial and temporal characteristics are captured, we need to fuse them to generate unified spatio-temporal similarity oriented embeddings. Different users may assign different weights to spatial and temporal similarity to accommodate the applications at hand. For example, applications such as region function estimation~\cite{KongLLTHX19} may assign high importance to spatial aspects of trajectories and thus assign high weight to spatial similarity. In contrast, applications such as ridesharing~\cite{LowalekarVJ21} may assign high importance to the temporal aspects and thus assign high weight to temporal similarity. Overall, a preferable fusion approach must be able to learn different spatial and temporal weights adaptively and must not hurt model convergence, especially when both the temporal and spatial dimensions are considered when generating trajectory embeddings. To address this challenge, we develop a \underline{s}patio-\underline{t}emporal \underline{c}o-attention \underline{f}usion module, termed STCF, that fuses the separate spatial and temporal information using a unified fusion approach to obtain unified embeddings. \textit{Challenge III: How to optimize the models to improve the effectiveness and efficiency?} The two primary goals of learning-based trajectory similarity analyses are effectiveness (similarity querying quality) and efficiency (model convergence speed). Specifically, the training samples, learning procedure, and neural network parameters all potentially affect model performance.
To improve effectiveness, we design a new sampling strategy with triplets and then train models using curriculum learning. To avoid an excess of parameters due to the spatio-temporal modeling and to improve efficiency, we provide two different fusion approaches in the co-attention fusion module. To address all three challenges, we propose a representation learning based architecture, termed ST2Vec, which leverages fine-grained spatial and temporal information in trajectories to enable unified spatio-temporal similarity learning in road networks. To sum up, we make the following contributions. \begin{itemize}\setlength{\itemsep}{-\itemsep} \item We propose a new representation learning based architecture for spatio-temporal trajectory similarity learning in road networks. To the best of our knowledge, this is the first deep-learning proposal for spatio-temporal similarity computation. ST2Vec is capable of accommodating varying spatial and temporal weights under a series of trajectory measures, thus enabling flexible analyses. \item We develop a temporal modeling module for temporal trajectory representation learning. Further, to achieve unified spatio-temporal similarity learning, we develop a spatio-temporal co-attention fusion module with two fusion strategies to integrate the spatial and temporal features of trajectories in an efficient and effective manner. \item For the preparation phase, we improve robustness by developing a new sampling strategy to select representative samples to construct similarity triplets. In the training phase, we exploit the curriculum concept to guide the learning process, further improving the model performance with better accuracy and faster convergence. \item We report on extensive experiments with three real-world data sets and four popular network-aware trajectory measures. The findings offer evidence that ST2Vec is able to outperform four state-of-the-art competitors in terms of effectiveness, efficiency, and scalability. In addition, case studies including top-$k$ similarity querying and clustering demonstrate the downstream capabilities of ST2Vec. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:problem} presents preliminaries. Section~\ref{sec:alternative} defines the problem to be solved and explains two alternative approaches to the problem. Section~\ref{sec:method} then details our framework and methods. The experimental results are reported in Section~\ref{sec:exe}. Section~\ref{sec:related} reviews related work. Finally, Section~\ref{sec:conclusion} concludes the paper and offers promising research directions. \section{Preliminaries} \label{sec:problem} We proceed to introduce key concepts related to the studied problem, including road-network constrained trajectories and the learning targets of ST2Vec. \subsection{Road Networks $\&$ Trajectories} As we target trajectory similarity learning in road networks, we first define road networks and trajectories. \begin{definition}\label{defn: road} {\bf (Road Network)} \textit{A road network is modeled as a directed graph $G = (L, E)$, where $L$ is a set of road vertices and $E \subseteq L \times L$ is an edge set of road segments.} \end{definition} Specifically, a vertex $l_i = (x_i, y_i) \in L$ models a road intersection or a road end, in which $x_i$ and $y_i$ denote the longitude and latitude of $l_i$, respectively. An edge $e_{l_i, l_j} \in E$ models a directed road segment from $l_i$ to $l_j$.
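For illustration, a road network per Definition~\ref{defn: road} can be represented with a standard directed-graph library; the following sketch uses networkx, with the vertex ids, coordinates, and edge attribute chosen by us for the example.

\begin{verbatim}
import networkx as nx

G = nx.DiGraph()                     # directed graph G = (L, E)
G.add_node(0, x=116.397, y=39.909)   # road vertex l_0 (longitude, latitude)
G.add_node(1, x=116.405, y=39.915)   # road vertex l_1
G.add_edge(0, 1, length=0.95)        # directed road segment e_{l_0, l_1}

print(G.number_of_nodes(), G.number_of_edges())  # 2 1
\end{verbatim}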
The \textbf{GPS trajectory} $T$ of a moving object is initially captured as a time-ordered sequence of sampling points from a GPS device, i.e., $T = \langle (g_1, t_1), (g_2, t_2), ..., (g_n, t_n) \rangle$, where $n$ denotes the length of $T$. Each sampling point is represented as a 2-dimensional (location, time) tuple, i.e., $(g_i, t_i), i \in [1, n]$. Here, $g$ denotes the observed geo-location that consists of longitude and latitude, and $t$ denotes the corresponding time. As we target road-network constrained trajectory similarity learning, we align trajectory points $g$ with vertices $l$ using an existing map-matching procedure (e.g.,~\cite{BrakatsoulasPSW05}). Specifically, we assume the trajectory points are located on the vertices in $G$. It is straightforward to handle trajectory points located on edges: if a point $g$ is located on an edge $e$, we split $e$ into two sub-edges by introducing a new vertex $l_g$. Consequently, each original trajectory $T$ is transformed into a directed path in $G$ from a start vertex to an end vertex, as defined below. \begin{definition}\label{defn:trajectory} {\bf (Trajectory)} \textit{Given a road network $G = (L, E)$, a trajectory $T$ is a directed sequence of $m$ $(m \leq n)$ vertices in $G$, i.e., $T = \langle (l_1, t_1), (l_2, t_2), ..., (l_m, t_m) \rangle$, where $l_i \in L$ is a vertex and $t_i$ is the corresponding time.} \end{definition} Unless stated otherwise, we assume in the sequel that trajectories are map matched. Given a trajectory $T$, we use $T^{(s)}$ and $T^{(t)}$ to denote its spatial and temporal aspects, respectively, i.e., its \textbf{spatial trajectory} $T^{(s)} = \langle l_1, l_2, ..., l_m \rangle$ and its \textbf{temporal trajectory} $T^{(t)} = \langle t_1, t_2, ..., t_m \rangle$. Note that $T^{(s)}$ and $T^{(t)}$ correspond to each other synchronously at each step. \subsection{Spatio-Temporal Similarity $\&$ Learning Targets} \noindent \textbf{Remark.} Before performing similarity learning, a similarity measure must be chosen that serves as the learning target. Existing spatial similarity learning studies~\cite{seed, subsimilar, YangW0Q0021} use free-space oriented measures (i.e., Hausdorff~\cite{AtevMP10}, DTW~\cite{YiJF98}, LCSS~\cite{vlachos2002discovering}, and ERP~\cite{ChenN04}) for trajectory similarity learning in free space, or they~\cite{HanWYS021} use network-oriented measures (i.e., TP~\cite{shang2017trajectory}) for trajectory similarity learning in road networks. In this paper, without loss of generality, we combine spatial and temporal similarity measures linearly to define spatio-temporal similarity, which is also the learning target of ST2Vec. Given trajectories $T_i$ and $T_j$, we thus define the spatio-temporal trajectory similarity function $\mathcal{D}(T_i, T_j)$ as a weighted, linear combination of their spatial and temporal similarity. It is simple and flexible to define spatio-temporal similarity this way, and this linear combination approach is popular in previous non-learning-based spatio-temporal trajectory similarity studies~\cite{ShangDZJKZ14, ShangZJYKLW15, shang2017trajectory}. In this paper, we first employ this approach for spatio-temporal trajectory similarity learning.
\begin{equation} \label{eq1} \mathcal{D}\left(T_{i}, T_{j}\right)=\lambda \cdot \mathcal{D}_{S}\left(T_{i}^{(s)}, T_{j}^{(s)}\right)+(1-\lambda) \cdot \mathcal{D}_{T}\left(T_{i}^{(t)}, T_{j}^{(t)}\right) \end{equation} Since we study road-network constrained trajectory similarity, $\mathcal{D}$ refers to the state-of-the-art network-aware distance measures including TP~\cite{shang2017trajectory}, DITA~\cite{shang2018dita}, LCRS~\cite{Yuan019}, and NetERP~\cite{KoideXI20}. Here, $\mathcal{D}_{S}$ and $\mathcal{D}_{T}$ denote spatial and temporal similarity, respectively. Although these distance measures are predominantly oriented towards spatial proximity, they are also able to support temporal similarity~\cite{shang2017trajectory}. This is because, given a trajectory $T$, its spatial sequence $T^{(s)}$ and temporal sequence $T^{(t)}$ are both time-ordered sequences and support distance aggregation between sequences for similarity evaluation. Since we aim to enable similarity learning across different measures without modifying these measures or their implementations, we do not cover their detailed implementations, but instead refer the interested reader to the literature~\cite{SousaBL20}. Further, the parameter $\lambda \in [0, 1]$ controls the relative weight of spatial and temporal similarity, which provides flexibility that can be used to support different applications, as discussed in Section~\ref{sec:intro}. \section{Problem Statements} \label{sec:alternative} We proceed to present the problem formulation, followed by two alternative solutions to our problem. Then, we give a taste of the ST2Vec solution. \subsection{Problem Formulation} \noindent \textbf{Problem Statement.} For any pair of trajectories $T_i$ and $T_j$, spatio-temporal trajectory similarity learning aims to learn a neural-network-driven function $\mathcal{G}(\cdot,\cdot)$ such that $\mathcal{G}\left(v_{T_{i}}, v_{T_{j}}\right)$ is maximally close to $\mathcal{D}\left(T_{i}, T_{j}\right)$: \begin{equation} \label{eq2} \arg \min \limits_{\mathcal{M}} \left|\mathcal{G}\left(v_{T_{i}}, v_{T_{j}}\right)-\mathcal{D}\left(T_{i}, T_{j}\right)\right| \end{equation} Here, $\mathcal{M}$ denotes the model parameters of the neural network, $\mathcal{D}$ is the spatio-temporal trajectory similarity defined in Eq.~\ref{eq1}, and $v_{T_i}$ and $v_{T_j}$ are the spatio-temporal embeddings of $T_i$ and $T_j$. According to Eq.~\ref{eq2}, spatio-temporal similarity learning aims to train a neural network that realizes a function $\mathcal{G}$ by embedding trajectories (i.e., $T_i$ and $T_j$) into low-dimensional vectors (i.e., $v_{T_i}$ and $v_{T_j}$) that reflect their similarity relations. That is, $v_{T_i}$ and $v_{T_j}$ are close (resp. distant) to each other if $T_i$ and $T_j$ are similar (resp. dissimilar) to each other. \subsection{Alternative Solutions} To realize spatio-temporal trajectory similarity computation, two alternative solutions exist that are extensions of existing spatial similarity learning proposals~\cite{seed, subsimilar, YangW0Q0021, HanWYS021}. A straightforward solution is to split the time axis into discrete time intervals and then assign trajectories to different time intervals using sliding-window methods. After this pre-processing, one can conduct similarity computation in each time interval using existing spatial similarity learning methods. However, this approach is spatially-oriented, is coarse-grained, and is suboptimal.
Further, time is continuous and unbounded, making it difficult to determine an appropriate window length, and regardless of the length chosen, inaccurate or incorrect spatio-temporal similarity computations are inevitable. In addition, this approach relies on discrete time and trajectory processing, which incurs additional processing costs. Another solution to capture the temporal information of trajectories is to feed the time sequences of trajectories to RNNs in the same way that location sequences of trajectories are fed to RNNs. Then, the resulting temporal vectors can be combined with the spatial vectors obtained by existing methods~\cite{seed, subsimilar, YangW0Q0021, HanWYS021} to achieve spatio-temporal similarity learning. Although this approach is more reasonable than the first, it is also naive. As discussed in Section~\ref{sec:intro}, temporal correlations are more complex than spatial correlations because time is continuous and correlations may be periodic. Consequently, simply applying spatial trajectory embedding methods to embed time is likely to be sub-optimal. The paper's experimental study considers the above two approaches and provides detailed insight into their performance. \subsection{ST2Vec Solution} In contrast to the above solutions, we propose a new \textbf{representation learning} based architecture, termed ST2Vec, that is capable of exploiting the fine-grained temporal and spatial information in trajectories to enable unified spatio-temporal similarity learning. Fig.~\ref{fig:architecture} shows the architecture and training scheme of ST2Vec. It pairs anchor trajectories with similar and dissimilar trajectories to construct input \textbf{similarity triplets} that consider both the spatial (i.e., $T^{(s)}$) and temporal (i.e., $T^{(t)}$) dimensions. Then, ST2Vec learns to embed trajectories, mapping the trajectories to low-dimensional space, a process shown in the dashed rectangle in Fig.~\ref{fig:architecture}. This process proceeds until the trajectory similarities evaluated on the embedding vectors (denoted by light blue and light yellow cubes) approximate the ground-truth similarities (denoted by blue and yellow cubes) as computed by Eq.~\ref{eq1}. In order to generate the spatio-temporal similarity-oriented embeddings (cf.\ the green rectangle with reddish edges in Fig.~\ref{fig:architecture}), we must capture the temporal and spatial information in trajectories and fuse this information in a unified manner. To achieve this, ST2Vec features three major modules, i.e., the \underline{t}emporal \underline{m}odeling \underline{m}odule (TMM), the \underline{s}patial \underline{m}odeling \underline{m}odule (SMM), and the \underline{s}patio-\underline{t}emporal \underline{c}o-attention \underline{f}usion module (STCF). These modules are covered next. We note that this design with three modules makes it possible to replace our SMM with any existing spatial similarity learning module to realize spatio-temporally aware similarity learning. \section{Related Work} \label{sec:related} We review related work on trajectory similarity in terms of non-learning-based methods and learning-based methods. \noindent \textbf{Non-learning-based methods}~\cite{XieLP17, shang2017trajectory, shang2018dita, wang2018torch, wang2019fast, Yuan019, KoideXI20} of trajectory similarity computation rely on well-defined similarity measures and associated acceleration techniques.
Here, we focus on popular similarity measures, while a comprehensive coverage of free-space-based similarity measures is available elsewhere~\cite{SuLZZZ20, SousaBL20}. Network-aware similarity computation techniques first map original trajectories into road-network paths that consist of vertices or segments. Then, they define similarity measures based on classic distance measures such as Hausdorff~\cite{AtevMP10}, DTW~\cite{YiJF98}, LCSS~\cite{vlachos2002discovering}, and ERP~\cite{ChenN04}, generally by aggregating the distances between road vertices or segments of two trajectories. For example, Koide et al.~\cite{KoideXI20} propose NetERP by aggregating shortest-path distances between the vertices of two trajectories. Based on LCSS, Wang et al.~\cite{wang2018torch, wang2019fast} propose the Longest Overlapping Road Segments (LORS) for trajectory similarity computation and clustering. Similarly, Yuan et al.~\cite{Yuan019} propose the direction-aware Longest Common Road Segments (LCRS). These methods typically have quadratic computational complexity, as they rely on computations over aligned point pairs. Moreover, non-learning-based methods rely on hand-crafted heuristics, failing to exploit information hidden in trajectories. \noindent \textbf{Learning-based methods}~\cite{deeprepresentation, seed, subsimilar, YangW0Q0021, HanWYS021} have become increasingly popular in recent years, as they benefit from the success of deep learning technologies, i.e., their powerful approximation capability. Learning-based methods learn distance functions via neural networks that embed input trajectories and approximate given distance measures. This way, trajectory embeddings are generated that enable fast trajectory similarity computation and downstream analyses. Li et al.~\cite{deeprepresentation} propose t2vec, which addresses the high computational cost of traditional methods while taking into account low sampling rates and the influence of noisy points. Nevertheless, t2vec was designed for trajectory representation learning, not similarity computation. Yao et al.~\cite{seed} propose NEUTRAJ, which employs metric learning to approximate trajectory similarity for different free-space-based distance measures. Further, Zhang et al.~\cite{subsimilar} propose Traj2SimVec, which considers sub-trajectory similarity in the learning process. Yang et al.~\cite{YangW0Q0021} propose T3S, which utilizes an attention mechanism to improve performance. Despite the efforts of these studies, they all target spatial trajectory similarity in free space and cannot model the complex dependencies imposed by road networks. Recently, Han et al.~\cite{HanWYS021} developed GTS, which brings spatial trajectory similarity learning into the road-network context and achieves state-of-the-art performance. Nevertheless, GTS performs spatially oriented similarity learning and ignores the temporal aspect of trajectories. More specifically, GTS is designed for POI-based spatial trajectory similarity computation; it treats trajectories that share the same or neighboring POIs but follow totally different traveling paths as similar to each other. Besides, GTS only learns a single type of distance measure (i.e., TP~\cite{shang2017trajectory}, an extension of Hausdorff distance), while ST2Vec accommodates a series of popular measures including TP, DITA, LCRS, and NetERP.
\section{Introduction} The universe's oldest light, the cosmic microwave background (CMB), encodes in its temperature and polarization a wealth of information. While the study of the CMB has a long and multi-faceted history, the study of the gravitational effects imprinted on the CMB at later stages of the universe has captured the attention of modern cosmology. Specifically, the study of gravitational lensing of the CMB by galaxy clusters has enabled the reconstruction of the gravitational potential field, $\phi$, around these clusters. For one, this field contains information about the spatial distribution of these clusters, which enables insights into the parameters that govern their formation, such as dark energy and massive neutrino properties \citep{lewis2006weak}. Moreover, a variety of cosmological parameters, such as $\Omega_m$, the mass density of the universe, can be constrained by recovering the total mass of these clusters from $\phi$ \citep{bocquet2019cluster}. However, with recent and upcoming CMB surveys (e.g., AdvancedACTPol \citep{thornton2016atacama} and the Simons Observatory \citep{ade2019simons}) expected to amass lensed CMB measurements at unprecedentedly high signal-to-noise ratios, discoveries tied to the study of CMB lensing are likely to become only more significant. Currently, the most prevalent method for reconstructing $\phi$ from the lensed CMB is the quadratic estimator (QE), an estimator formed from quadratic combinations of data \citep{hu2002mass}. However, QE has been shown to be sub-optimal for low-noise polarization data due to the lensing itself \citep{yoo2008improved}, as well as for low-noise temperature data due to the cosmic variance of the background CMB gradient \citep{hadzhiyska2019improving, hirata2003analyzing}, and is thus not suited to the higher signal-to-noise ratios promised by this novel generation of CMB surveys. As such, a variety of alternatives have been proposed, including a gradient-inversion technique \citep{hadzhiyska2019improving}, a maximum-likelihood estimator \citep{raghunathan2017measuring}, and a hierarchical Bayesian inference method \citep{muse}. Machine learning (ML)-backed methods also present an attractive alternative to QE. Indeed, Caldeira et al.\ \citep{caldeira2019deepcmb} use a Residual U-Net (ResUNet) to recover $\kappa$ maps (the dimensionless surface-mass density along the line of sight) around galaxy clusters from input lensed CMB with higher signal-to-noise ratios than QE over a broad range of angular scales in the low-noise regime. However, the quality of ResUNet's predictions materially degrades in noisier conditions, as well as at higher angular scales, where it underperforms QE. One disadvantage of the ResUNet method is its reliance on a static loss function (such as L1/L2) during network optimization. While this static loss function has been shown to capture low-frequency components, it essentially formulates the image-to-image translation problem as a per-pixel regression problem, thereby ignoring dependence between pixels in the output space and often leading to a loss of sharpness and structure \citep{larsen2016autoencoding}. In this work, we aim to overcome this disadvantage by optimizing the ResUNet with a trainable loss function (discriminator) in conjunction with an L1 loss, effectively transforming the ResUNet into a modified Pix2Pix conditional generative adversarial network (cGAN) \citep{cgan}.
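Concretely, the composite objective we have in mind follows the standard Pix2Pix formulation; the sketch below (TensorFlow/Keras) is illustrative, with the L1 weight of 100 taken from the Pix2Pix default rather than specified in this work:
\begin{verbatim}
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
LAMBDA_L1 = 100.0  # relative weight of the L1 term (assumed default)

def generator_loss(disc_fake_output, kappa_pred, kappa_true):
    # Adversarial term: G tries to make D label its patches "real" (1).
    adv = bce(tf.ones_like(disc_fake_output), disc_fake_output)
    # L1 term: enforces low-frequency (per-pixel) correctness.
    l1 = tf.reduce_mean(tf.abs(kappa_true - kappa_pred))
    return adv + LAMBDA_L1 * l1

def discriminator_loss(disc_real_output, disc_fake_output):
    # D learns to label ground-truth patches 1 and generated patches 0.
    real = bce(tf.ones_like(disc_real_output), disc_real_output)
    fake = bce(tf.zeros_like(disc_fake_output), disc_fake_output)
    return real + fake
\end{verbatim}
The two losses are minimized in alternation, following the usual GAN update scheme.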
The Pix2Pix architecture is particularly adept at extrapolating structure and high-frequency components in image-to-image translation tasks while maintaining low-frequency correctness. We train both our cGAN and a ResUNet to recover $\kappa$ maps from lensed CMB temperature maps around galaxy clusters under four conditions: no noise, astrophysical (tSZ and kSZ) foreground noise, $5\,\mu K/arcmin$ white noise (which mimics instrumentation noise), and random off-centering (which mimics the fact that clusters will not always be perfectly centered in the maps). We demonstrate that our cGAN outperforms the ResUNet under all four conditions, and that this out-performance becomes especially pronounced in the noisier regimes and at high $\ell$ \footnote{As the present paper focuses on reconstruction in the high-$\ell$ regime, and because QE is computationally prohibitively expensive, we focus on comparing our cGAN architecture exclusively to the ResUNet.}. \section{Data} We employ the Websky Extra-galactic CMB Simulations \citep{websky} to model the lensed CMB temperature anisotropies, tSZ and kSZ effects, and $\kappa$ maps around corresponding galaxy clusters. These simulations are tailored to upcoming ground-based CMB surveys such as the Simons Observatory and Advanced ACTPol \citep{websky}, rendering them ideal for our purposes \footnote{The following cosmological parameters are used in the Websky simulations: $\Omega_m = 0.31, \Omega_b = 0.049, \Omega_c = 0.261, H_0 = 68~\mathrm{km\,s^{-1}\,Mpc^{-1}}\ (h = 0.68), n_s = 0.965, \tau = 0.0943$ \citep{websky}.}. We select $50,000$ clusters matching the following criteria: $M_{200m} \in [10^{13}, 5\times10^{14}]\ M_{\odot}$ and $z \in [0.47, 0.6]$, and cut out the lensed CMB temperature, $\kappa$, tSZ, and kSZ maps in $128 \times 128$ arcmin squares around each cluster center. Additionally, we create random $5\,\mu K/arcmin$ $128\times128$ white noise maps. Notably, all maps are projected into 2D Euclidean space using Orphics \citep{orphics}, as our models can only handle such formats. From these maps, we compile three feature datasets, $X_{CMB}$, $X_{CMB+tSZ+kSZ}$, and $X_{CMB+5\mu K}$, and use the $\kappa$ maps as our target dataset. Additionally, we repeat this process for the pure CMB and $\kappa$ maps with random off-centering in both the RA and Dec directions drawn from a Gaussian with zero mean and 1 arcmin variance, resulting in one additional feature/target pair, $\{X_{oc}, \kappa_{oc}\}$. All datasets are thus of size $(50000, 128, 128, 1)$, and we split each using an 80:10:10 train/validation/test split. \section{Method} Our cGAN is made up of two main components: $G$, the generator, and $D$, the discriminator. Both $G$ and $D$ are convolutional neural networks (CNNs), which have demonstrated unparalleled power in dealing with image data \citep{aloysius2017review}; moreover, $G$ is a Residual U-Net, a type of CNN that uses an archetypal encoder-decoder scheme and has proven adept at a wide array of image-to-image translation tasks \citep{zhang2018road}. $G$ learns to map the observed CMB temperature map, $X$, to a predicted $\kappa_{pred}$ map, $G : X \rightarrow \kappa_{pred}$, while $D$ takes as input the concatenated generator-predicted $\kappa_{pred}$ and ground-truth $\kappa$ maps, and predicts a grid of likelihoods $[0, 1]$ that each $70\times70$ patch in $\kappa_{pred}$ is real, $D:\{\kappa_{pred}, \kappa\} \rightarrow D_{output}$. We model $G$ after the ResUNet proposed by Caldeira et al.
\cite{caldeira2019deepcmb}, albeit with some modifications, and $D$ after the convolutional PatchGAN architecture proposed in \cite{cgan}. Ultimately, $G$ is trained to produce $\kappa_{pred}$ maps as similar as possible to the ground-truth $\kappa$ map, while $D$ is trained to discriminate between the ``fake'' $\kappa_{pred}$ and the ground-truth $\kappa$. \begin{figure} \centering \includegraphics[scale=0.225]{illustration.png} \caption{Structure of the cGAN. As illustrated, the generator, $G$, based on the ResUNet architecture, takes as input $X_{CMB}$ and produces a predicted $\kappa$ map. These are fed into the discriminator, $D$, to produce the first loss function, $L_{cGAN}(G,D)$, used to update the parameters of both $G$ and $D$. An $L_1$ loss is also included, as it has been shown to emphasize low-frequency correctness.} \label{fig:my_label} \end{figure} $G$ is composed of an encoder, a bottleneck, and a decoder. The encoder is made of six encoding blocks, each of which consists of a down-sampling convolutional layer followed by a simple convolutional layer. The number of filters for each block increases as $32\times2^{n}$, where $n \in \{0,\dots,5\}$ is the index of the encoder block. The decoder follows the same structure, except that the first convolutional layer of each decoder block is a 2D transpose convolutional layer, and the number of filters goes as $32\times2^{5-n}$. Residual connections are established between encoder and decoder blocks with the same $n$ to allow low-level feature information to flow more easily through the network. The output of the final decoder block is fed into a final 2D transpose convolutional layer, after which a $\tanh$ activation is applied \footnote{For consistency, we use this same network architecture and these same parameters for the ResUNet to which we compare our cGAN.}. $D$ is composed of four encoding convolutional layers with filter sizes $64\times2^n$, followed by a simple convolutional layer with filter size $512$, a final convolutional layer with filter size $1$, and a sigmoid activation function. Batch normalization is applied to all convolutional layers in $G$ and $D$ except the first, and Leaky ReLU activation functions follow each convolutional layer except for the last. Additionally, the first two decoding blocks in $G$ have dropout of $0.5$ applied. The models are constructed using the Keras interface of the TensorFlow API \citep{abadi2016tensorflow}, and we use the general GAN update structure laid out by Brownlee \cite{brownlee_2021}. We train both our cGAN and the ResUNet separately on each of the $40,000$-cluster $\{X_{CMB}, \kappa\}$, $\{X_{CMB+tSZ+kSZ}, \kappa\}$, $\{X_{CMB+5\mu K}, \kappa\}$, and $\{X_{oc}, \kappa_{oc}\}$ training datasets. Notably, the cGAN is trained to minimize both the discriminator loss and an L1 loss; more specifications on optimization are provided in the Appendix. We train both models for 100 epochs using mini-batches of 32 samples on a single NVIDIA Tesla P100 GPU with 16GB of graphics RAM, which takes 3.8 hours for the ResUNet and 8.1 hours for the cGAN. \section{Results} All results are generated over the held-out $5000$-cluster test dataset. Figure \ref{fig:visuals} provides a sample visualization of ResUNet and cGAN predictions under various noise conditions for a random test cluster.
From visual inspection, it appears that the cGAN captures significantly more information than the ResUNet under all noise conditions; this is especially apparent in the noised (astrophysical foreground and $5\,\mu K/arcmin$) regimes, where the cGAN continues to recover the majority of structural information, whereas significant blurring and/or loss of structure has occurred in the ResUNet predictions. In order to quantitatively test the performance of the cGAN, we compare the power spectrum (generated using the Orphics package \cite{orphics}) and one-point PDF of the predictions of the cGAN and ResUNet to the ground-truth $\kappa$ maps in Figure \ref{fig:quant}. \begin{figure} \centering \includegraphics[scale=0.3]{visualizations.png} \caption{Visualization of sample predicted $\kappa_{pred}$ maps from the cGAN and ResUNet for a random cluster in the test dataset. The predictions are made for CMB temperature maps with either no noise, astrophysical foreground noise (tSZ+kSZ), $5\,\mu K/arcmin$ white noise, or random off-centering according to a Gaussian with $1\ arcmin$ variance.} \label{fig:visuals} \end{figure} Examining the mean power spectra, it is clear that the ResUNet power spectrum mimics that of the ground truth relatively well in the noiseless regime, as well as after random off-centering. However, it materially diverges from the ground-truth power spectrum under both the astrophysical foreground and $5\,\mu K/arcmin$ noise regimes; this divergence is especially pronounced at high $\ell$. Conversely, the cGAN power spectrum stays faithful to the ground truth under all four regimes and does not materially degrade even at high $\ell$: no noticeable degradation is visible until around $\ell \approx 6500$. The relative strength of the cGAN is further emphasized in the average one-point PDFs, in which the ResUNet's one-point PDF materially diverges from that of the ground-truth $\kappa$ map under the noised regimes while the cGAN's does not. Ultimately, the cGAN's success at high $\ell$ highlights the ability of the discriminator architecture to emphasize small-scale structure in the predicted $\kappa$ maps. \begin{figure} \centering \includegraphics[scale=0.4]{quants.png} \caption{The mean over the 5000-cluster test dataset of the one-point PDF/power spectra of the cGAN and ResUNet predicted $\kappa_{pred}$ maps under various noise conditions. For the power spectra, we also calculate the standard deviation in predicted power spectra over the test dataset using the bootstrap method, and include these standard deviations as the shaded regions in the graphs (the average standard deviation per cluster is of order 10\% for all predictions).} \label{fig:quant} \end{figure} \section{Conclusion} In the present paper, we demonstrate that the inclusion of a discriminator in the optimization of a Residual U-Net can materially improve its performance in recovering galaxy cluster convergence from lensed CMB temperature maps. Specifically, across both visualizations of predicted $\kappa$ maps, as well as the power spectra and one-point PDFs of these $\kappa$ maps, we show that the discriminator-enhanced network (cGAN) noticeably outperforms the ResUNet under a variety of noise conditions. Moreover, we demonstrate that this out-performance becomes especially pronounced in noisy regimes (such as instrumentation or astrophysical foreground noise), as well as at high $\ell$.
This out-performance at high $\ell$ is particularly encouraging, as small-scale features are challenging to recover using traditional QE methods. In future work, it will be valuable to explore 1) the cGAN's performance under a wider variety of noise conditions, 2) the cGAN's relative performance with additional loss functions (such as a Fourier-space loss), and 3) alternative GAN structures (such as the Wasserstein GAN \cite{wgan}). \section{Broader Impact} We employ a generative adversarial network (GAN), a popular machine learning model, in an astrophysical context. GANs are already present in myriad social applications, and while most use cases are benign, they have been maliciously employed across multiple social media platforms, from generating fake Facebook accounts to conducting impersonation attacks on targeted subjects \cite{cnn_facebook}. The application of such algorithms to astrophysics, however, has been quite limited. Nonetheless, in applying GANs in these contexts, we can better understand where and why they fail, which can help improve our ability to spot maliciously employed GANs more broadly. \begin{ack} The contributions to the paper are as follows. \textbf{Parker}: performed all neural network computation work; innovated the NN architecture; performed NN diagnostic analysis; wrote up the paper and corresponding poster; initiated the problem concept. \textbf{Han}: constructed the data pipeline from Websky; guided the ResUNet construction; designed the NN diagnostic analysis; initiated the problem concept. \textbf{Portela}: consulted on paper edits; helped clarify sticky physics questions. \textbf{Ho}: designed the uncertainty calculations; suggested the one-point PDF analysis; guided the paper through the final steps before submission. Moreover, we are extremely thankful and indebted to Professor Suzanne Staggs for the invaluable time, advice, and effort that she has graciously provided in guiding us during the completion of this fascinating project. \end{ack}
\chapter[Bloom \& Richards]{Data Mining and Machine-Learning in Time-Domain Discovery \& Classification} {\Large \bf Joshua S. Bloom}\vspace{0.05in} \\{\it Astronomy Department\\University of California, Berkeley\\601 Campbell Hall, Berkeley, CA 94720}\\ {\tt [email protected]}\\ \noindent {\Large \bf Joseph W. Richards}\vspace{0.05in} \\{\it Astronomy Department/Statistics Department\\University of California, Berkeley\\ 601 Campbell Hall, Berkeley, CA 94720}\\ {\tt [email protected]}\\ The changing heavens have played a central role in the scientific effort of astronomers for centuries. Galileo's synoptic observations of the moons of Jupiter and the phases of Venus, starting in 1610, provided strong refutation of Ptolemaic cosmology. These observations came soon after the discovery of Kepler's supernova had challenged the notion of an unchanging firmament. In more modern times, the discovery of a relationship between period and luminosity in some pulsational variable stars \cite{1908AnHar..60...87L} led to the inference of the size of the Milky Way, the distance scale to the nearest galaxies, and the expansion of the Universe (see \cite{2010ARA&A..48..673F} for a review). Distant explosions of supernovae were used to uncover the existence of dark energy and provide a precise numerical account of dark matter (e.g., \cite{2006A&A...447...31A}). Repeat observations of pulsars \cite{1992Natur.355..145W} and nearby main-sequence stars revealed the presence of the first extrasolar planets \cite{1995Natur.378..355M,1998ARA&A..36...57M,2000ApJ...529L..41H,2000ApJ...529L..45C}. Indeed, time-domain observations of transient events and variable stars, as a technique, influence a broad diversity of pursuits in the entire astronomy endeavor \cite{2009astro2010S.307W}. While, at a fundamental level, the nature of the scientific pursuit remains unchanged, the advent of astronomy as a {\it data-driven} discipline presents fundamental challenges to the way in which the scientific process must now be conducted. Digital images (and data cubes) are not only getting larger, there are more of them. On logistical grounds, this taxes storage and transport systems. But it also implies that the intimate connection that astronomers have always enjoyed with their data---from collection to processing to analysis to inference---necessarily must evolve. Figure \ref{fig:science} highlights some of the ways that the pathway to scientific inference is now influenced by (if not driven by) modern automation processes, computing, data-mining and machine learning. The emerging reliance on computation and machine learning (ML) is a general one---a central theme of this book---but the time-domain aspect of the data and the objects of interest presents some unique challenges. First, any collection, storage, transport, and computational framework for processing the streaming data must be able to keep up with the dataflow. \begin{figure}[tbh] \centerline{\includegraphics[width=5.0in,angle=0]{bloom_loop.eps}} \caption[Data mining, computation, and machine learning roles in the scientific process.]{Data mining, computation, and ML roles in the scientific pathway.} \label{fig:science} \end{figure} This is not necessarily true, for instance, with static sky science, where metrics of interest can be computed off-line and on a timescale much longer than the time required to obtain the data. Second, many types of transient (one-off) events evolve quickly in time and require more observations to fully understand the nature of the events.
This demands that time-changing events are quickly discovered, classified, and broadcast to other followup facilities. All of this must happen robustly with, in some cases, very limited data. Last, the process of discovery and classification must be calibrated to the available resources for computation and followup. That is, the {\it precision} of classification must be weighed against the {\it computational cost} of producing that level of precision. Likewise, the cost of being wrong about the classification of some sorts of sources must be balanced against the scientific gains of being right about the classification of other types of sources. Quantifying these tradeoffs, especially in the presence of a limited amount of followup resources (such as the availability of larger-telescope observations), is not straightforward and inheres domain-specific imperatives that will, in general, differ from astronomer to astronomer. \begin{figure}[tbh] \centerline{\includegraphics[width=4in]{vermeer.eps}} \caption[Vermeer's {\it The Astronomer}]{``The Astronomer,'' by Johannes Vermeer, c.~1668. In many ways, the epoch of the armchair astronomer is returning to primacy.} \label{fig:astronomer} \end{figure} This chapter presents an overview of the current directions in machine learning and data-mining techniques in the context of time-domain astronomy. Ultimately the goal---if not just the necessity given the data rates and the diversity of questions to be answered---is to abstract the traditional role of the astronomer in the entire scientific process. In some sense, this takes us full-circle from the pre-modern view of the scientific pursuit presented in Vermeer's ``The Astronomer'' (Figure \ref{fig:astronomer}): in broad daylight, he contemplates the nighttime heavens from depictions presented to him on a globe, based on observations that others have made. He is an abstract thinker, far removed from data collection and processing; his most visceral connection to the skies is just the feel of the orb under his fingers. Replace the globe with a plot on a screen generated from an SQL query to a massive public database in the cloud, and we have a picture of the modern astronomer benefitting from the ML and data-mining tools operating on an almost unfathomable amount of raw data. \section{Discovery} We take the notion of discovery, in the context of the time domain, as the recognition that data collected (e.g., a series of images of the sky) contains a source which is changing in time in some way. Classification (\S \ref{sec:class}) is the quantification of the similarity of that source to other known types of variability and, by extension, the inference of {\it why} that source is changing. The most obvious change to discover is that of brightness or flux. On imaging data, changes in color and position might also be observed\footnote{Discovery of change in position, especially for fast-moving sources (such as asteroids), inheres its own set of data-mining challenges which we will discuss. See, for example, \cite{2007Icar..189..151K,2010PASP..122..549P}.}. Spectroscopically, changes in emission/absorption properties and apparent velocities might also be sought. Discovery of time-variable behavior is technique-specific and, as such, we will review the relevant regimes. Yip et al.~\cite{2009AJ....137.5120Y} discuss variability discovery on spectroscopic line features in the context of active galactic nuclei.
Gregory \cite{2011MNRAS.410...94G} presents ML-based discovery and characterization algorithms for astrometric- and Doppler-based data in the context of exoplanets. We focus here on the discovery of brightness/flux variability. \subsection{Identifying Candidates} {\bf Pixelated Imaging}: Many new and planned wide-field surveys are drawing attention to the need for data-mining and ML. These surveys will generate repeated images of the sky in some optical or infrared bandpass. These 2-dimensional digitized images form the basic input to discovery\footnote{In each one-minute exposure, for example, the Palomar Transient Factory \cite{2010SPIE.7735E.122L} produces 11 images, each from a 2k $\times$ 4k CCD array with pixels of size 1 sq.~arcsecond (0.65 sq.~degree per image). Since each pixel is 2 bytes, this amounts to 184 MB of raw data generated per minute. Raw data are pre-processed using calibration data to correct for variable gain and illumination across the arrays; spatially-dependent defects in the arrays are flagged and such pixels are excluded from further scrutiny.}. The data from such surveys are usually obtained in a ``background-limited'' regime, meaning that the signal-to-noise on an exposure is dominated by the flux of sources (as the signal) and the background sky brightness (as the dominant noise component). Except in the most crowded images of the plane of the Milky Way, most pixels in the processed images contain only sky flux. Less than a few percent of pixels usually contain significant flux from stars, galaxies, or other astrophysical nebulosities. There are two broad methods for discovering variability in such images. In one, all sources above some statistical threshold of the background noise are found and the position and flux associated with those sources are extracted to a catalog. There are off-the-shelf codebases to do this (e.g., \cite{1996A&AS..117..393B,2002ApJS..138..185F}) but such detection and extraction on images is by no means straightforward nor particularly rigorous, especially near the sky-noise floor of images. Variability is discovered by asking statistical questions (see \S \ref{sec:lc}) about the constancy (or otherwise) of the {\it light curve} produced for a given source, created by cross-correlating sources by their catalog position across different epochs \cite{2008ASPC..394..165B}. The other method, called ``image differencing,'' \cite{1996AJ....112.2872T,1998ApJ...503..325A,2000AcA....50..421W,2008MNRAS.386L..77B} takes a new image and subtracts away a ``reference image'' of the same portion of the sky; this reference image is generally a sharp, high signal-to-noise composite of many historical images taken with the same instrumental setup and is meant to represent an account of the ``static'' (unchanging) sky. Both methods have their relative advantages and drawbacks (see \cite{2009ApJ...696..870D} for a discussion). Since image differencing involves astrometric alignment and image convolution, catalog-based searches are generally considered to be faster. Moreover, catalog searches tend to produce fewer spuriously detected sources because the processed individual images tend to have fewer ``defects'' than differenced images. Catalog searches perform poorly, however, in crowded stellar fields (where aperture photometry is difficult) and in regions around galaxies (where new point sources embedded in galaxy light can be easily outshone).
Given the intellectual interests in finding variables in crowded fields (e.g., microlensing; \cite{1986ApJ...304....1P,2001MNRAS.327..868B}) and transient events (such as supernovae and novae) near galaxies, image-difference-based discovery is considered necessary for modern surveys. Computational costs aside, one of the primary difficulties with image differencing is the potential for a high ratio of spurious candidate events to truly astrophysical events. A trained human scanner can often discern good and bad subtractions and, for many highly successful projects, human scanners were routinely used for determining promising discovery candidates. The KAIT supernova search \cite{2001ASPC..246..121F} makes use of undergraduate scanners to sift through $\sim1000$ images from the previous night. Over 1000 SNe were discovered in 10 years of operations with this methodology \cite{2010arXiv1006.4611L}. Basic quality/threshold cuts on the metrics about each candidate can be used to present to human scanners a smaller subset of images for inspection; in this way, the Sloan Digital Sky Survey II Supernova Search \cite{2008AJ....135..338F} netted $>300$ spectroscopically confirmed supernova discoveries from $\sim$150,000 manually scanned candidates. The Nearby Supernova Factory \cite{2002SPIE.4836...61A}, after years of using threshold cuts, began to use boosted decision trees (\S \ref{sec:super}) on metrics from image differences to optimize supernova discovery \cite{2007ApJ...665.1246B}. Unlike with specific domain-focused discovery surveys (like supernova searches), many surveys are concerned with discovery and classification of all sorts of variable stars and transients. So unlike in the supernova discovery classifier of Bailey et al.~\cite{2007ApJ...665.1246B} (which was highly tuned to finding transient events near galaxies), discovery techniques must aim to be agnostic to the physical origin of the source of variability. That is, there is an imperative to separate the notion of ``discovery'' from that of ``physical classification.'' In the Palomar Transient Factory, we find at least one hundred high-significance bogus candidates for every one real candidate in image differences \cite{bloom2011}. With over one million candidates produced nightly, human vetting of all candidates is infeasible. Instead, we produced a training set of human-vetted candidates, each with dozens of measured features (such as FWHM, ellipticity; see \cite{bloom2011}). These candidates are scored on a scale from 0 to 1 based on their inferred likelihood of being bogus or astrophysically ``real.'' We developed a random forest classifier on the features to predict the 1--0 real-bogus value and saved the result of the ML classifier on each candidate. These results are used to make discovery decisions in PTF. After one year of the survey, we also created training sets of real and bogus candidates by using candidates associated with known/confirmed transients and variables \cite{sahand2011}. Figure \ref{fig:rfcandidate} shows the ``receiver operating characteristic'' (ROC) curve for a random forest classifier making use of the year-one training sample. \begin{figure}[tbp] \centerline{\includegraphics[width=200pt,angle=270]{roc.eps}} \caption[ROC Curve for Image-Differenced Candidates]{ROC curve for image-differenced candidates based on a training sample of 50,000 candidates from the Palomar Transient Factory. High efficiency and high purity are on the bottom left of the plot.
From \cite{sahand2011}.} \label{fig:rfcandidate} \end{figure} If all real sources occurred at just one epoch of observation, then ROC curves such as those depicted in Figure \ref{fig:rfcandidate} would directly reflect discovery capabilities: the false-negative rate (a type II error) would map to the efficiency for discovery and the false-positive rate (a type I error) to the purity for discovery. However, most transient events occur over several epochs and bogus candidates often do not recur at precisely the same location. Therefore, turning candidate-level ROC curves into global discovery efficiency/purity quantities is not straightforward. In PTF we require two high-quality ML-score candidates within a 12-day window to qualify a certain position on the sky as a discovery of a true astrophysical source\footnote{This discovery requirement is designed to find fast-changing events, of particular interest to the PTF collaboration. We also require at least two observations more than 45 minutes separated in time, to help remove moving asteroids from the discovery set.}. In the first 8 months of producing automatic discoveries with PTF, our codebase independently discovered over 10,000 transients and variable stars. {\bf Radio Interferometry}: Traditionally, radio images are generated from raw u-v interferometric data using a human-intensive process to iteratively flag and remove spurious baseline data. Phase drift due to the ionosphere, instrumental instability, and terrestrial radio-frequency interference (RFI) are all impediments to automatically producing clean images of the sky. Given the massive data rates soon expected from wide-field surveys (e.g., the LOw Frequency ARray, LOFAR; the Australian Square Kilometre Array Pathfinder, ASKAP), there is a pressing need to autonomously produce clean images of the radio sky. Algorithmic innovations to speed automatic image creation have been impressive (e.g., \cite{2004SPIE.5489..817N}). For RFI mitigation, a genetic algorithm approach has produced promising results \cite{2005RaSc...40S5S08F}. Once images are made, sources are detected in much the same way as with optical imaging\footnote{Note that McGowan et al.~\cite{2005ASPC..345..362M} have developed an ML approach to faint source discovery in radio images.} and catalog searches are used to find transients and variables \cite{2007ApJ...666..346B,2010ApJ...719...45C}. \subsection{Detection and Analysis of Variability} \label{sec:lc} For catalog-based searches, variability is determined on the basis of the collection of flux measurements as a function of time for a candidate source. Since variability can be manifested in many ways (such as aperiodic behavior, occasional eclipsing, etc.), no single metric will suffice to capture variability \cite{1996PASP..108..851S,2005ESASP.576..513E,2008MNRAS.386..887B,2009MNRAS.400.1897S,2009BlgAJ..12...49D}. A series of statistical questions can be asked of each light curve with each new epoch. Are the data consistent with an unchanging flux, in a $\chi^2$ sense? Are there statistically significant deviant data points? How are those outliers clustered in time? Significant variability of periodic sources may be revealed by direct periodogram analysis (\cite{1989MNRAS.241..153S}; see also ref.\ \cite{2011arXiv1101.2445B}).
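As a toy illustration, the first of these questions reduces to a weighted $\chi^2$ test against the best-fit constant-flux model (a numpy sketch; as discussed below, mis-estimated photometric errors will bias it):
\begin{verbatim}
import numpy as np

def chi2_constant(flux, flux_err):
    """Chi-squared of a light curve against the best-fit constant
    (inverse-variance weighted mean) flux model."""
    w = 1.0 / flux_err**2
    mean = np.sum(w * flux) / np.sum(w)      # weighted mean flux
    chi2 = np.sum(((flux - mean) / flux_err) ** 2)
    dof = len(flux) - 1                      # one fitted parameter
    return chi2, dof

# Flag a source as variable if chi2 is improbably large for dof.
\end{verbatim}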
In the Poisson detection limit, such as at $\gamma$-ray wavebands or with detections of high-energy neutrinos, discovering variability in a source is akin to asking whether there is a statistically significant change in the rate of arrival of individual events; for this, there are sophisticated tools (such as Bayesian blocks) for analysis \cite{1998ApJ...504..405S,2001ApJ...550L.101S}. One of the important real-world considerations is that photometric uncertainty estimates are always just estimates, based on statistical sampling of individual image characteristics. Systematic errors in this uncertainty (either too high or too low) can severely bias variability metrics (c.f.\ \cite{2005ESASP.576..513E}). Characterizing efficiency-purity in the presence of systematic errors must be done on a survey-by-survey basis. \section{Classification} \label{sec:class} Determining the physical origin of variability is the basic impetus of classification. But clearly what is {\it observed} and what is {\it inferred to underlie that which is observed} are not the same, the latter deriving from potentially several interconnected and complex physical processes. A purely physics-based classification schema is then reliant upon subjective and potentially incorrect model interpretation. For instance, to say that the origin of variability is due to an eclipse requires an intuitive leap, however physically relevant, from observations of a periodic dip in an otherwise constant light curve. A purely observational-based classification scheme, on the other hand, lacks the clarifying simplicity offered by physical classification. For example, how is a periodic light curve ``dipping'' (from an eclipsing system) different, quantitatively, from an extreme example of periodic brightness changes (say from a pulsational variable)? To this end, existing classification taxonomies tend to rely on an admixture of observational and physical statements. And when a variable source is found, the goal is to determine how that source fits within an established taxonomy. Phenomenological and theoretical taxonomies aside, the overriding conceptual challenge of classification is that no two sources in nature are identical and so the boundaries between classes (and subclasses) are inherently fuzzy: there is no ground truth in classification, regardless of the amount and quality of the data. With finite data, the logistical challenge is in extracting the most relevant information, mapping that onto the quantifiable properties derivable from instances of other variables, and finding an (abstractly construed) distance to other sources. There are several broad reasons for classification: \begin{enumerate} \item {\bf Physical Interest} Understanding the physical processes behind the diversity of variability requires numerous examples across the taxonomy. Studying the power spectrum of variability in high signal-to-noise light curves can be used to infer the interior structure of stars (asteroseismology). Modeling detached eclipsing systems can be used to infer the mass, radius, and temperatures of the binary components. \item {\bf Utility} Many classes of variables have direct utility in making astrophysically important measurements that are wholly disconnected from the origin of the variability itself. Mira, RR Lyrae, and Cepheids are used for distance ladder measurements, providing probes of the structure and size of the universe. Calibrated standard-candle measurements of Ia and IIP supernovae are cosmographic probes of fundamental parameters.
Short-period AM CVn systems serve as a strong source of ``noise'' for space-based gravitational-wave detectors; finding and characterizing these systems through optical variability allows the sources to be effectively cleaned out of the LISA datastream, allowing more sensitive searches for gravitational waves in the same frequency band. \item {\bf Demographics} Accounting for various biases, the demographics from classification of a large number of variable stars can be used to map out and understand the evolutionary life-cycle of stars across mass and metallicity. Insight into the various ways in which high-mass stars die can be gleaned from the demographics of supernova sub-types. \item {\bf Rarities and Anomalies} Finding extreme examples of objects from known classes or new examples of sparsely populated classes has the potential to inform the understanding of (the more mundane) similar objects. The ability to identify anomalous systems and discover new types of variables---either hypothesized theoretically or not---is likewise an important feature of any classification system. \end{enumerate} Expert-based (human) classification has been the traditional approach to time-series classification: a light curve (and colors, and position on the sky, etc.) is examined and a judgement is made about class membership. The preponderance of peculiar outliers of one (historical) class may lead to a consensus that a new sub-class is warranted\footnote{For example, type Ia supernovae, likely due to the explosion of a white dwarf, appear qualitatively similar in their light curves to some core-collapse supernovae from hydrogen-stripped massive stars (Type Ib/Ic). Yet the presence or absence of silicon in the spectra became the defining observation that led to very different physical inferences for similar phenomenological types of supernovae.}. Again, with surveys of hundreds of thousands to billions of stars and transients, this traditional role must necessarily be replaced by ML and other data-mining techniques. \subsection{Domain-based Classification} Some of the most fruitful modern approaches to classification involve domain-specific classification: using theoretical and/or empirical models of certain classes of interest to determine membership of new variables in that class. Once a source is identified as variable, its location in color-luminosity space can often provide overwhelming evidence of class membership (Figure \ref{fig:hr}). Hertzsprung-Russell (H-R) diagrams obviously require the distance to the source to be known accurately and so, until Gaia \cite{2002Ap&SS.280....1P}, their utility has been restricted to sources with parallaxes previously measured by the Hipparcos survey. For some sources, such as RR Lyrae and quasars, location in color-color space suffices to provide probable classification (Figure \ref{fig:color}). Strict color cuts or more general probabilistic decisions on clustering\footnote{Such classification decisions can make use of the empirical distribution of sources within a class and uncertainties on the data for a given instance \cite{2010arXiv1011.6392B}.} within a certain color-color space can be performed. (Regardless, reddening and contamination often make such classification both inefficient and impure.) \begin{figure}[tbp] \centerline{\includegraphics[width=3.6in,angle=0]{HRfofv.eps}} \caption[H-R diagram of variable stars]{Fractional variability of stars across the H-R diagram derived from Hipparcos data.
Red indicates significant variability and blue low-amplitude variability ($10$\% peak-to-peak). Identification of colors coupled with distances provides a rather clean path to classification. From \cite{em08}.} \label{fig:hr} \end{figure} \begin{figure}[tbp] \centerline{\includegraphics[width=4.5in,angle=0]{color_plot.eps}} \caption[Color-color plot]{Color-color plot showing variable sources from Stripe 82. Region II is the traditional QSO locus and Region IV is the region populated by most RR Lyrae. There are clearly many QSOs that fall outside region II (particularly high-redshift QSOs), some of which are in the RR Lyrae region. From \cite{2010arXiv1008.3143B}.} \label{fig:color} \end{figure} Of considerable interest, given that historical and/or simultaneous color information is not always available and that unknown dust can affect color-based classification, is to classify using time-series data alone. For some domains, the light curves tell much of the story. Well before the peak brightness in a microlensing event, for example, an otherwise quiescent star will appear to brighten monotonically like a second-order power-law in time. By continuously fitting the light curves of (apparently) newly variable stars for such a functional form, a statistically rigorous question can be asked about whether an event appears to be microlensing or not. For a sufficiently homogeneous class of variables, an empirical light curve can be fit to the data and those sources with acceptable fits can be admitted to that class. This was done to discover and classify RR Lyrae stars in the SDSS Stripe 82 dataset \cite{2010ApJ...708..717S}. Such approaches require, implicitly, a threshold of acceptability. However, using cuts based on model probabilities and goodness-of-fit values can be damaging: these metrics are often a poor description of class probabilities due to the overly-restricted space of template models under consideration as well as other modeling over-simplifications. A better approach is to use a representative training set of sources with known class to estimate the ROC curve for the model fits, and to then pick the threshold value corresponding to the desired efficiency and purity of the sample. If the training set is truly representative, this ensures a statistical guarantee of the class efficiency and purity of samples generated by this approach. A related, but weaker, statement can often be made: that a source exhibits ``class-like'' variability. For example, there is no one template of a quasar light curve, but since quasars are known to vary stochastically like a damped random walk, with some characteristic timescale that correlates only mildly with luminosity, it is possible to capture the notion of whether a given light curve is statistically consistent with such behavior. In Butler \& Bloom \cite{2010arXiv1008.3143B} we created a set of features designed to capture how much a variable was ``quasar like'' and found a high degree of efficiency and purity of quasar identification based on a spectroscopic validation sample (Figure \ref{fig:qso}). Some variable stars, such as pulsating supergiants and X-ray binaries, also show this QSO-like behavior; so it is clear that such domain-specific statistical features alone cannot entirely separate classes. \begin{figure}[tbh] \centerline{\includegraphics[width=4.8in,angle=0]{chi_plot.eps}} \caption[QSO Variability]{Variability selection of quasars.
Using a Bayesian framework to connect light curves of point sources to damped random walk behavior, statistics that account for uncertainty and covariance can be developed to find QSO-like behavior. This selection (green line) is highly efficient at finding known QSOs ($\sim$99\%) and impure at the $3$\% level. From \cite{2010arXiv1008.3143B}.} \label{fig:qso} \end{figure} There is significant utility in restricting the model fits to a finite number of classes. Indeed, one of the more active areas of domain-specific classification is in supernova subclassing. By assuming that a source is some sort of supernova, a large library of well-observed supernova light curves (and photometric colors) can be used to infer the sub-type of a certain instance, especially when quantifying the light curve trajectory through color-color space \cite{2002PASP..114..833P}. Provided that the library of events spans (and samples) sufficiently well the space of possible subclasses (and making use of available redshift information to transform templates appropriately), Bayesian odds ratios can be effectively used to determine membership within calibrated confidence levels (see ref.~\cite{2010arXiv1010.1005N}). \subsection{Feature-based Classification} An abstraction from domain-specific classification (such as template fitting) is to admit that the totality of the available data reflects the true classification, irrespective of whether we understand the origin of that variability or have quantified specifically what it means to belong to a certain class. We classify on {\it features}, metrics derived from time-series and contextual data. There are a number of practical advantages to this transformation of the data. First, feature creation allows heterogeneous data to be mapped to a more homogeneous $m$-dimensional real-valued feature space. In this space, instances of variable objects collected from different instruments with different cadences and sensitivities can be directly intercompared. This is the sort of space where machine-learning algorithms work well, allowing us to bring to bear the richness of the machine-learning literature on astronomical classification. Second, features may be arbitrarily simple (e.g., the median of the data) or complex. So in cases with only limited data availability---when, for instance, light curve fitting might fail---we have a subset of metrics that can still be useful in classification. Many machine-learning frameworks have prescriptions for dealing with missing data that do not bias the results. Third, many feature-based classification methods produce class probabilities for each new source, and there are well-prescribed methods in ML both for calibrating the classification results and for avoiding overfitting. Last, ML approaches allow us to explicitly encode the notion of loss (or ``cost'') in the classification process, allowing for a controlled approach to setting the efficiency and purity of the final results. There is, of course, a huge space of possible features and many will be significantly related to others (e.g., mean and median will strongly correlate). One of the interesting advantages of some ML techniques is the classification robustness both in the face of feature covariance and ``useless'' features. This is freeing, at some level, allowing us to create many feature generators without worry that too many kitchen sinks will sink the boat.
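In practice, such feature generators can be as simple as a dictionary of functions, each mapping a light curve to a real number (an illustrative sketch; the feature names are our own):
\begin{verbatim}
import numpy as np
from scipy import stats

# Each generator maps (time, flux, flux_err) -> one real-valued feature.
FEATURE_GENERATORS = {
    "mean":     lambda t, f, e: np.mean(f),
    "median":   lambda t, f, e: np.median(f),  # correlates with mean
    "skew":     lambda t, f, e: stats.skew(f),
    "kurtosis": lambda t, f, e: stats.kurtosis(f),
    # Quantile-based amplitude, robust to outliers:
    "amp_5_95": lambda t, f, e: np.percentile(f, 95) - np.percentile(f, 5),
}

def featurize(t, f, e):
    """Map one light curve into the homogeneous m-dimensional space."""
    return np.array([g(t, f, e) for g in FEATURE_GENERATORS.values()])
\end{verbatim}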
The flip side, however, is that there are always more features on the horizon than those in hand that could be incrementally more informative for a certain classification task. Methods for feature-based classification of time-varying sources in astronomy come in one of two flavors. The first are {\bf supervised} methods, which use both the features and previously-known class labels from a set of training data to learn a mapping from feature to class space. The second are {\bf unsupervised} methods (also called statistical clustering), which do not use class labels and instead seek to unveil clustering of the data in feature space. The end goals of these approaches are different: supervised classification attempts to build an accurate \emph{predictive} model where, for new instances, the true classes (or class probabilities) can be predicted with as few errors as possible, whereas unsupervised classification seeks a characterization of the distribution of features, such as estimating the number of groups and allocation of the data points into those groups. A common technique (e.g., \cite{2005MNRAS.358...30E}) is to blend the two by first performing unsupervised classification and subsequently analyzing the resultant clusters with respect to a previously-known set of class labels. \subsubsection{Feature Creation} The two broad classes of features, {\bf time-domain} and {\bf context}, each provide unique value to classification but also inhere unique challenges. The most straightforwardly calculated time-domain features are based on the distribution of detected fluxes, such as the various moments of the data (mean, skewness, kurtosis). Variability metrics, such as $\chi^2$ under an unchanging brightness hypothesis and the so-called Stetson variability quantities \cite{1996PASP..108..851S}, are easily derived and make use of photometric uncertainties. Quantile-based measurements (such as the fraction of data observed between certain flux ranges) offer some robustness to outliers and provide a different view of the brightness distribution than moments. Inter-comparisons (e.g., ratios) of these metrics across different filters may themselves be useful metrics. Time-ordered metrics retain phase information. Frequency analysis, finding significant periodicity in the data, provides powerful input to the classification of variable stars (Figure \ref{fig:varstarfeature}). There are significant limitations to frequency-domain features, the most obvious of which is that a lot of time-series data is required to make meaningful statements: with three epochs of data, it makes no sense to ask what the period of the source is. Even in the limit that a frequency of interest ($f_0$) is potentially sampled well in a Nyquist sense (where the total time duration of the light curve is longer than $\sim2/f_0$), the particular cadence of the observations may strongly alias the analysis, rendering significance measurements on peaks in the periodogram intractable. And unless the sources are regularly sampled (which, in general, they are not) there will be covariance across the power spectrum. Finding significant periods can mean fitting a small amount of data over millions of trial frequencies, resulting in frequency-domain features that are computationally expensive\footnote{One practical approach for data observed with similar cadences is to compute the periodogram at a small number of a fixed set of frequencies and set the power/significance at each of these frequencies to be separate features.
Covariance is then implicitly dealt with at the ML level, rather than at the feature generation level (e.g., ref.~\cite{2005MNRAS.358...30E})}. We review techniques and hybrid prescriptions for period finding and analysis in Richards et al.~\cite{2011rich}. \begin{figure}[tbp] \centerline{\includegraphics[width=1.8in,angle=270]{richards_features.eps}} \caption[Distribution of features of variable stars]{Distribution of two frequency-domain features derived for 25 classes of variable stars from OGLE and Hipparcos photometry: a) log of the most significant frequency (units of day$^{-1}$) from a generalized Lomb-Scargle periodogram analysis, and b) the log of the amplitude of the most significant period, in units of magnitude. Mira variables (top) are long-period, high-amplitude variables, while delta Scuti stars (10th from top) are short-period, low-amplitude variables. Aperiodic sources, such as S Doradus stars (5th from bottom), have a large range in effective dominant period. From \cite{2011rich}.} \label{fig:varstarfeature} \end{figure} Other time-ordered features may be extracted using a notion of ``distance'' between a given instance of a light curve and all others. For instance, to derive features useful for supernova typing, Richards et al.~\cite{2011richSN} built up a matrix of pairwise distances between each pair of SNe (including both labeled and unlabeled instances) based on interpolating spline fits to the time-series measurements in each photometric band. The pairwise distance matrix was subsequently fed into a diffusion map algorithm that embeds the set of supernovae in an optimal, low-dimensional feature space, separating out the various SN subtypes (Figure \ref{fig:sndmap}). In a variable star analysis, Deb \& Singh~\cite{2009A&A...507.1729D} use the covariance matrix of a set of interpolated, folded light curves to find features using PCA. In addition to distance-based features that capture the time variability of sources, the way in which flux changes in time can be captured by fitting parameters under the assumption that the data are generated by a Gaussian process \cite{springerlink:10.1007/978-3-540-28650-9_4}. \begin{figure}[tbp] \centerline{\includegraphics[width=3.2in,angle=270]{SNdmap.ps}} \caption[Diffusion map features of SNe]{Light curve distance measures can be used in conjunction with spectral methods, such as diffusion map, to compute informative features. In this example, a spline-based distance between supernova light curves, designed to capture both shape and color differences, was used. In the first two diffusion map coordinates (left), Type Ia and II SNe are distinguished, whereas higher features (right) reveal some separation between Ia and Ib/c supernovae. From \cite{2011richSN}.} \label{fig:sndmap} \end{figure} We define context-specific features as all derivable features that are not expected to change in time. The location of the event on the sky, in Galactic or ecliptic coordinates, obviously provides a strong indication of whether the event has occurred in the Galaxy or in the Solar System. Metrics on the distance to the nearest detected galaxy and the parameters of that galaxy (its color, size, inferred redshift, etc.) are crucial features for determining the nature of extragalactic events. Even with very little time-domain data a strong classification statement can be made: for example, an event well off the ecliptic plane that occurs on the apparent outskirts of a red, spiral-less galaxy is almost certainly a type Ia supernova.
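A couple of such context features are straightforward to derive from the event coordinates alone (a sketch using astropy; the \texttt{nearest\_galaxy} call is a hypothetical stand-in for a real catalog query service):
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

def context_features(ra_deg, dec_deg, nearest_galaxy):
    """Context features for an event at (ra, dec), in degrees."""
    c = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg, frame="icrs")
    feats = {
        "galactic_lat": c.galactic.b.deg,  # in/out of the Galactic plane
        "ecliptic_lat": c.transform_to("barycentrictrueecliptic").lat.deg,
    }
    gal = nearest_galaxy(ra_deg, dec_deg)  # assumed external service
    feats["galaxy_offset_arcsec"] = gal["offset"]  # host separation
    feats["galaxy_color"] = gal["color"]           # red vs. star-forming
    return feats
\end{verbatim}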
One of the main challenges with context features is the heterogeneity of the available data. For example, in some places on the sky, particularly in the SDSS footprint, much is known about the stars and galaxies near any given position. Outside such footprints, context information may be much more limited. From a practical standpoint, if context information is stored only in remotely queryable databases, both the information available and the time it takes to retrieve it may be highly variable. This can seriously affect the computation time to produce a classification statement on a given place on the sky. \subsubsection{Supervised Approaches} \label{sec:super} Using a sample of light curves whose true class membership is known (e.g., via spectral confirmation), supervised classification methods learn a statistical model (known as a classifier) to predict the class of each newly-observed light curve from its features. These methods are constructed to maximize the predictive accuracy of the classifications of new sources. The goal of these approaches is clear: given a set of previously-labeled variables, make the best guess of the label of each new source (and optionally find the sources that do not fit within the given label taxonomy). Many supervised classification methods also predict a vector of class probabilities for each new source. These probabilistic classifiers can be used to compute ROC curves for the selection of objects from a specified science class---such as those in Figure \ref{fig:sneroc}---from which the optimal probability threshold can be chosen to create samples with desired purity and efficiency. \begin{figure}[tbp] \centerline{\includegraphics[width=4in,angle=90]{sne_roc.ps}} \caption[ROC curves for SN classification]{ROC curves for the selection of supernovae from a random forest probabilistic classifier, using data from the SN Photometric Classification Challenge \cite{2010PASP..122.1415K}. Left: For classification of Type Ia SNe, in the spectroscopic sample we can achieve 95\% efficiency at a 99\% purity or $>99\%$ efficiency at 98\% purity, depending on the threshold. Right: For Type II-P supernovae, the classifier performs even better, with higher efficiency at each given purity level. From \cite{2011richSN}.} \label{fig:sneroc} \end{figure} There are countless classification methods in the statistics and machine learning literature. Our goal here is to review a few methods that are commonly used for supervised classification of time-variable sources in astronomy. If the class-wise distributions of features were all completely known (along with the class prior proportions), then for a new source we would use Bayes' rule to compute the exact probability that the source is from each class, and classify the source as belonging to the class of maximal probability. This is referred to as \emph{Bayes' classifier}, and is provably the best possible classifier in terms of error rate. In practice, however, we do not know the class-wise feature distributions perfectly. Many methods attempt to estimate the class densities from the training data. In {\bf Kernel Density Estimation} (KDE) classification, the class-wise feature distributions are estimated using a non-parametric kernel smoother. This approach has been used to classify supernova light curves \cite{2010arXiv1010.1005N}.
A pitfall of this technique is the tremendous difficulty in estimating accurate densities in high-dimensional feature spaces via non-parametric methods (this is referred to as the \emph{curse of dimensionality}). To circumvent this problem, {\bf Na\"ive Bayes} performs class-wise KDE on one feature at a time, assuming zero covariance between features. Though this simplifying assumption is unlikely to be true, Na\"ive Bayes has enjoyed much use, including in time-domain science \cite{2008AIPC.1082..287M}. A step up from Na\"ive Bayes is {\bf Bayesian Network} classification, which assumes a sparse, graphical conditional dependence structure amongst the features. This approach was used with considerable success for variable star classification \cite{2007A&A...475.1159D,2009A&A...506..519D}. Alternatively, class-wise distributions can be estimated using parametric models. The {\bf Gaussian Mixture} classifier assumes that the feature distribution from each class follows a multivariate Gaussian distribution, where the mean and covariance of each distribution are estimated from the training data. This approach is used widely in variable star classification (e.g., \cite{2007A&A...475.1159D}, \cite{2009A&A...506..519D}, and \cite{2010ApJ...713L.204B}). The advantage of this parametric approach is that it does not suffer from the curse of dimensionality. However, if the data do not really follow a mixture of multivariate Gaussian distributions, then predictions may be inaccurate: for example, we showed in \cite{2011rich} that using the same set of variable star features, a random forest classifier outperforms the Gaussian mixture classifier by a statistically significant margin. Gaussian mixture classifiers are also called {\bf Quadratic Discriminant Analysis} (QDA) classifiers (or {\bf Linear Discriminant Analysis}, LDA, if pooled covariance estimates are used). These names refer to the type of boundaries that are induced between classes in feature space. Indeed, many classification methods instead focus on locating the optimal class boundaries. {\bf Support Vector Machines} (SVMs) find the maximum-margin hyperplane to separate instances of each pair of classes. Kernelization of an SVM can easily be applied to find non-linear class boundaries. This approach has been used to classify variable stars in a number of recent papers \cite{2007A&A...475.1159D,2007arXiv0712.2898W,2011rich}. The {\bf K-nearest neighbors} (KNN) classifier predicts the class of each object by a vote among its K nearest neighbors in feature space, thereby implicitly estimating the class decision boundaries non-parametrically. Another popular class of methods is {\bf Classification Trees}, which perform recursive binary partitioning of the feature space to arrive at a set of pure, disjoint regions. Trees are powerful classifiers because they can capture complicated class boundaries, are robust to outliers, are immune to irrelevant features, and easily cope with missing feature values. Their drawback is that due to their hierarchical nature, they tend to have high variance with respect to the training set. Tree ensemble methods, such as {\bf Bagging}, {\bf Boosting}, and {\bf Random Forest} overcome this limitation by fitting many classification trees to bootstrapped versions of the training data and averaging their results.
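As a concrete illustration of the ensemble methods just described, here is a hedged scikit-learn sketch of probabilistic random-forest classification; the feature matrix, the class labels, and the probability threshold are all invented placeholders, not data or settings from the cited studies.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                      # placeholder features
y = rng.choice(["Ia", "II-P", "RRLyr"], size=300)   # placeholder labels

# An ensemble of de-correlated trees; each split considers a
# random subset of the features.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X, y)

# Class probabilities for new sources.  Sweeping a threshold on one
# column of predict_proba traces out a purity/efficiency (ROC) curve.
X_new = rng.normal(size=(5, 10))
proba = clf.predict_proba(X_new)                    # shape (5, n_classes)
ia_col = list(clf.classes_).index("Ia")
is_ia = proba[:, ia_col] > 0.5                      # example threshold
\end{verbatim}

The probabilistic output is the point: rather than a hard label, each source receives a class-probability vector from which samples of a desired purity can be selected.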
Boosting, which has been used by Newling et al.~\cite{2010arXiv1010.1005N} for SN classification and Richards et al.~\cite{2011rich} for variable star classification, iteratively reweights the training examples to increasingly focus on difficult-to-classify sources. Random Forest, which was used by multiple entrants in the Supernova Photometric Classification Challenge \cite{2010PASP..122.1415K} and by our group \cite{2011rich} for variable star classification, builds de-correlated trees by choosing a different random subset of features for each split in the tree-building process. In Richards et al.~\cite{2011rich}, we found that random forest was the optimal method for a multi-class variable star problem in terms of error rate (Figure \ref{fig:misclass}). In time-domain classification problems, we often have a well-established hierarchical taxonomy of classes, such as the variable star taxonomy in Figure \ref{fig:classhierarchy}. Incorporating a known class hierarchy into a classification engine is a research field that has received much recent attention in the machine learning literature (e.g., \cite{2010sill}). Several attempts for {\bf hierarchical classification} have been made in variable star problems. Debosscher et al.~\cite{2009A&A...506..519D} use a 2-stage Gaussian mixture classifier, first classifying binaries versus non-binaries, while Blomme et al.~\cite{2010ApJ...713L.204B} use a multi-stage hierarchical taxonomy. In Richards et al.~\cite{2011rich}, we use two methods for hierarchical classification, both using random forest and the taxonomy in Figure \ref{fig:classhierarchy}. Finally, no discussion of supervised classification would be complete without mentioning the hugely-popular method {\bf Artificial Neural Networks} (ANN). Though there are several versions of ANN, in their simplest form they are non-linear regression models that predict class as a non-linear function of linear combinations of the input features. Drawbacks to ANN are their computational difficulty (e.g., there are many local optima) and lack of interpretability, and for these reasons they have lost popularity in the statistics literature. However, they have enjoyed much success and widespread use in astronomy. In time-domain astronomy, ANNs have been used for variable star classification \cite{2005AJ....130...84F,2006A&A...446..395S,2007A&A...475.1159D} and by one team in the SN Classification Challenge (though that team's ANN entry fared much worse than their random forest entry, using the same set of features). \begin{figure} \begin{center} \includegraphics[angle=0,scale=.4]{misclassRates.ps} \end{center} \caption[Error rates for different classifiers]{Distribution of cross-validation error rates for several classifiers on a mixed data set of OGLE and Hipparcos sources (see \cite{2011rich}). The classifiers are divided based on the features on which they were trained; from left to right: (1) periodic plus non-periodic features, (2) the Lomb-Scargle features estimated by \cite{2007A&A...475.1159D}, (3) the Lomb-Scargle features estimated by \cite{2011rich}, and (4) only non-periodic features. In terms of mis-classification rate, the random forest classifier trained on all of the features performs best.
Classifiers considered are: classification trees (CART \& C4.5 variants), K-nearest neighbors (KNN), tree boosting (Boost), random forest (RF), pairwise versions of CART (CART.pw), random forest (RF.pw), and boosting (Boost.pw), pairwise SVM (SVM.pw), and two hierarchical random forest classifiers (HSC-RF, HMC-RF). All of the classifiers plotted, except single trees, achieve better error rates than the best classifier from \cite{2007A&A...475.1159D} (dashed line), who considered Bayesian Network, Gaussian Mixture, ANN, and SVM classifiers. \label{fig:misclass}} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=90,width=5.0in]{classhierarchy.ps} \end{center} \caption[Classification hierarchy of variable stars]{Variable star classification hierarchy for the problem considered in \cite{2011rich}. This structure can be used in a hierarchical classifier to yield improved results. The hierarchy is constructed based on knowledge of the physical processes and phenomenology of variable stars. At the top level, the sources split into three major categories: pulsating, eruptive, and multi-star systems. \label{fig:classhierarchy}} \end{figure} \subsubsection{Unsupervised \& Semi-Supervised Approaches} Unsupervised classification (statistical clustering) methods attempt to find $k$ clusters of sources in feature space. These methods do not rely on any previously-known class labels, and instead look for natural groupings in the data. After clusters are detected, labels or other significance can be affixed to them. In the time domain, these methods are useful for exploratory analyses, for instance to discover the number of statistically-distinct classes in the data or to find outliers and anomalous groups. In the absence of confident training labels, an unsupervised study is a powerful way to characterize the distributions in the data to ultimately determine labels and build a predictive model using supervised classification. In time-domain astronomy, the most popular clustering method is {\bf Gaussian Mixture Modeling}. This method fits a parametric mixture of Gaussian distributions to the data by maximum likelihood via the expectation-maximization (EM) algorithm. A penalized likelihood or Bayesian approach can be used to estimate the number of clusters present in the data. The {\tt Autoclass} method \cite{1996chee} is a Bayesian mixture model clustering method that was used by Eyer \& Blake~\cite{2005MNRAS.358...30E} to cluster ASAS variable stars. Sarro et al.~\cite{2009A&A...494..739S} use another variant of Gaussian Mixture Modeling to cluster a large database of variable stars. {\bf Self-Organizing Maps} (SOM) is another popular unsupervised learning method in time-domain astronomy. This method aims to map the high-dimensional feature vectors down to a discretized two-dimensional coordinate plane for easy visualization. SOM is the unsupervised analog of ANN that uses a neighborhood function to preserve the topology of the input feature space. This method has been used previously \cite{2004bret,2008AIPC.1082..201W} to obtain a two-dimensional parametrization of astronomical light curves. In those studies, SOM was performed prior to visual analysis of the labeled sources in this space. This class of approach, where available class labels are ignored in order to obtain a simple parametrization of the light-curve features that is subsequently used in a learning step, is called \emph{semi-supervised learning}.
The advantage of this technique is that, if the relevant class information is preserved by the unsupervised step, then supervised classification will be easier in the reduced space. Semi-supervised classification permeates the time-domain astronomy literature. In addition to the aforementioned SOM studies, other authors have used PCA \cite{2007arXiv0712.2898W, 2009A&A...507.1729D} and diffusion map \cite{2011richSN} to parametrize time-variable sources prior to classification. Of these studies, only Richards et al.~\cite{2011richSN} used a rigorous statistical classifier. \section{Future Challenges} For any finite collection of photons, our knowledge of the true flux is inherently uncertain. This basic phenomenological uncertainty belies an even greater uncertainty in the physical origin of what we think we are witnessing. As such, any classification scheme of a given variable or transient source must be inherently probabilistic in nature. We have outlined how---with an emerging influence of the machine-learning literature---we can gain traction on the probabilistic classification challenge. Calibrating (and validating) the output probabilities from machine-learning frameworks is still a nascent endeavor. Feature generation is obviously a key ingredient to classification and we have presented evidence that random forest classifiers are particularly adept at exploiting the features most relevant to classification while skirting the problem of large covariance between features. On the positive side, this frees us from having to create a small set of perfectly tuned features. However, how do we know when we have exhausted the range of reasonable feature space for classification? Our suspicion is that expert knowledge has already imbued the feature creation process with much of the insight implicitly needed for classification: we know, for instance, that the phase offset between the first and second most dominant periods can be a powerful way to distinguish two closely related classes of pulsational variables. There may be information-theoretic (and feature-agnostic) answers to this question, which might be attacked with some genetic programming framework. On statistical grounds, implicit in the feature generation procedure is the assumption that the distribution of features (and their covariances) on the training set will be similar to that of the sources we wish to classify. A gross mismatch of the characteristics of these two sets is likely to be a significant problem for the robustness of the classification statements. No study to date has looked at how we can use the knowledge gleaned from one survey and apply that to classification in another. For instance, if a classifier is blindly trained on one survey to classify objects from another, then it will achieve sub-optimal results by not considering differences in feature distribution between the surveys. Ideas from statistics, such as importance sampling, can be exploited to account for these differences. As these very basic algorithmic questions are addressed, the computational implications, using events from real surveys, will have to be understood. Are feature creation and the application of an existing machine-learned framework fast enough for a given data stream? How can loss functions be embedded in computational choices at the feature and the labeling levels? For streaming surveys, how often should the learning model be updated with newly classified examples from the survey itself? What are the roles of massively parallel hardware (e.g.
graphics processing units) in feature generation, learning, and classification? Astronomical datasets have always presented novel algorithmic, computational, and statistical challenges. With classification based on noisy and sometimes-spurious data, the forefront of all of these endeavors is already being stretched. As astronomers, expanding the machine-learning literature is a means to an end---if not just a way to keep our heads above water---building a vital set of tools for the exploration of the vast and mysterious dynamic universe. \bigskip {\it The authors acknowledge the generous support of a CDI grant (\#0941742) from the National Science Foundation. We thank Nat Butler and Dan Starr for helpful conversations. J.S.B. thanks those at the Universitat de Barcelona (Spain) for accommodating him in the astronomy department in late 2010, where much of this chapter was written.} \bibliographystyle{plain}
\section{Introduction} Hybridization has an important role in the evolution of new species \cite{Arnold,Mallet}. In phylogenetic analysis, there is an increasing interest in dealing with this issue \cite{Kubatko,JSO,YDN}. The usual phylogenetic tree is replaced by a phylogenetic network \cite{HRS}, and in a Bayesian approach, a prior for the network is needed \cite{JSO}. Very little is known about suitable prior distributions for the topology and node times for such networks. This paper represents an attempt to understand the situation better, and provides some justification for using an exponential distribution as a prior for the hybridization time. The particular biological motivation for this study originates from a theoretical question on the evolution of polyploidy in plants. Polyploids can arise from within a single species (autopolyploids) or via hybridization between two species (allopolyploids) in which the genomes of the two parental species are both present in the hybrid. For example, suppose it is known that a tetraploid species of interest resulted from a hybridization between a pair of diploid species which are ancestral to a clade of $n$ extant species. The following question arises: what can we say about the time of the hybridization event prior to a phylogenetic analysis of the genetic data? The same question can be applied to homoploid hybridization, in which there is a hybridization but no change in ploidy. However, we will refer to the allopolyploid case above, since the species produced by the Yule process and the hybrids can be conveniently called diploids and tetraploids. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{PolyploidTimes3.pdf} \caption{Main time characteristics of the conditional Yule tree for $n=4$ species with one hybridization: $T$ is the time to origin, $T_1,\dots,T_4$ are inter-speciation times, and $\tau_4$ is the time to hybridization.} \label{fig0} \end{figure} We assume a Yule model with the speciation rate $\lambda$ conditioned on $n$ extant species and model the hybridization events by a Poisson process with intensity $\beta$ giving the number of hybridizations per pair of coexisting diploid species per unit of time calibrated by $\lambda$. This means that if there are $k$ coexisting diploid species during a time period $t$, then the number of hybridizations $N_k(t)$ during this period has a Poisson distribution \begin{equation}\label{Poi} P(N_k(t)=j)={\left(\beta{k\choose2}t\right)^j\over j!}e^{-\beta{k\choose2}t}, \ j=0,1,2,\ldots \end{equation} with expectation \begin{equation}\label{Poim} E(N_k(t))=\beta{k\choose2}t. \end{equation} Counting time backwards, let $T_k$ stand for the time between two consecutive speciation events during which the Yule tree had $k$ branches, $k=2,\ldots,n$, see Fig. \ref{fig0}. In the conditioned Yule model setting (a random phylogeny for $n$ extant species under the assumption of an improper uniform prior for the time of origin \cite{GT}) the times $(T_2,\ldots,T_n)$ are independent and exponentially distributed random variables with parameters $(2\lambda,\ldots,n\lambda)$ respectively. Replacing $t$ by $T_k$ in formula \eqref{Poim} and writing $N_k=N_k(T_k)$ gives $$E(N_k)=\beta{k\choose2}E(T_k)=\gamma(k-1),$$ where the compound parameter $\gamma=\frac{\beta}{2\lambda}$ can be understood as a relative hybridization rate. Thus averaging over possible species trees results in the mean total number of hybridizations $N=N_2+\ldots+N_n$ being \begin{equation}\label{Poit} E(N)=\gamma {n\choose 2}.
\end{equation} The main finding of this paper is that the distribution of the time $\tau_n$ to a single hybridization event can be approximated by an exponential distribution with parameter $2\lambda$. This is obtained by showing (see Corollary \ref{cor}) that the $r$-th moment of $2\lambda\tau_n$ converges to $r!$, which is the $r$-th moment of an exponential distribution with parameter 1. Our simulations show that even for moderate values of $n$ and reasonable values of $\gamma$ the exponential approximation for the time to hybridization seems to be satisfactory. \section{The single hybridization condition}\label{Ssh} Given that there was exactly one hybridization event, $N=1$, we denote by $\tau_n$ the time to hybridization counted backwards from the time of observation. If $N=0$ or $N\ge2$, we put $\tau_n=\infty$. In this section we show, among other things, that the single hybridization condition has probability \begin{equation}\label{prob} P(\tau_n<\infty)=G_n\prod_{i=1}^{n-1}\frac{1}{1 + i\gamma}, \end{equation} where $G_n=\sum_{k=1}^{n-1} \frac{k\gamma}{1 +k\gamma}$. Observe that if $\tau_n<\infty$, then for some $\kappa_n\in\{2,\ldots,n\}$ hybridization occurred during the period when there were $\kappa_n$ ancestral species. \begin{lemma}\label{le1} For any $2\le k\le n<\infty$ \begin{align*} P(\kappa_n=k|\tau_n<\infty)=G_n^{-1} \frac{(k-1)\gamma}{1 +(k-1)\gamma}. \end{align*} \end{lemma} {\sc Proof of Lemma \ref{le1}.} Replacing $t$ by $T_k$ in the right hand side of \eqref{Poi} yields \begin{align*} P(N_k=0|T_2,\ldots,T_n)&=e^{-\beta{k\choose2}T_k},\\ P(N_k=1|T_2,\ldots,T_n)&=\beta{k\choose2}T_ke^{-\beta{k\choose2}T_k}, \end{align*} and since $$\{\kappa_n=k\}=\{N_n=0,\ldots,N_{k+1}=0,N_k=1,N_{k-1}=0,\ldots,N_{2}=0\},$$ we obtain \begin{align}\label{kap} P(\kappa_n=k|T_2,\ldots,T_n)&=\beta{k\choose2}T_k\prod_{i=2}^ne^{-\beta{i\choose2}T_i}, \end{align} and therefore \begin{align}\label{kax} P(\kappa_n=k)&={\beta{k\choose2}\over \lambda k+\beta{k\choose2}}\prod_{i=2}^n{\lambda i\over \lambda i+\beta{i\choose2}}=\frac{(k-1)\gamma}{1 +(k-1)\gamma}\prod_{i=1}^{n-1}\frac{1}{1 + i\gamma}. \end{align} Summing over $k=2,\ldots,n$ we arrive at \eqref{prob}; the assertion of Lemma \ref{le1} then follows by dividing the last expression by \eqref{prob}. \\ If we assume that $n$ diploids and a single hybridization have been observed, then we can apply two basic methods of estimation for the plausible value of the key parameter $\gamma$. The method of moments estimate $\tilde\gamma_n=1/{n\choose2}$ is immediately obtained from \eqref{Poit} by substituting the observed value $N=1$ for $E(N)$. We can also treat the expression for $P(\tau_n<\infty)$ in \eqref{prob} as a likelihood function for $\gamma$ \[L(\gamma)=\prod_{i=1}^{n-1}\frac{1}{1 + i\gamma}\sum_{k=1}^{n-1} \frac{k\gamma}{1 +k\gamma}\] and from it find a maximum likelihood estimate $\hat\gamma_n$. It turns out that for large $n$ the two estimates are close in value \begin{align}\label{mle} \hat\gamma_n\ge\frac{2}{n(n-1)} \mbox{ for } n\ge2, \mbox{ and } \hat\gamma_n\le \frac{2}{n(n-3)} \mbox{ for } n \geq 4.
\end{align} To show \eqref{mle} we observe first that the equation $L'(\hat\gamma)=0$ for $\hat\gamma_n$ takes the form \begin{align}\label{ml} A(\hat\gamma) = \hat\gamma B(\hat\gamma)^2, \end{align} where $$A(x) = \sum_{k=1}^{n-1} \frac{k}{(1 +kx)^2} \mbox{\ \ and\ \ } B(x) = \sum_{k=1}^{n-1} \frac{k}{1 +kx}.$$ By the Cauchy-Schwarz inequality we have \begin{align*} B(x)^2 &= \Big(\sum_{k=1}^{n-1}\sum_{i=1}^{k} \frac{1}{1 +kx}1_{\{i\ge1\}}\Big)^2\\&\le\sum_{k=1}^{n-1}\sum_{i=1}^{k} \Big(\frac{1}{1 +kx}1_{\{i\ge1\}}\Big)^2\times \sum_{k=1}^{n-1}\sum_{i=1}^{k} \Big(1_{\{i\ge1\}}\Big)^2 =A(x){n(n-1)\over2}, \end{align*} which together with \eqref{ml} yields $1\le \hat\gamma_n \frac{n(n-1)}{2}$. On the other hand, since for $x\ge0$ $$B(x)\ge A(x)\quad \text{and}\quad (1 +nx)B(x) \geq \frac{{n(n-1)}}{2},$$ it follows from \eqref{ml} that $1+n\hat\gamma_n \ge\hat\gamma_n \frac{n(n-1)}{2}$ and $1\ge\hat\gamma_n \frac{n(n-3)}{2}$. \section{Exact formula for any moment of $\tau_n$} \begin{lemma}\label{rth} For any $r\ge1$ \begin{align*} E\left( \tau_n^r | \tau_n<\infty \right) &= G_n^{-1} \frac{r!}{\lambda^r} \sum_{k=1}^{n-1} \frac{k\gamma}{1 + k\gamma} \sum_{i_1=k}^{n-1}\sum_{i_2=i_1}^{n-1}\dots\sum_{i_r=i_{r-1}}^{n-1}d_{i_1}\cdots d_{i_r}, \end{align*} where $d_j=(1+j)^{-1}(1+\gamma j)^{-1}$. \end{lemma} {\sc Proof of Lemma \ref{rth}.} Under the Poisson model for the flow of hybridization events \begin{align} \tau_n&=X+\sum_{j=\kappa_n+1}^n T_j, \label{repr} \end{align} where $X$ is a random variable uniformly distributed on $[0, T_{\kappa_n}]$. Thus \begin{align*} E\left( \tau_n^r | \kappa_n=k \right) &= E\left( \Big(X + \sum_{j=k+1}^{n} T_{j}\Big)^r \bigg| \kappa_n=k \right)\\ &=E\left( \sum_{\alpha} \frac{r!}{\alpha_k! \cdots \alpha_{n}!} X^{\alpha_k}\prod_{i=k+1}^n T_{i}^{\alpha_{i}} \bigg| \kappa_n=k \right), \end{align*} where the sum is over all vectors $\alpha = (\alpha_k, \dots \alpha_{n})$ of non-negative integers with sum $r$. Next we take such an $\alpha$ and calculate the expectation of \begin{equation*} M_{k,\alpha}= X^{\alpha_k} \prod_{i=k+1}^n T_{i}^{\alpha_{i}} \cdot 1_{\{\kappa_n=k \}} . \end{equation*} We have in view of \eqref{kap} \begin{align*} E(M_{k,\alpha}) &= E \left( \left(T_k^{-1}\int_0^{T_k}x^{\alpha_k}dx\right)\times \prod_{i=k+1}^n T_{i}^{\alpha_{i}} \times \beta{\binom{k}{2}} T_k \prod_{j=2}^n e^{-\beta{\binom{j}{2}}T_j} \right) \\ &= \prod_{j=2}^{k-1} E \Big( e^{-\beta{\binom{j}{2}}T_j} \Big) \beta{\binom{k}{2}}E \Big( { T_k^{1+\alpha_k}e^{-\beta{\binom{k}{2}}T_k}\over 1+\alpha_k}\Big) \prod_{i=k+1}^n E \Big(T_i^{\alpha_{i}}e^{-\beta{\binom{i}{2}}T_i}\Big) \\ &= \prod_{j=2}^{k-1} \frac{1}{1 + (j-1)\gamma} \times \gamma (k-1) \frac{ \alpha_k! d_{k-1}^{\alpha_k} \lambda^{-\alpha_k} }{ (1 + (k-1)\gamma)^2 } \times\prod_{i=k+1}^n \frac{\alpha_{i}! d_{i-1}^{\alpha_{i}} \lambda^{-\alpha_{i}}}{1 + (i-1)\gamma}. \end{align*} Recalling \eqref{kax} we deduce \begin{align*} E(M_{k,\alpha}) &=\frac{(k-1)\gamma}{1 +(k-1)\gamma}\Big( \prod_{i=1}^{n-1}\frac{1}{1 + i\gamma}\Big) \Big(\lambda^{-r} \prod_{i=k}^n \alpha_{i}! d_{i-1}^{\alpha_{i}}\Big)\\ &= P(\kappa_n=k) \lambda^{-r} \prod_{i=k}^n \alpha_{i}! d_{i-1}^{\alpha_{i}}, \end{align*} which implies \begin{align*} E\left( \tau_n^r | \kappa_n=k \right) &= \frac{r!}{\lambda^r} \sum_{\alpha} \prod_{i=k}^n d_{i-1}^{\alpha_{i}} = \frac{r!}{\lambda^r} \!\!\!\!\!\!\!\!\!\! \sum_{\substack{i_1, \dots, i_r \\ k-1 \leq i_1 \dots \leq i_r \leq n-1}} \prod_{j=1}^{r} d_{i_j}.
\end{align*} Now to finish the proof of Lemma \ref{rth} it remains to apply Lemma \ref{le1}.\\ In particular, for $r=1$ and $r=2$ Lemma \ref{rth} gives \begin{align} m_n:=\E{\tau_n|\tau_{n}<\infty}&=\lambda^{-1} G_n^{-1}\sum_{k=1}^{n-1} \frac{k\gamma}{1 +k\gamma}\sum_{j=k}^{n-1}d_j, \label{exp} \end{align} and \begin{align*} \E{\tau_n^2|\tau_{n}<\infty}&=2\lambda^{-2}G_n^{-1}\sum_{k=1}^{n-1} \frac{k\gamma}{1 +k\gamma}\sum_{j=k}^{n-1} \sum_{l=j}^{n-1}d_jd_l \nonumber\\ &=\lambda^{-2}G_n^{-1}\sum_{k=1}^{n-1} \frac{k\gamma}{1 +k\gamma}\Big\{\Big(\sum_{j=k}^{n-1} d_j\Big)^2+\sum_{j=k}^{n-1} d_j^2\Big\}, \end{align*} implying \begin{align} \Var{\tau_n|\tau_{n}<\infty}&=G_n^{-1}\sum_{k=1}^{n-1} \frac{k\gamma}{1 +k\gamma}\Big\{\Big(\lambda^{-1}\sum_{j=k}^{n-1} d_j-m_n\Big)^2+\lambda^{-2}\sum_{j=k}^{n-1} d_j^2\Big\}. \label{var} \end{align} Here we have used the following observation: in terms of $Y_n:=\lambda^{-1}\sum_{j=\kappa_n-1}^{n-1} d_j$ we have $m_n=\E{Y_n}$ and \begin{align*} \Var{\tau_n|\tau_{n}<\infty}&=\E{Y_n^2}-m_n^2+\lambda^{-2}G_n^{-1}\sum_{k=1}^{n-1} \frac{k\gamma}{1 +k\gamma}\sum_{j=k}^{n-1} d_j^2\\ &=\E{(Y_n-m_n)^2}+\lambda^{-2}G_n^{-1}\sum_{k=1}^{n-1} \frac{k\gamma}{1 +k\gamma}\sum_{j=k}^{n-1} d_j^2.\end{align*} \section{Convergence to an exponential distribution} For $2\le k\le n<\infty$ and any natural number $r$ define $\eta_{\gamma,k,n}$ and $\zeta_{\gamma,n,r}$ by \begin{align*} P(\kappa_n\le k|\tau_n<\infty)&= {k(k-1)\over n(n-1)}(1+\eta_{\gamma,k,n}),\\ \E{(2\lambda\tau_{n})^r\vert \tau_{n}<\infty} &=r!(1-\zeta_{\gamma,n,r}). \end{align*} \begin{theorem}\label{the} For $2\le k\le n<\infty$ and $r\ge1$ the following bounds are valid \begin{align} -k\gamma&\le\eta_{\gamma,k,n}\le n\gamma,\label{wk}\\ \label{wok} 0&\le\zeta_{\gamma,n,r}\le(1+(r+1)n)\gamma. \end{align} \end{theorem} The discussion at the end of Section \ref{Ssh} concerning \eqref{mle} showed that it is important to consider the values of $\gamma$ close to ${2\over n(n-1)}$. In Figure \ref{fig10} we plotted the upper bounds in \eqref{wok} with $\gamma={2\over n(n-1)}$ as functions of $n$ for the first three moments $r=1,2,3$. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{Figrn10.pdf} \caption{The upper bound in \eqref{wok} for $\gamma={2\over n(n-1)}$ equals ${2\over n(n-1)}+{2(r+1)\over n-1}$. This upper bound is illustrated by plotting three functions of $n$ for the first three moments $r=1,2,3$. } \label{fig10} \end{figure} \begin{corollary}\label{cor} Uniformly over all $(k,n)$ such that $2\le k\le n<\infty$ \begin{align}\label{wk1} P(\kappa_n= k|\tau_n<\infty)\to {2(k-1)\over n(n-1)},\quad n\gamma\to0. \end{align} Moreover, as $n\gamma \rightarrow 0$ for any fixed natural number $r$ \begin{equation}\label{wok1} \E{(2\lambda\tau_{n})^r\vert \tau_{n}<\infty} \to r! \end{equation} uniformly over $\lambda\in(0,\infty)$. \end{corollary} Corollary \ref{cor} is a straightforward consequence of Theorem \ref{the} proved next. Note that in the case $\gamma={2\over n(n-1)}$ the condition $n\gamma \rightarrow 0$ is equivalent to $n\to\infty$. {\sc Proof of Theorem \ref{the}}. According to Lemma \ref{le1} \begin{align*} P(\kappa_n\le k|\tau_n<\infty)=G_k/G_n, \end{align*} and \eqref{wk} follows from \begin{equation}\label{gin} {1\over 1+n\gamma}\gamma \binom{n}{2}\le G_{n}\le{1\over 1+\gamma}\gamma \binom{n}{2}.
\end{equation} To prove \eqref{wok} observe first that \begin{align*} \sum\limits_{k=1}^{n-1}k\sum\limits_{i_{1}=k}^{n-1}\ldots&\sum\limits_{i_{r}=i_{r-1}}^{n-1}\left(\frac{1}{1+i_{1}}\cdots\frac{1}{1+i_{r}}\right)\\ &=\sum\limits_{i_{r}=1}^{n-1} \sum\limits_{i_{r-1}=1}^{i_{r}}\ldots\sum\limits_{i_{1}=1}^{i_{2}}\left(\frac{1}{1+i_{1}}\cdots\frac{1}{1+i_{r}}\right)\sum\limits_{k=1}^{i_{1}}k\\ &=2^{-1}\sum\limits_{i_{r}=1}^{n-1} \ldots\sum\limits_{i_{2}=1}^{i_{3}}\left(\frac{1}{1+i_{2}}\cdots\frac{1}{1+i_{r}}\right)\sum\limits_{i_{1}=1}^{i_{2}}i_{1}\\ & = 2^{-r}\binom{n}{2}. \end{align*} Clearly, for any $1\le k\le i_1\le i_2\le\dots\le i_r\le n-1$ we have \begin{align*} {\gamma\over(1+n\gamma )^{r+1}}&{k\over (1+i_1)\cdots(1+i_r)} \\ &\le {k\gamma\over 1+k\gamma}d_{i_1}\cdots d_{i_r}\le{\gamma\over(1+k\gamma)^{r+1}}{k\over (1+i_1)\cdots(1+i_r)}. \end{align*} Thus Lemma \ref{rth} yields \begin{align*} r!{\gamma \binom{n}{2}\over G_{n}(1+n\gamma )^{r+1}}\le\E{(2\lambda\tau_{n})^r\vert \tau_{n}<\infty} \le r!{\gamma \binom{n}{2}\over G_{n}}, \end{align*} and applying \eqref{gin} we get inequalities \begin{align*} {r!\over (1+\gamma)(1+n\gamma )^{r+1}}\le\E{(2\lambda\tau_{n})^r\vert \tau_{n}<\infty} \le r! \end{align*} resulting in \eqref{wok}. \section{Simulation results and discussion} \begin{figure} \centering \includegraphics[width=0.35\textwidth]{HybridMean.pdf} \includegraphics[width=0.35\textwidth]{HybridVar.pdf} \caption{Conditional mean and variance of $\tau_n$ as functions \eqref{exp} and \eqref{var} of the number $n$ of candidate species. Simulations with $\lambda=1$ and $\beta={4\over n(n-1)}$ are compared to the analytical predictions.} \label{fig1} \end{figure} We have checked and illustrated our analytical results using simulations. Our simulation algorithm is based on the following steps to obtain a single hybridization time: \begin{description} \item[Step 1] For ($k=1$ to $n$): $T_k \leftarrow $ sample from the exponential distribution with rate $k \lambda$. \item[Step 2] For ($k=1$ to $n$): $r_k \leftarrow T_k \beta k(k-1) / 2$. \item[Step 3] $R \leftarrow \sum_{k=2}^n r_k$. \item[Step 4] $h \leftarrow $ sample from the Poisson distribution with mean $R$. \item[Step 5] If ($h == 1$) then sample $k \in \{2,3,\dots,n\}$ with probability proportional to $r_k$, and then the hybridization time uniformly in the $k$th interval. \end{description} A short code transcription of this procedure is given below. In Figure \ref{fig1} the mean and variance of $\tau_n$ as functions \eqref{exp} and \eqref{var} of the number $n$ of candidate species are drawn against the values obtained from simulations. Here $\lambda=1$ and $\gamma=\frac{\beta}{2}={2\over n(n-1)}$ with $n$ ranging from 2 to 200. Figure \ref{fig2} shows simulated conditional distributions of $\tau_n$. We can see how the observed distribution profile approaches the exponential curve as $n$ increases from 2 to 20. The Yule model for the unknown species tree is not very realistic but it is a very convenient tool for phylogenetic calculations; see, for example, \cite{BS}. Therefore, the results presented here should be viewed as just a starting point for the issues raised in this paper. More biologically relevant extensions of the model studied here should take into account the possibility of hybridization between a pair of ancestral species of which either one or both have no direct descendants at present.
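As promised, here is a minimal NumPy transcription of Steps 1--5 above (our own sketch, not the original simulation code); the conditional time $\tau_n$ is assembled from the representation $\tau_n = X + \sum_{j=\kappa_n+1}^{n} T_j$ of \eqref{repr}, and the parameter values match those used in the figures.

\begin{verbatim}
import numpy as np

def single_hybridization_time(n, lam, beta, rng):
    """One draw of tau_n given N == 1; returns None when N != 1."""
    k = np.arange(1, n + 1)
    T = rng.exponential(scale=1.0 / (k * lam))     # Step 1
    r = T * beta * k * (k - 1) / 2.0               # Step 2
    R = r[1:].sum()                                # Step 3 (k = 2,...,n)
    h = rng.poisson(R)                             # Step 4
    if h != 1:
        return None
    # Step 5: pick the interval proportionally to r_k, then place the
    # event uniformly in it; counting backwards, tau_n is the uniform
    # offset plus the inter-speciation times T_{k+1}, ..., T_n.
    kk = rng.choice(k[1:], p=r[1:] / R)
    return rng.uniform(0.0, T[kk - 1]) + T[kk:].sum()

rng = np.random.default_rng(1)
n, lam = 20, 1.0
beta = 4.0 / (n * (n - 1))       # so that gamma = beta/(2 lam) = 2/(n(n-1))
draws = (single_hybridization_time(n, lam, beta, rng)
         for _ in range(100000))
taus = np.array([d for d in draws if d is not None])
print(np.mean(2 * lam * taus))   # close to 1 for large n, as predicted
\end{verbatim}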
To include extinct species in the analysis one can use the so-called conditioned {\it birth-death processes} developed in \cite{AP,GT} and successfully used as species tree models for various purposes, see for example \cite{SB}. An important additional parameter arising in this more general setting is the extinction rate $\mu$ for the ancestral species. A crucial biological feature missing in the classical birth-death processes modeling species trees is {\it geographical structure}. Obviously, the probability of hybridization depends on geographical proximity. This could presumably be taken into account by combining our model with some statistical biogeography model \cite{LRDS,R}. Another desirable feature missing in the current analysis is a {\it decaying hybridization rate}: the more divergent two species become, the less probable hybridization between them will be. Of course, one should not be limited to a single hybridization event, but rather allow for {\it multiple hybridizations}. Furthermore, hybrids can speciate via ordinary speciation, and hybridizations between hybrids also occur, so these processes should be included in a general model. \begin{figure} \centering \includegraphics[width=0.32\textwidth]{HybridTimesHist2.pdf} \includegraphics[width=0.32\textwidth]{HybridTimesHist3.pdf} \includegraphics[width=0.32\textwidth]{HybridTimesHist5.pdf} \includegraphics[width=0.32\textwidth]{HybridTimesHist10.pdf} \includegraphics[width=0.32\textwidth]{HybridTimesHist20.pdf} \caption{Histograms for $\tau_n$ conditional on a single hybridization event for the number $n$ of candidate species. Left to right: $n=2,3,5,10,20$. Simulations with $\lambda=1$ and $\beta={4\over n(n-1)}$.} \label{fig2} \end{figure} \section*{Acknowledgments} KB and GJ were supported by the Centre for Theoretical Biology at the University of Gothenburg. BO and SS were supported by Swedish Research Council grants 2009-5202 and 621-2010-5623. KB was supported by Stiftelsen f\"or Vetenskaplig Forskning och Utbildning i Matematik, the Knut and Alice Wallenbergs travel fund, the Paul and Marie Berghaus fund, the Royal Swedish Academy of Sciences, and the Wilhelm and Martina Lundgrens research fund.
\section{Introduction}\label{sec:intro} \input{sections/1-intro} \section{Basic Concepts and Related Works}\label{sec:bg_ps} \input{sections/2-background_problem_def} \section{Trajectory Co-clustering Approach}\label{sec:proposal} \input{sections/4-proposal} \section{Experimental Evaluation}\label{sec:experiment} \input{sections/5-experiments} \section{Conclusion}\label{sec:conclusion} \input{sections/6-conclusion} \section*{Acknowledgments}\label{sec:tkxTo} This work was financed by the Brazilian agencies Coordenação de Aperfeiçoamento de Pessoal de Nivel Superior - CAPES (Finance code 001), Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq, and supported by the MASTER project that received funding from the European Commission’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement N. 777695. \subsection{Datasets} \begin{table}[ht] \centering \caption{Datasets description (averages reported in the format avg$\pm$std).}\label{tab:datasets_ss-ococlus} \resizebox{.475\textwidth}{!} \begin{tabular}{l|c|c|c|c|c} \hline \textbf{Dataset} & \scalebox{.85}{\textbf{\makecell{Num. of\\trajectories}}} & \scalebox{.85}{\textbf{\makecell{Num. of\\unique Elements}}} & \scalebox{.85}{\textbf{\makecell{Num. of \\users}}} & \scalebox{.85}{\textbf{\makecell{Avg. \# of traj.\\per user}}} & \scalebox{.85}{\textbf{\makecell{Avg. traj.\\length per user}}} \\ \hline FS-NY$_{top193}$ & 3079 & 491 & 193 & 15.95$\pm$6.33 & 20.52$\pm$8.06 \\ \hline FS-NY$_{top81}$ & 1749 & 437 & 81 & 21.59$\pm$6.09 & 22.81$\pm$9.66 \\ \hline FS-NY$_{top10}$ & 352 & 310 & 10 & 35.2$\pm$4.4 & 32.23$\pm$15.46 \\ \hline \end{tabular} } \end{table} \begin{table*}[ht] \centering \caption{Co-clustering result for each dataset over different candidate references using $z = 1$.
Average and coefficient of variation are in the form AVG[CV\%] and we assume the absolute value of CV.} \label{tab:exp:main_results} \resizebox{.8\textwidth}{!} \begin{tabular}{cccccccccc} \toprule \multirow{1}{*}{Dataset} & \makecell{Relevance\\reference} & \makecell{Number of\\co-clusters} & \makecell{AVG number\\of trajectories} & \makecell{AVG cost\\} & \makecell{Number of\\elements} & \makecell{Sequence\\length} & \makecell{AVG number\\of users} & \makecell{Overall\\entropy} \\ \hline \multirow{3}{*}{\rotatebox[origin=c]{0}{FS-NY$_{top193}$}} & Trajectory & 69 & 67.46 [60] & -41.3 [102.1] & 32 & 2.09 [13.39] & 24.71 [42.65] & 6.31 \\ & Cost$^\blacktriangle$ & 49 & 75.04 [61.55] & -55.37 [78.49] & 28 & 2.29 [26.63] & 25.65 [48.49] & 5.06 \\ & Both & 77 & 62.88 [64.67] & -40.3 [99.42] & 36 & 2.21 [23.52] & 22.96 [49.65] & 6.45 \\ \hline \hline \multirow{3}{*}{\rotatebox[origin=c]{0}{FS-NY$_{top81}$}} & Trajectory & 53 & 54.08 [60.5] & -34.53 [99.32] & 25 & 2.11 [14.99] & 16.68 [39.44] & 5.89 \\ & Cost$^\blacktriangle$ & 34 & 61.97 [63.88] & -47.97 [77.11] & 34 & 2.32 [29] & 15.35 [49.58] & 4.16 \\ & Both & 57 & 52.21 [63.54] & -34.35 [96.48] & 29 & 2.21 [25.05] & 15.77 [45.68] & 5.94 \\ \hline \hline \multirow{3}{*}{\rotatebox[origin=c]{0}{FS-NY$_{top10}$}} & Trajectory & 22 & 22.59 [34.83] & -19.68 [50.15] & 21 & 2.5 [31.2] & 2.18 [47.24] & 1.02 \\ & Cost$^\blacktriangle$ & 27 & 18.22 [53.01] & -21.89 [39.74] & 26 & 3.22 [34.16] & 1.70 [61.76] & 0.83 \\ & Both & 33 & 18.39 [48.12] & -19.42 [49.07] & 28 & 3 [36.66] & 1.79 [56.42] & 1.02 \\ \bottomrule \multicolumn{7}{l}{Cost$^\blacktriangle$: It uses the other corresponding side of the $z$-score, i.e., -1 when $z$=1 and vice-versa.} \end{tabular} } \end{table*} SS-OCoClus is evaluated over three datasets extracted from the Foursquare NY dataset~\cite{Yang2014FS-NY}, as detailed in~\autoref{tab:datasets_ss-ococlus}. It shows the total number of trajectories, the number of unique \emph{element} values, the number of users, and the average number and length of trajectories per user. This dataset contains check-ins of users collected from April 2008 to October 2010. Furthermore, it is composed of weekly trajectories of check-ins for each user, built from the whole set of check-ins. The Foursquare API\footnote{https://developer.foursquare.com/} provided the semantic information related to the POI: category (\textit{root-type}), subcategory (\textit{type}), place name (\textit{poi}), and \textit{price} (a numeric classification); the Weather Wunderground API\footnote{https://www.wunderground.com/weather/api/} provided the \textit{weather} conditions. We select the POI subcategory (\textit{type}) to generate the trajectory sequences for all three datasets; this dimension contains 491 unique \emph{elements}. It is an intermediate dimension of semantic POI information, i.e., it is not as specific as the place name dimension and not as general as the category dimension. \subsection{Result and Analysis} We consider that an \emph{element} is frequent if its frequency is equal to or higher than the average.
Therefore, the numbers of unique \emph{elements} considered as frequent are, respectively, 58, 88, and 93 for the datasets FS-NY$_{top10}$, FS-NY$_{top81}$, and FS-NY$_{top193}$.~\autoref{tab:exp:main_results} presents the co-clustering results considering eight characteristics: (i) the relevance reference; (ii) the number of final co-clusters regarding the candidate reference; (iii) the average number of trajectories; (iv) the average cost value; (v) the number of unique clustered \emph{elements}; (vi) the average sequence length; (vii) the average number of unique users in the co-clusters; and (viii) the overall entropy of the set of co-clusters. The number of trajectories and the cost value are inversely proportional. So, we combine both using the other corresponding side of the $z$-curve when the reference is the cost value. For example, if $z$ equals 1, it is set to -1 for the cost value reference and vice-versa. Thus,~\autoref{tab:exp:main_results} shows the trajectory co-clustering results over different candidate references using $z = 1$. In FS-NY$_{top193}$, the numbers of co-clusters are 69, 49, and 77 for trajectory, cost, and both, respectively. For FS-NY$_{top81}$, the numbers of co-clusters are 53, 34, and 57 for trajectory, cost, and both, respectively. In FS-NY$_{top10}$, the numbers of co-clusters are 22, 27, and 33 for trajectory, cost, and both, respectively. Using the cost as reference identifies the smallest number of co-clusters in FS-NY$_{top193}$ and FS-NY$_{top81}$. It means that these co-clusters have the highest balance between the number of trajectories and the sequence length. Regarding the clustered \emph{elements}, we may notice that less than 50\% of the \emph{elements} are relevant for identifying frequent sequences and forming semantic co-clusters. For example, SS-OCoClus uses 93 \emph{elements} to identify the co-clusters in FS-NY$_{top193}$. However, the maximum number of unique \emph{elements} in the final result was 36 when pruning the candidates using both trajectories and the cost as reference. Therefore, 36 \emph{elements} are relevant and contribute to identifying 77 semantic co-clusters. Furthermore, in the FS-NY$_{top81}$ dataset, considering the trajectory, cost, and both references, the numbers of unique clustered \emph{elements} are 25, 34, and 29, respectively. In the same way, for the FS-NY$_{top10}$ dataset, the numbers of unique clustered \emph{elements} are 21, 26, and 28, respectively. It can be seen in \autoref{tab:exp:main_results} that the majority of the average sequence length values are below 3 \emph{elements}; only in the FS-NY$_{top10}$ dataset is an average equal to or greater than 3 identified. Using the cost as the co-cluster reference leads to the discovery of co-clusters with long sequences without discarding many trajectories. Besides, in this dataset, we may infer that long frequent mobility patterns are not common. In addition, the coefficient of variation of the sequence length between co-clusters considering all datasets spans from 13.39\% up to 36.66\%. Such variation within the clustering shows that the co-clusters are heterogeneous regarding the sequence length. Another observation is the high heterogeneity between co-clusters regarding the average number of unique users per cluster. It spans from 39.44\% to 61.76\%, where the minimum value occurs in FS-NY$_{top81}$ considering 34 clusters, while the maximum value occurs in FS-NY$_{top10}$ with 27 clusters.
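As a side note, the AVG[CV\%] entries reported in \autoref{tab:exp:main_results} can be reproduced from per-cluster values as in the following minimal sketch; the helper name, the sample values, and the use of the population standard deviation are our own assumptions.

\begin{verbatim}
import numpy as np

def avg_cv(values):
    """Format per-cluster values as AVG[CV%], taking |CV|.

    Assumes the population standard deviation; the paper does not
    state which convention is used.
    """
    values = np.asarray(values, dtype=float)
    avg = values.mean()
    cv = abs(values.std() / avg) * 100.0
    return f"{avg:.2f} [{cv:.2f}]"

# Invented per-cluster trajectory counts for illustration.
print(avg_cv([60, 75, 90, 45]))
\end{verbatim}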
\subsection{Basic Concepts} \begin{definition}{\textbf{Semantic Trajectory} ST:} A semantic trajectory ST is a time ordered sequence of elements $ST = \langle e_{1},e_{2},\ldots e_{n}\rangle$, where each element $e_{i}$ has a set of attributes \{$d_{1}$,$d_{2}$,\ldots,$d_{l}$\} characterizing it according to $l$-dimensions. \end{definition} Semantics is any type of information associated with mobility data other than spatial location and time. Several semantic attributes can be added to the \emph{elements}, such as the activity, the category of the POI, the weather condition, etc. A dataset with $N$ sequences can be represented as a matrix $D$ with $N$ rows and $M$ columns, where each entry of $D$ represents an \textit{element} of a sequence. An \textit{element} $t_{ij}$ of $D$ ($1\leq i \leq N$ and $1\leq j \leq M$) is equal to the number of times that the $j$-th \textit{element} occurred in the $i$-th sequence. Co-clustering is the grouping task of finding $K$ co-clusters in $D$ where each co-cluster is formed by a subset of rows and columns~\citep{Madeira2004biclustering}. A subset of rows $I$ can be represented as a binary vector of length $N$, where $I_{i} = 1$ indicates that the $i$-th row is present in $I$. Similar to that, a subset of columns $J$ with $J_{j} = 1$ indicates that the $j$-th column is present in $J$. More formally, a co-cluster can be defined as follows: \begin{definition}{\textbf{Co-cluster} C:}\label{def:co-cluster} Let $D$ be a matrix, $I$ be the subset of rows, $J$ be the subset of columns; a co-cluster $C$ is defined as $C = \langle I,J \rangle$. The entries $c_{ij}$ of co-cluster $C$ are formed by the outer product of its subsets $I$ and $J$. Thus, a co-cluster $C$ can represent a submatrix of $D$. \end{definition} We also make use of the overlap coefficient~\cite{Vijaymeena2016survey}. Formally, it is defined as follows: \begin{definition}{\textbf{Overlap Coefficient} Oc:}\label{eq:over_coef} Given two sets $A$ and $B$, the overlap coefficient is defined as the size of the intersection of $A$ and $B$ over the size of the smaller set between $A$ and $B$. \end{definition} \subsection{Related Works} Sankaranarayanan and Davis~\cite{Sankaranarayanan2010learning} proposed a mutual information co-clustering method that simultaneously clusters the start and end locations of pedestrian trajectories. This approach is limited to paired analysis, since it clusters the start and end locations with similar probability in the matrix. Shaham~\cite{Shaham2015co} proposed a co-clustering method to find co-clusters with a fuzzy strategy that groups objects with a lagged (delay) pattern. This approach cannot deal with semantic data and clusters the attributes regardless of the visited order, thereby overlooking hidden mobility patterns. Mohamed et al.~\cite{Mohamed2016co} used a co-clustering method to group trajectories and road segments in the context of the road network. However, this approach focuses on identifying similar trajectories with common road segments regardless of the visited order in the sequence. Han et al.~\cite{Han2017linking} proposed a framework designed for criminal investigations by employing a spectral co-clustering approach on access trajectories in social networks. This approach does not group the trajectories, and it uses the spatial and temporal dimensions independently to identify groups of user IDs. Besides that, it does not deal with semantics or the visited order in the trajectories, and it constrains the temporal dimension using a time window.
Arian et al.~\cite{Arian2017characterizing} used a co-clustering method to group users and activities from the origin-destination locations of moving objects. Since it groups users instead of trajectories, the approach cannot find movement patterns in the trajectories. Besides, it groups the \emph{elements} regardless of the visited order in the trajectories, thereby misinterpreting the mobility patterns. Wang et al.~\cite{Wang2017vessel} used a sparse bilinear decomposition and a sparse multi-linear decomposition to find co-clusters in vessel trajectories. The multi-linear decomposition using tensors groups the regions, time-slices, and ship types in the set of trajectories. Besides, this method does not consider the sequence in the trajectories, and it constrains the temporal dimension. Hu et al.~\cite{Hu2019nonnegative} proposed a co-clustering method based on Nonnegative Matrix Tri-factorization for grouping users and POIs. This method focuses on grouping the POIs regardless of the visited order in the trajectory. Besides, it constrains the time dimension, which can misrepresent the movement patterns. \subsection{Main definitions}\label{sec:met_definition} SS-OCoClus forms candidate co-clusters by testing each \emph{element} incrementally. It starts with the most frequent \emph{elements} and then expands the sequence size by adding one frequent \emph{element} at a time. More formally, a candidate co-cluster is defined as follows: \begin{definition}{\textbf{Candidate Co-cluster} CC:}\label{def:cand_seq} Let $seq$ be a sequence of \textit{elements}, $EM$ be the elements mapping inverted index, and $TM$ be the trajectories mapping inverted index; a candidate co-cluster $CC$ is a tuple $CC = \langle S_{TM}, S_{seq}\rangle$, where $S_{seq}$ is a sequence of \textit{elements} that exist in $TM$ with intersection in $EM$, and $S_{TM}$ is a subset of trajectory indices that contains the sequence $S_{seq}$. \end{definition} SS-OCoClus evaluates each candidate co-cluster to keep only the most relevant ones as the semantic trajectory co-clusters. The candidates' relevance can be defined in terms of the number of trajectories, the cost value, or both as the reference. The measured relevance is compared with a given statistical metric, which can be the average or the $z$-score. More formally, the semantic co-cluster is defined as follows: \begin{definition}{\textbf{Semantic Co-cluster} SC:}\label{def:semantic_co-cluster} An order-aware semantic trajectory co-cluster SC is a candidate co-cluster CC with relevance not smaller than the used statistical metric. Thus, the semantic co-cluster SC is a subset of trajectories (objects) and \textit{elements} (attributes), where the \textit{elements} represent a frequent contiguous (sub) sequence in the dataset. \end{definition} Since there may be several candidate co-clusters in a dataset, it is not a trivial task to manually define thresholds, such as the number of rows or columns, to generate candidate co-clusters. Besides, the optimality of a candidate can be expressed as an objective function $\mathcal{F}(I,J)$ that can rank it. Common choices of such functions in co-clustering follow the logic of the perimeter, $\mathcal{F}(I,J) = |I|+|J|$, or the area, $\mathcal{F}(I,J) = |I|\times|J|$~\cite{Avraham2004CostFunc}. Similarly, Lucchese et al.\cite{Lucchese2013unifying} proposed a cost function that combines the perimeter and area into one general function to mine frequent patterns in binary datasets.
Inspired by the work of Lucchese et al.\cite{Lucchese2013unifying}, we adapted their cost function for tackling order-aware co-clustering. We consider the number of overlapped \emph{elements} instead of counting the number of \textit{false positives} (matrix entries equal to 0 present in a pattern) and \textit{false negatives} (matrix entries equal to 1 not present in a pattern) as noise data. More formally, the number of overlapped \emph{elements} $Cov$ is defined as follows: \begin{definition}{\textbf{Overlapped Elements} $Cov$:}\label{def:over-elem-cov} Given the set of candidate co-clusters $\Phi$ and the candidate co-cluster CC, the number of overlapped elements $Cov$ is defined as the number of intersected elements between CC and $\Phi$. \end{definition} SS-OCoClus uses the cost function in order to evaluate the candidate co-cluster generated by the frequent sequence. Thus, we define the new cost function $\mathcal{F}$ as follows: \begin{definition}{\textbf{Cost Function} $\mathcal{F}$:}\label{def:cost_func_f} Let $\Phi$ be a set of co-clusters, $CC$ be a candidate co-cluster, $|CC_{I}|$ be the size of the subset of trajectories, $|CC_{J}|$ be the length of the co-cluster sequence, and $Cov$ be the number of overlapped elements between $CC$ and $\Phi$; the cost function $\mathcal{F}$ is defined as follows: \begin{equation}\label{eq:cost_func} \mathcal{F}(CC,\Phi) = (|CC_{I}|+|CC_{J}|) - (|CC_{I}|\times|CC_{J}|) + Cov \end{equation} \end{definition} \subsection{The proposed Method}\label{sec:met_description} \autoref{alg:tracoclus} shows the organization of our trajectory co-clustering method. It receives five inputs: the trajectory dataset $D$, the maximum number of candidate co-clusters $K$, an overlap threshold $\epsilon$ between co-clusters, the statistical metric $stat\_met$, and the relevance reference $rel\_ref$. As output, it returns a set of co-clusters $\Phi$. It starts by initializing the set of co-clusters $\Phi$ (line 1), then the \textit{elements} mapping ($EM$), trajectory mapping ($TM$) and \emph{elements} frequency ($Els\_freq$) data structures (line 2) by preprocessing $D$. Remember that $EM$ and $TM$ are inverted indexes, e.g., dictionaries for simplicity. The $EM$ structure stores, for each $element_{m}$, the IDs of the trajectories that contain $element_{m}$, while $TM$ stores the \emph{elements} order for each trajectory, and $Els\_freq$ stores the overall frequency of each $element_{m}$ in the dataset $D$. These data structures form the basis for our method to identify frequent contiguous sequences in $D$. The algorithm iterates at most $K$ times (line 3), as this parameter determines the maximum number of candidate co-clusters to be extracted. Rather than testing all the possible $2^{M}$ combinations of \textit{elements}, these are sorted by their frequency in descending order (from the most frequent to the least) to maximize the probability of identifying frequent sequences in the clustering process (line 4). So, we avoid testing all combinations by focusing on the high frequencies. Next, the subsets of trajectories and \emph{elements} are initialized as empty (line 5) to form the candidate co-clusters $CC$ and $CC^{*}$ (line 6). These candidates are used to store the identified frequent sequence pattern.
\begin{algorithm}[!ht] \DontPrintSemicolon \SetNoFillComment \footnotesize \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Trajectory dataset $D$, Max number of candidate co-clusters $K$, Overlap threshold $\epsilon$, Statistical metric $stat\_met$, and Relevance reference $rel\_ref$} \Output{Set of co-clusters $\Phi$} $\Phi \leftarrow \{\emptyset\}$\; $EM,TM,Els\_freq \leftarrow initializeData(D)$\; \For{candidate\_iter = 0 to K}{ $sort(Els\_freq,desc)$ \tcp*{descending order} $I,J \leftarrow [\emptyset]$ \tcp*{subset of trajectories and \textit{elements}} $CC^{*} \leftarrow CC,CC \leftarrow \langle I,J \rangle$ \tcp*{co-clusters} $sequence\_cc \leftarrow [\emptyset]$\tcp*{cluster sequence} $Els\_queue \leftarrow queue(Els\_freq)$\; \tcc{step to find a candidate} \For{i = 0 to Els\_queue.length}{ $el\_p \leftarrow Els\_queue.pop()$\; $sequence\_cc.append(el\_p)$\; $Els\_queue.append(el\_p)$\; $Els\_to\_test \leftarrow Els\_queue.length$\; \tcc{step to expand candidate sequence} \While{Els\_to\_test \textgreater 0}{ $el\_q \leftarrow Els\_queue.pop()$\; $Els\_queue.append(el\_q)$\; $CC^{*} \leftarrow candidateCC(sequence\_cc,el\_q,EM,TM)$\; \eIf{$\mathcal{F}$($CC^{*},\Phi$) $\leq$ $\mathcal{F}$($CC,\Phi$) $\And max(Oc(CC^{*},\Phi)) \leq \epsilon$}{ $CC \leftarrow CC^{*}$\; $Els\_to\_test \leftarrow Els\_queue.length$\; $sequence\_cc.append(el\_q)$\; }{ $Els\_to\_test$ -= 1\; } } \eIf{$CC == \emptyset$}{ $sequence\_cc.pop()$\tcp*{test the next el\_p} }{ \textbf{break}\tcp*{candidate identified} } } \eIf{$\mathcal{F}(CC,\Phi) \geq 0$}{ \tcc{keep the most relevant candidates} $prune(\Phi,stat\_met,rel\_ref)$\; \textbf{break}\tcp*{No relevant candidates anymore} }{ $\Phi.append(CC)$\tcp*{store the candidate} $Els\_freq.update(CC)$\tcp*{update frequencies} } } \KwRet{$\Phi$} \caption{order-aware co-clustering}\label{alg:tracoclus} \end{algorithm} SS-OCoClus uses the \textit{element} frequency to identify frequent sequences, assuming that for a sequence to be frequent, the \emph{elements} that belong to the sequence must also be frequent, as in the \textit{a priori} principle~\cite{Bastide2000mining}. The method creates a queue $Els\_queue$ using the \textit{elements} frequency (line 8), aiming to expand the candidate co-cluster sequence. SS-OCoClus starts by adding the \emph{element} $el\_p$ to the co-cluster sequence (line 10) to test it with the next \emph{element} in the queue ($el\_q$). The method tries to expand the frequent sequence until the \emph{elements} queue reaches its end, i.e., until there is no \emph{element} left to test (line 14). Then, the element $el\_q$ is dequeued (line 15) and added to the end of the queue for further analysis (line 16). SS-OCoClus uses the $candidateCC$ function to identify frequent sequences by expanding the candidate co-cluster sequence $sequence\_cc$ with the element $el\_q$, regarding the intersection of each \emph{element} in $EM$ and the element order of each trajectory in $TM$ (line 17). If the frequent sequence exists, its pattern is used as a candidate co-cluster. The $candidateCC$ function tries to form two frequent sequences: (i) one sequence in the form of $sequence\_cc \rightarrow el\_q$, and (ii) another sequence in the form of $el\_q \rightarrow sequence\_cc$ (we use $\rightarrow$ to mean \textit{from \textbf{A} to \textbf{B}}). These sequences can represent a pattern formed by a subset of trajectories and \textit{elements} of $D$.
From that, given the $sequence\_cc$ and $el\_q$ that form the sequence of elements, the \emph{elements} map $EM$, and the trajectories map $TM$, the $candidateCC$ function returns the frequent sequence pattern with the smallest cost value, which represents the candidate co-cluster. Next, it checks if the candidate co-cluster $CC^{*}$ cost value is smaller than that of the candidate co-cluster $CC$ and if its maximum overlap coefficient $Oc$ does not exceed the overlap threshold $\epsilon$ (line 18). If this condition is satisfied, the method proceeds as follows: the candidate co-cluster is accepted, and SS-OCoClus updates $CC$ by replacing it with $CC^{*}$ (line 19), resets the counter to the queue length (line 20), and updates the candidate co-cluster sequence (line 21). Otherwise, it decrements the counter of \emph{elements} to test (line 23). After checking all \emph{elements} in the queue while trying to expand the candidate co-cluster sequence (line 14), if the candidate $CC$ is empty (line 24), the method removes the \emph{element} $el\_p$ (line 25) and repeats the loop (line 9) to test the next $el\_p$ \emph{element} in the sequence. Otherwise, it stops searching for new candidates (line 27). Next, the method checks whether the candidate $CC$ fails to reach the minimum cost value required to accept it as a candidate co-cluster (line 28); this is detected automatically thanks to the cost function $\mathcal{F}$ stated in \autoref{def:cost_func_f}. If the candidate $CC$ does not obtain a minimum cost value (line 28), SS-OCoClus \textit{prunes} the set of candidates to keep the most relevant ones as the semantic co-clusters $SC$ in $\Phi$ (line 29). For that, SS-OCoClus allows the user to specify the relevance reference $rel\_ref$, which can be the number of trajectories, the cost value, or both. Besides, the user can choose between two statistical metrics $stat\_met$ to prune the candidate result: first, in terms of the average; and second, in terms of the number of deviations ($z$-score). After this, the method returns the most relevant semantic co-clusters (line 34) regarding the relevance reference $rel\_ref$ and the statistical metric $stat\_met$. However, if the candidate $CC$ has a minimum cost value, it is appended to the set of candidate co-clusters $\Phi$ (line 32), and $Els\_freq$ is updated regarding $CC$ (line 33). SS-OCoClus updates $Els\_freq$ by decrementing each \emph{element} frequency by the number of times the \emph{element} appears in the co-cluster sequence, multiplied by the number of trajectories in the candidate. Thus, the process for finding candidate co-clusters is repeated at most $K$ times, but it also stops early when the last analysed candidate does not reduce the cost function.
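The pruning step (line 29) can be sketched as follows; the thresholding rules shown here, keeping the candidates whose relevance reaches the mean (or the mean plus a number of standard deviations), are our reading of the average and $z$-score options, not a verbatim excerpt of SS-OCoClus.
\begin{verbatim}
from statistics import mean, pstdev

def prune(phi, stat_met="avg", rel_ref="trajectories", z=1.0):
    """Keep the most relevant candidate co-clusters.
    phi: list of (trajectory_id_set, element_sequence, cost) triples;
    rel_ref selects the relevance score, stat_met the threshold rule."""
    def relevance(cc):
        traj, seq, cost = cc
        if rel_ref == "trajectories":
            return len(traj)
        if rel_ref == "cost":
            return -cost            # lower cost means more relevant
        return len(traj) - cost     # "both": combine the two references

    scores = [relevance(cc) for cc in phi]
    if stat_met == "avg":
        threshold = mean(scores)
    else:                           # "z-score": mean + z deviations
        threshold = mean(scores) + z * pstdev(scores)
    return [cc for cc, s in zip(phi, scores) if s >= threshold]
\end{verbatim}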
\section{Introduction} Understanding the statistical properties of a certain dynamical system is of fundamental importance in many problems coming from pure and applied mathematics, as well as in developing applications to other sciences. \medskip In this article, we will focus on the concept of \textit{statistical stability} of a dynamical system, \textit{i.e.}, how its statistical features change when the system is perturbed or modified. The interest in this question is clearly motivated by the need of controlling how much, and to what extent, approximations, external perturbations and uncertainties can affect the qualitative and quantitative analysis of its dynamics. \medskip Statistical properties of the long-term evolution of a system are reflected, for instance, by the properties of its invariant measures. When the system is perturbed, it is then useful to understand, and be able to predict, how the relevant\footnote{The concept of \textit{relevant} is strictly related to the analysis that is carried out. Hereafter, we will be interested in so-called \emph{physical measures} (see footnote \ref{notap} or \cite{Y}). In other contexts, other kinds of measures might be considered, for example, the so-called measures of maximal entropy.} invariant measures change by the effect of the perturbation, \textit{i.e.}, what is called the \textit{response} of the system to the perturbation. In particular, it becomes important to get quantitative estimates on their change under the perturbation, as well as understanding the \textit{regularity} of their behavior, for instance differentiability, Lipschitz or H\"{o}lder dependence, etc. \medskip These ideas can be applied to many kinds of systems and these concepts can be studied in many different ways. In this paper we will consider \emph{discrete deterministic dynamical systems} and \emph{deterministic perturbations}. \smallskip More specifically, we will consider systems of the kind $(X,T_{0})$, where $X$ is a compact metric space and $T_{0}:X\rightarrow X$ a map, whose iterations determine the dynamics; we investigate perturbed systems $\{(X,T_{\delta })\}_{\delta \in \lbrack 0,\overline{\delta })}$, where $T_{\delta }:X\rightarrow X$ are such that $T_{\delta }\rightarrow T_{0}$, as $\delta \rightarrow 0$, in some suitable topology. \smallskip For each $\delta \in \lbrack 0,\overline{\delta })$ let $\mu _{\delta }$ be an invariant Borel probability measure for the system $(X,T_{\delta })$; we aim to get information on the regularity of this family of measures, by investigating the regularity of the map $\delta \longmapsto \mu _{\delta }$. This notion of regularity might depend on the topology with which the space of measures is equipped. In this paper we will be interested in absolutely continuous measures with the $L^{1}$ norm, as well as in the whole space of Borel probability measures ${\mathcal{P}}(X)$, endowed with a suitable weak norm; see subsection \ref{sec1.1} for more details. \medskip We say that $(X,T_{0},\mu _{0})$ is \emph{statistically stable} (with respect to the considered class of perturbations) if this map is continuous at $\delta =0$ (with respect to the chosen topology on the space of measures in which $\mu_0$ is perturbed). \emph{Quantitative statistical stability} is provided by quantitative estimates on its modulus of continuity. \smallskip Differentiability of this map at $\delta =0$ is referred to by saying that the system has \emph{linear response} to a certain class of perturbations.
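For instance, in the quantitative results proved below for circle rotations (see Theorem \ref{stst2}), statistical stability takes a H\"{o}lder form: there are constants $C>0$ and $\ell \in (0,1]$, depending on the system and on the chosen norm, such that
\begin{equation*}
\|\mu _{\delta }-\mu _{0}\|\leq C\,\delta ^{\ell }\qquad \text{for every }\delta \in \lbrack 0,\overline{\delta }),
\end{equation*}
while linear response corresponds to the differentiable case, in which this estimate can be upgraded to a first-order expansion of $\mu _{\delta }$ in $\delta $.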
Similarly, higher derivatives and higher degrees of smoothness can be considered. \medskip These questions are by now well understood in the case of uniformly hyperbolic systems, where Lipschitz and, in some cases, differentiable dependence of the relevant (physical) invariant measures on the considered perturbation has been established (see, for example, \cite{BB} for a recent survey on linear response under deterministic perturbations, or the introduction in \cite{GS} for a survey focused on higher-order terms in the response and for results in the stochastic setting). \smallskip For systems that do not have a uniformly hyperbolic behavior, in the presence of discontinuities, or for more complicated perturbations, much less is known and results are limited to particular classes of systems; see, for instance, \cite{ASsu} for a general survey and \cite{A}, \cite{AV}, \cite{BV}, \cite{BT}, \cite{BBS}, \cite{BKL}, \cite{BK2}, \cite{BS}, \cite{BS2}, \cite{Dol}, \cite{D2}, \cite{D3}, \cite{GL}, \cite{Gmann}, \cite{Gpre}, \cite{Ko}, \cite{KL}, \cite{met}, \cite{Lin}, \cite{LS}, \cite{SV}, \cite{zz} for other results about statistical stability for different classes of systems. We point out a particular kind of deterministic perturbation which will be considered in this paper: the spatial discretization. In this perturbation, one considers a discrete set in the phase space and replaces the map $T$ with its composition with a projection onto this discrete set. This is what happens, for example, when we simulate the behavior of a system by iterating a map on our computer, which has a finite resolution, so that each iterate is subjected to numerical truncation. This perturbation changes the system into a periodic one, destroying many features of the original dynamics; yet this kind of simulation is quite reliable in many cases when the resolution is high enough, and it is widely used in the applied sciences. Why and under which assumptions these simulations are reliable or not is an important mathematical problem, which is still largely unsolved. Few rigorous results have been found so far about the stability under spatial discretization (see e.g. \cite{Bo}, \cite{GB}, \cite{Gu}, \cite{Gu2}, \cite{mier}). We refer to Section \ref{sectrunc} for a more detailed discussion on the subject. \smallskip The majority of results on statistical stability are established for systems that are, in some sense, \textit{chaotic}. There is indeed a general relation between the speed of convergence to the equilibrium of a system (which reflects the speed of \textit{mixing}) and the quantitative aspects of its statistical stability (see \cite{Gpre}, Theorem 5). \medskip In this paper we consider a class of systems that are not chaotic at all, namely the \emph{diffeomorphisms of the circle}. We believe that they provide a good model to start pushing forward this analysis. In particular, we will start our discussion by investigating the case of {\it rotations of the circle}, and then explain how to generalize the results to the case of circle diffeomorphisms (see Section \ref{sec:stabdiff}). \medskip We prove the following results. \begin{enumerate} \item The statistical stability of irrational rotations under perturbations that are small in the uniform convergence topology. Here stability is proved with respect to a weak norm on the space ${\mathcal{P}}(X)$, related to the so-called Wasserstein distance; see Theorem \ref{statstab}.
\item H\"{o}lder statistical stability for Diophantine rotations under the same kind of perturbations, where the H\"{o}lder exponent depends on the Diophantine type of the rotation number. See Theorem \ref{stst2} for the general upper bounds and Proposition \ref{berlusconi} for examples showing that these bounds are in some sense sharp. \item Differentiable behavior and linear response for Diophantine rotations, under smooth perturbations that preserve the rotation number; for general smooth perturbations the result still holds, but only on a Cantor set of parameters (differentiability in the sense of Whitney); see Theorem \ref{KAMandResp} and Corollary \ref{corKAM}. \item We extend these qualitative and quantitative stability results to diffeomorphisms of the circle satisfying suitable assumptions; see Theorems \ref{stadiff} and \ref{quantdiff}. \item We prove the statistical stability of diffeomorphisms of the circle under spatial discretizations and numerical truncations, also providing quantitative estimates on the ``error'' introduced by the discretization. \end{enumerate} We believe that the general statistical stability picture described here for rotations is analogous to the one found, in different settings, for example in \cite{BBS, BS2, BS1, LS} (see also \cite[Section 4]{BB}), where one has a smooth behavior for the response of the statistical properties of the system to perturbations that do not change the topological class of the system ({\it i.e.}, that change the system into a topologically conjugated one), while one has less regularity, and in particular H\"{o}lder behavior, if the perturbation is allowed to change it. In our case, the rotation number plays the role of determining the topological class of the system. Let us add some comments on the methodology used to establish these results. As far as items 1 and 2 are concerned, we remark that, since rotations are not mixing, the general relation between the speed of convergence to the equilibrium and the statistical stability, which we have recalled above, cannot be applied. However, we can perform an analogous construction considering the speed of convergence to the equilibrium of the Ces\`{a}ro averages of the iterates of a given measure, which leads to a measure of the speed of convergence of the system to its ergodic behavior (see Lemma \ref{stablemma}). Quantitative estimates of this speed of convergence -- and hence our quantitative stability statement, Theorem \ref{stst2} -- are obtained by means of the so-called Denjoy-Koksma inequality (see Theorem \ref{DK}). \smallskip On the other hand, the results in item 3 are obtained as an application of KAM theory for circle maps (see Theorem \ref{KAMVano}), with a particular focus on the dependence of the KAM construction on the perturbative parameter. In Section \ref{KAMsection} we provide a brief introduction to this subject.\newline The extension of the statistical stability results established for rotations to circle diffeomorphisms (item 4) is done again by combining our results for irrational rotations with the general theory of linearization of circle diffeomorphisms, including Denjoy's theorem, KAM theory and the Herman-Yoccoz theory (see Section \ref{secconj}).
The final application to spatial discretizations is obtained as a corollary of these statements, which -- thanks to the rather weak assumptions on the perturbations -- are suitable to deal with this particularly difficult kind of setting.\\ As a final remark, although we have decided to present our results in the framework of circle diffeomorphisms and rotations of the circle, we believe that the main ideas present in our constructions can be naturally applied to extend these results to rotations on higher dimensional tori.\\ \noindent \textbf{Organization of the article.} In Section \ref{sec1}, after introducing some tools from number theory and geometric measure theory, we prove qualitative and quantitative statistical stability of irrational rotations. The quantitative stability results are proved first by establishing general H\"older upper bounds in Subsection \ref{ub}, and then by exhibiting particular small perturbations for which we actually have H\"older behavior, hence establishing lower bounds in Subsection \ref{lob}. In Section \ref{KAMsection}, after a brief introduction to KAM theory and to the problem of smooth linearization of circle diffeomorphisms, we prove linear response results for suitable deterministic perturbations of Diophantine rotations. In Section \ref{sec:stabdiff} we show how to extend the results of Section \ref{sec1} to sufficiently smooth circle diffeomorphisms. Finally, in Section \ref{sectrunc} we introduce a class of perturbations coming from spatial discretization and apply our previous results to this kind of perturbations, obtaining some qualitative and quantitative results. \newline \noindent \textbf{Acknowledgments.} The authors are grateful to A. Celletti, R. de la Llave, P-A Guiheneuf, C. Liverani, M. Sevryuk for their helpful suggestions. The authors also thank R. Calleja, A. Alessandra and R. de la Llave for sharing with them their results in \cite{CCdL}.\\ S.G. and A.S. have been partially supported by the research project PRIN Project 2017S35EHN ``{\it Regular and stochastic behavior in dynamical systems}'' of the Italian Ministry of Education and Research (MIUR). AS also acknowledges the support of the MIUR Department of Excellence grant CUP E83C18000100006. \newline \bigskip \section{Statistical stability of irrational rotations} \label{sec1} Irrational rotations on the circle preserve the Lebesgue measure $m$ on the circle ${\mathbb{S}}^1:= {\mathbb{R}}/{\mathbb{Z}}$ and are well known for being uniquely ergodic. It is easy to see that small perturbations of such rotations may have singular invariant measures (\textit{i.e.}, not absolutely continuous with respect to $m$), even supported on a discrete set (see the examples in Subsection \ref{lob}). However, we will show that these measures must be close, in some suitable sense, to $m$. \subsection{Weak statistical stability of irrational rotations} \label{sec1.1} In this section, we aim to prove a statistical stability result for irrational rotations in a weak sense; more specifically, we show that, under the effect of small natural perturbations, their invariant measures vary continuously with respect to the so-called Wasserstein distance. This qualitative result might not be surprising for experts; however, the construction that we apply also leads to quantitative estimates on the statistical stability, which will be presented in the next subsections. \medskip Let us first recall some useful notions that we are going to use in the following.
Let $(X,d)$ be a compact metric space and let ${\mathcal{M}}(X)$ denote the set of signed finite Borel measures on $X$. If $g:X\longrightarrow \mathbb{R}$ is a Lipschitz function, we denote its (best) Lipschitz constant by $\mathrm{Lip}(g)$, \textit{i.e.} \begin{equation*} \displaystyle{\mathrm{Lip}(g):=\sup_{x,y\in X,x\neq y}\left\{ \dfrac{|g(x)-g(y)|}{d(x,y)}\right\} }. \end{equation*} \smallskip \begin{definition} \label{w} Given $\mu, \nu \in {\mathcal{M}}(X) $ we define the \textbf{Wasserstein-Monge-Kantorovich} distance between $\mu $ and $\nu $ by \begin{equation} W(\mu ,\nu ):=\sup_{\mathrm{Lip}(g)\leq 1,\ \Vert g\Vert _{\infty }\leq 1}\left\vert \int_{X} g\,d\mu -\int_{X} g\,d\nu \right\vert . \end{equation} We denote \begin{equation*} \|\mu \|_{W}:=W(0,\mu ), \end{equation*} where $0$ denotes the trivial measure identically equal to zero. $\|\cdot \|_{W}$ defines a norm on the vector space of signed measures defined on a compact metric space. \end{definition} We refer the reader, for example, to \cite{AGS} for a more systematic and detailed description of these topics.\newline \medskip Let $T:X\rightarrow X$ be a Borel measurable map. Define the linear functional \begin{equation*} L_{T}:{\mathcal{M}}(X)\rightarrow {\mathcal{M}}(X) \end{equation*} that to a measure $\mu \in {\mathcal{M}}(X)$ associates the new measure $L_{T}\mu$, satisfying $L_{T}\mu (A):=\mu (T^{-1}(A)) $ for every Borel set $A\subset X$; $L_{T}$ will be called the \textit{transfer operator} (observe that $L_{T}\mu$ is also called the push-forward of $\mu$ by $T$ and denoted by $T_*\mu$). It follows easily from the definition that invariant measures correspond to fixed points of $L_{T}$, \textit{i.e.}, $L_{T}\mu =\mu $. \medskip We are now ready to state our first statistical stability result for irrational rotations. \begin{theorem}[Weak statistical stability of irrational rotations.] \label{statstab} Let $R_{\alpha }:{{\mathbb{S}}^{1}\rightarrow {\mathbb{S}}^{1}}$ be an irrational rotation. Let $\{T_{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel measurable maps of ${\mathbb{S}}^1$ to itself such that \begin{equation*} \sup_{x\in {\mathbb{S}}^{1}}|R_{\alpha }(x)-T_{\delta }(x)|\leq \delta . \end{equation*} Suppose $\mu _{\delta }$ is an invariant measure\footnote{In the case when $T_{\delta }$ is continuous such measures must exist by the Krylov-Bogoliubov theorem \cite{KB}. In other cases such measures can be absent; in that case our statement is empty.} of $T_{\delta } $. Then \begin{equation*} \lim_{\delta \rightarrow 0}\| m-\mu _{\delta }\|_{W}=0. \end{equation*} \end{theorem} \bigskip Let us start with the following preliminary computation. \begin{lemma} \label{stablemma} Let $L$ be the transfer operator associated to an isometry of ${\mathbb{S}}^{1}$ and let $L_{\delta }$ be the transfer operator associated to a measurable map $T_{\delta }$. Suppose that $\mu _{\delta }=L_{\delta }\mu _{\delta }.$ Then, for each $n\geq 1$ \begin{equation} \|\mu _{\delta }-m\|_{W} \;\leq\; \big\| m-\frac{1}{n}\sum_{{1\leq }i\leq n}L^{i}\mu _{\delta } \big\|_{W} \;+\; \frac{(n-1)}{2} \; \big\|(L-L_{\delta })\mu _{\delta } \big\|_{W} \end{equation} where $L^i := L \circ \ldots \circ L$ ($i$ times). \end{lemma} \medskip \begin{proof} The proof is a direct computation.
Since $\mu _{\delta }=L_{\delta }\mu _{\delta }$ and $m$ is invariant for $L$, then \begin{eqnarray} \label{prodi} \|\mu _{\delta }-m\|_{W} &\leq & \big \| \frac{1}{n}\sum_{1\leq i\leq n}L_{\delta }^{i}\mu _{\delta }-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}m \big\|_{W} \notag \\ &\leq & \big\|\frac{1}{n}\sum_{1\leq i\leq n}L^{i}(m-\mu _{\delta }) \big\|_{W}+\big\|\frac{1}{n}\sum_{1\leq i\leq n}(L^{i}-L_{\delta }^{i})\mu _{\delta }\big\|_{W}. \end{eqnarray} Since \begin{equation*} L^{i}-L_{\delta }^{i}=\sum_{k=1}^{i}L^{i-k}(L-L_{\delta })L_{\delta }^{k-1} \end{equation*} then \begin{eqnarray*} (L^{i}-L_{\delta }^{i})\mu _{\delta } &=&\sum_{k=1}^{i}L^{i-k}(L-L_{\delta })L_{\delta }^{k-1}\mu _{\delta } \\ &=&\sum_{k=1}^{i}L^{i-k}(L-L_{\delta })\mu _{\delta }. \end{eqnarray*} Since $L$ is the transfer operator associated to an isometry, then \begin{equation}\label{mis} \|L^{i-k}(L-L_{\delta })\mu _{\delta }\|_{W}\leq \|(L-L_{\delta })\mu _{\delta }\|_{W} \end{equation} and consequently \begin{equation*} {\Vert }(L^{i}-L_{\delta }^{i})\mu _{\delta }{\Vert _{W}}\leq (i-1)\|(L-L_{\delta })\mu _{\delta }\|_{W}. \end{equation*} Substituting in \eqref{prodi}, we conclude \begin{equation*} \|\mu _{\delta }-m\|_{W}\leq \big\|\frac{1}{n}\sum_{1\leq i\leq n}L^{i}(m-\mu _{\delta })\big\|_{W}+\frac{(n-1)}{2}\|(L-L_{\delta })\mu _{\delta }\|_{W}. \end{equation*} \end{proof} \bigskip \begin{lemma} \label{prv1} Under the assumptions of Theorem \ref{statstab}, let $\{\mu _{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel probability measures on $\mathbb{S}^{1}$; then \begin{equation*} \lim_{n\rightarrow \infty } \big \|m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu _{\delta } \big\|_{W}=0 \end{equation*} uniformly in $\delta$; namely, for every $\varepsilon>0$ there exists $\overline{n} = \overline{n}(\varepsilon)$ such that if $n\geq \overline{n}$ then \begin{equation*} \sup_{0 \leq \delta \leq \overline{\delta}} \big\| m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu _{\delta } \big\|_{W} \leq \varepsilon. \end{equation*} \end{lemma} \medskip \begin{proof} Let $\delta _{x_{0}}$ be the delta-measure concentrated at a point $x_{0}\in \mathbb{S}^{1}$. By unique ergodicity of the system, we get $\lim_{n\rightarrow \infty }\|m-\frac{1}{n}\sum_{1\leq i \leq n}L^{i}\delta _{x_{0}}\|_{W}=0.$ This is uniform in $x_{0}$; in fact, changing $x_{0}$ is equivalent to composing with a further rotation, which is an isometry and hence does not change the $\|\cdot\|_{W}$ norm. Any measure $\mu _{\delta }$ can be approximated in the $\|\cdot\|_{W}$ norm, with arbitrary precision, by a convex combination of delta-measures, \textit{i.e.}, for each $\varepsilon >0 $ there are $x_{1},...,x_{k}\in {\mathbb{S}}^1$ and $\lambda _{1},...,\lambda _{k}\geq 0$, with $\sum_{i\leq k}\lambda _{i}=1$, such that \begin{equation*} \big \|\mu _{\delta }-\sum_{1\leq i\leq k}\lambda _{i}\delta_{x_{i}} \big\|_{W}\leq \varepsilon .
\end{equation*} Since $R_{\alpha }$ is an isometry, the $\|\cdot\|_{W}$ norm is preserved by the iterates of $L$. Hence for each $n\geq 0$ we also have \begin{equation*} \big \|L^{n}\mu _{\delta }-L^{n}\big(\sum_{1\leq i\leq k}\lambda _{i}\delta _{x_{i}}\big) \big\|_{W}\leq \varepsilon, \end{equation*} which implies \begin{equation*} \big \| m- L^{n}\mu _{\delta }\big\|_{W} \leq \varepsilon+ \big\| m -L^{n}\big(\sum_{1\leq i\leq k}\lambda _{i}\delta _{x_{i}}\big) \big\|_{W} \end{equation*} and \begin{equation*} \big \| m- \frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu _{\delta } \big\|_{W} \leq \varepsilon+ \big\| m -\frac{1}{n}\sum_{1\leq j\leq n}L^{j}\big(\sum_{i\leq k}\lambda _{i}\delta _{x_{i}}\big)\big\|_{W}. \end{equation*} We now estimate the behavior of the right hand side of the last inequality as $n\to \infty$. For any $n$ we have \begin{equation*} \big \|m-\frac{1}{n}\sum_{1\leq j\leq n}L^{j}\big(\sum_{i\leq k}\lambda _{i}\delta _{x_{i}}\big) \big\|_{W}= \big \|\sum_{1\leq i\leq k}\lambda _{i}m-\sum_{1\leq i\leq k} \frac{\lambda_i}{n} \big( \sum_{1\leq j\leq n}L^{j}\delta _{x_{i}} \big) \big\|_{W} \end{equation*} and therefore $\lim_{n\rightarrow \infty }\|\sum_{i\leq k}\lambda _{i}\big(m-\frac{1}{n}\sum_{j\leq n}L^{j}\delta _{x_{i}}\big)\|_{W}=0 $. From this, the claim of the lemma easily follows. \end{proof} \bigskip We can now prove Theorem \ref{statstab}.\newline \begin{proof}[Proof of Theorem \protect\ref{statstab}] Let $L_{\delta }$ be the transfer operator associated to $T_{\delta }.$ By Lemma \ref{prv1}, $\lim_{n\rightarrow \infty }\|m-\frac{1}{n}\sum_{1\leq i \leq n}L^{i}\mu _{\delta }\|_{W}=0$ uniformly in $\delta$. Since \begin{equation*} \sup_{x\in {\mathbb{S}}^{1}}|R_{\alpha }(x)-T_{\delta }(x)|\leq \delta, \end{equation*} then $\|(L-L_{\delta })\mu _{\delta }\|_{W}\leq \delta $ and \begin{equation} \label{covid} \lim_{\delta \rightarrow 0}\|(L-L_{\delta })\mu _{\delta }\|_{W}=0. \end{equation} By Lemma \ref{stablemma} we get that for each $n$ \begin{equation} \big \|\mu _{\delta }-m\|_{W}\leq \|m-\frac{1}{n}\sum_{1\leq i \leq n}L^{i}\mu _{\delta } \big \|_{W}+\frac{(n-1)}{2} \big \|(L-L_{\delta })\mu _{\delta }\big\|_{W}. \end{equation} It follows from Lemma \ref{prv1} that we can choose $n$ such that $\|m-\frac{1}{n}\sum_{1\leq i \leq n}L^{i}\mu _{\delta }\|_{W}$ is as small as wanted. Then, using \eqref{covid}, we can choose $\delta $ sufficiently small so as to make $\frac{(n-1)}{2}\|(L-L_{\delta })\mu _{\delta }\|_{W}$ as small as needed, hence proving the statement. \end{proof} \begin{remark} The qualitative stability statements with respect to the Wasserstein distance proved in this section for circle rotations extend directly to many other systems, for example to uniquely ergodic rotations on the multidimensional torus. In fact, in the proof, aside from the general properties of the Wasserstein distance and of push-forward maps, we only use that the system is uniquely ergodic and that the map is an isometry. This property could also be relaxed to a non-expansiveness assumption, ensuring that \eqref{mis} is satisfied. \end{remark} \subsection{Quantitative statistical stability of Diophantine rotations, upper bounds\label{ub}} We now consider irrational rotations, for rotation numbers that are ``badly'' approximable by rationals: the so-called \textit{Diophantine numbers}.
In this case, we can provide a quantitative estimate for the statistical stability of the system by showing that the modulus of continuity of the function $\delta \longmapsto \mu_\delta$ is H\"olderian, and that its exponent depends on the Diophantine type of the rotation number. Let us start by recalling the definition of \textit{Diophantine type} for a real number (see \cite{KN}): this concept expresses quantitatively the rate of approximability of an irrational number by sequences of rationals. \newline In what follows, we will also use $\| \cdot \|_{\mathbb{Z}}$ to denote the distance from a real number to the nearest integer. \begin{definition} \label{linapp} If $\alpha $ is irrational, the Diophantine type of $\alpha $ is defined by \begin{equation*} \gamma (\alpha ):=\sup \{\gamma\geq 0: \underset{k\rightarrow \infty }{\lim \inf }~\,k^{\gamma }\Vert k\alpha \Vert_{\mathbb{Z}} =0\}. \end{equation*} \end{definition} We remark that in some cases $\gamma (\alpha )=+\infty$. When $\gamma (\alpha )<+\infty$ we say $\alpha $ is of \textit{finite Diophantine type}.\newline \begin{remark} The Diophantine type of $\alpha $ can also be defined by \begin{eqnarray*} \gamma (\alpha )&:=&\inf \left\{ \gamma\geq 0: \,\exists c>0 \; \mbox{s.t.} \;\Vert k\alpha \Vert_{\mathbb{Z}} \geq c|k|^{-\gamma } \; \forall \, k\in \mathbb{Z}\setminus\{0\} \right\} \\ &=& \inf \left\{ \gamma\geq 0: \,\exists c>0 \; \mbox{s.t.} \; \big|\alpha -\frac{p}{q}\big|\geq \frac{c}{|q|^{\gamma +1}} \quad \forall \; \frac{p}{q}\in \mathbb{Q}\setminus\{0\} \right\}. \end{eqnarray*} \end{remark} \medskip In the light of this last remark on the Diophantine type of a number, we recall the definition of \textit{Diophantine number} as it is very commonly stated in the literature.\newline \begin{definition} \label{DDD} Given $c >0$ and $\tau \geq 0$, we say that a number $\alpha \in (0,1)$ is $(c ,\tau )$-\textit{Diophantine} if \begin{equation} \left\vert \alpha -\frac{p}{q}\right\vert >\frac{c }{|q|^{1+\tau }}\qquad \forall \quad \frac{p}{q}\in \mathbb{Q}\setminus \{0\}. \label{diophantine} \end{equation} We denote by $\mathcal{D}(c, \tau)$ the set of $(c,\tau)$-\textit{Diophantine} numbers and by $\mathcal{D}(\tau) := \cup_{c>0} \mathcal{D}(c, \tau).$ \end{definition} \medskip \begin{remark} Comparing with Definition \ref{linapp}, it follows that every $\alpha \in \mathcal{D}(\tau)$ has finite Diophantine type $\gamma(\alpha)\leq \tau$. On the other hand, if $\alpha$ has finite Diophantine type, then $\alpha \in \mathcal{D}(\tau )$ for every $\tau >\gamma (\alpha )$. \end{remark} \begin{remark} Let us point out the following properties (see \cite[p. 601]{Russmann} for their proofs): \begin{itemize} \item \textrm{if $\tau<1$, the set $\mathcal{D}(\tau)$ is empty; } \item \textrm{if $\tau>1$ the set $\mathcal{D}(\tau)$ has full Lebesgue measure; } \item \textrm{if $\tau=1$, then $\mathcal{D}(\tau)$ has Lebesgue measure equal to zero, but it has Hausdorff dimension equal to $1$ (hence, it has the cardinality of the continuum). } \end{itemize} \textrm{See also \cite[Section V.6]{Herman} for more properties.\newline } \end{remark}
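To make the notion of Diophantine type more tangible, the following short Python computation (an illustration of ours, not used in the proofs) probes the quantity $q\,\Vert q\alpha \Vert _{\mathbb{Z}}$ along the continued fraction convergents of the golden mean $\alpha =(\sqrt{5}-1)/2$, for which $\gamma (\alpha )=1$; the products stay bounded away from zero (they approach $1/\sqrt{5}\approx 0.447$), reflecting the fact that this number is badly approximable by rationals.
\begin{verbatim}
from math import sqrt

def dist_to_Z(x):
    """|| x ||_Z : distance from x to the nearest integer."""
    return abs(x - round(x))

def convergent_denominators(alpha, n):
    """Denominators q_k of the continued fraction convergents of
    alpha in (0,1), via the recurrence q_k = a_k q_{k-1} + q_{k-2}."""
    x, q_prev, q = alpha, 0, 1
    dens = []
    for _ in range(n):
        a = int(1.0 / x)            # next partial quotient
        q_prev, q = q, a * q + q_prev
        dens.append(q)
        x = 1.0 / x - a             # Gauss map
    return dens

alpha = (sqrt(5) - 1) / 2           # golden mean: q_k are Fibonacci numbers
for q in convergent_denominators(alpha, 12):
    print(q, q * dist_to_Z(q * alpha))   # products tend to 1/sqrt(5)
\end{verbatim}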
Now we introduce the notion of discrepancy of a sequence $x_{1},...,x_{N}\in \lbrack 0,1]$. This is a measure of the equidistribution of the points $x_{1},...,x_{N}$. Given $x_{1},...,x_{N}\in \lbrack 0,1]$ we define the discrepancy of the sequence by \begin{equation*} D_{N}(x_{1},...,x_{N}):=\sup_{a \leq b ,~a ,b \in \lbrack 0,1]}\big |\frac{1}{{N}}\sum_{1\leq i\leq N}1_{[a ,b ]}(x_{i})-(b -a )\big|. \end{equation*} It can be proved (see \cite[Theorem 3.2, page 123]{KN}) that the discrepancy of sequences obtained from orbits of an irrational rotation is related to the Diophantine type of the rotation number. \begin{theorem} \label{11} Let $\alpha $ be an irrational number of finite Diophantine type. Let us denote by $D_{N,\alpha }(0)$ the discrepancy of the sequence $\{x_{i}\}_{0\leq i\leq N}=\{\alpha i-\left\lfloor \alpha i\right\rfloor \}_{0\leq i\leq N}$ (where $\left\lfloor {\cdot}\right\rfloor $ stands for the integer part). Then: \begin{equation*} D_{N,\alpha }(0)=O(N^{-\frac{1}{\gamma (\alpha )}+\varepsilon }) \qquad \forall\; \varepsilon>0. \end{equation*} \end{theorem} \bigskip From the definition of discrepancy, Theorem \ref{11}, and the fact that the translation is an isometry, we can deduce the following corollary.\newline \begin{corollary} \label{preDK} Let $x_{0}\in [0,1]$ and let us denote by $D_{N,\alpha }(x_{0})$ the discrepancy of the sequence $\{x_{i}\}_{0\leq i\leq N}=\{x_{0}+\alpha i-\left\lfloor x_{0}+\alpha i\right\rfloor \}_{0\leq i\leq N}$. Then Theorem \ref{11} holds uniformly in $x_{0}$; namely, for every $\varepsilon >0$ there exists $C=C(\varepsilon )\geq 0$ such that for each $x_{0}$ and $N\geq 1$ \begin{equation*} D_{N,\alpha }(x_{0})\leq CN^{-\frac{1}{\gamma (\alpha )}+\varepsilon }. \end{equation*} \end{corollary} \begin{proof} It is sufficient to prove that for each $x_0$ it holds that $D_{N,\alpha}(x_{0})\leq 2D_{N,\alpha}({0})$. Indeed, consider $\varepsilon>0 $ and an interval $I=[a,b]$ such that \begin{equation*} D_{N}(x_{1},...,x_{N})-\varepsilon\leq \left| \frac{1}{{N}}\sum_{1\leq i\leq N}1_{I}(x_{i})-(b -a )\right|. \end{equation*} Now consider the translation of $I$ by $-x_0$ (mod. $1$): $$S=\{x\in[0,1] \ | \ x+x_0-\lfloor x+x_0 \rfloor\in I\}$$ and the translation of the sequence $x_i$, which is the sequence $y_i=\alpha i-\left\lfloor \alpha i\right\rfloor $. We have that $S $ is composed of at most two intervals $S=I_1\cup I_2$ with lengths $m(I_1)$ and $m(I_2)$; moreover \begin{equation*} \left |\frac{1}{{N}}\sum_{1\leq i\leq N}1_{I}(x_{i})-(b -a )\right|= \left |\frac{1}{{N}}\sum_{1\leq i\leq N}1_{I_1}(y_{i})- m(I_1)+ \frac{1}{{N}}\sum_{1\leq i\leq N}1_{I_2}(y_{i})- m(I_2) \right|. \end{equation*} Then \begin{equation*} D_{N}(x_{1},...,x_{N})-\varepsilon\leq 2 D_{N}(y_{1},...,y_{N}). \end{equation*} Since $\varepsilon$ is arbitrary, we conclude that $D_{N,\alpha}(x_{0})\leq 2D_{N,\alpha}({0})$. \end{proof} \medskip The discrepancy is also related to the speed of convergence of Birkhoff sums of irrational rotations. The following is known as the Denjoy-Koksma inequality (see \cite[Theorem 5.1, page 143 and Theorem 1.3, page 91]{KN}).\newline \begin{theorem} \label{DK} Let $f$ be a function of bounded variation, whose total variation we denote by $V(f)$. Let $x_{1},...,x_{N}\in \lbrack 0,1]$ be a sequence with discrepancy $D_{N}(x_{1},...,x_{N})$. Then \begin{equation*} \left|\frac{1}{N}\sum_{1\leq i\leq N}f(x_{i})-\int_{[0,1]}f~dx \right|\leq V(f)\,D_{N}(x_{1},...,x_{N}).
\end{equation*} \end{theorem} \medskip We can now prove a quantitative version of our stability result.\newline \begin{theorem}[Quantitative statistical stability of Diophantine rotations] \label{stst2} Let $R_{\alpha }:{{\mathbb{S}}^{1}\rightarrow {\mathbb{S}}^{1}} $ be an irrational rotation. Suppose $\alpha $ has finite Diophantine type $\gamma (\alpha ).$ Let $\{T_{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel measurable maps of the circle such that \begin{equation*} \sup_{x\in {\mathbb{S}}^{1}}|R_{\alpha }(x)-T_{\delta }(x)|\leq \delta . \end{equation*} Suppose $\mu _{\delta }$ is an invariant measure of $T_{\delta }$. Then, for each $\ell <{\frac{1}{\gamma (\alpha )+1}}$ we have: \begin{equation*} \|m-\mu _{\delta }\|_{W}=O(\delta ^{\ell }). \end{equation*} \end{theorem} \bigskip Let us first prove a preliminary result.\newline \begin{lemma} \label{conv2} Under the assumptions of Theorem \ref{stst2}, let $\{\mu _{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel probability measures on $\mathbb{S}^{1}$. Then, for every $\varepsilon >0$ \begin{equation} \|m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu _{\delta }\|_{W}=O(n^{-\frac{1}{\gamma (\alpha )}+\varepsilon }) \label{ww} \end{equation} uniformly in $\delta$; namely, for every $\varepsilon >0$, there exists $C=C(\varepsilon )\geq 0$ such that for each $\delta $ and $n\geq 1$ \begin{equation*} \|m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\mu _{\delta }\|_{W}\leq Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon }. \end{equation*} \end{lemma} \bigskip \begin{proof} Let us fix $\varepsilon >0.$ By Theorem \ref{DK} and Corollary \ref{preDK} we have that there is $C\geq 0$ such that for each Lipschitz function $f$ with Lipschitz constant $1$, and for each $x_{0}\in {\mathbb{S}}^{1}$, we have \begin{equation*} \left|\frac{1}{n}\sum_{1\leq i\leq n}f(R_{\alpha }^{i}(x_{0}))-\int_{[0,1]}f~dx\right |\leq C\, n^{-\frac{1}{\gamma (\alpha )}+\varepsilon } \qquad \forall\; n\geq 1. \end{equation*} Let $\delta _{x_{0}}$ be the delta-measure concentrated at a point $x_{0}\in \mathbb{S}^{1}$. By definition of $\|\cdot\|_{W}$, we conclude that \begin{equation} \|m-\frac{1}{n}\sum_{1\leq i\leq n}L^{i}\delta _{x_{0}}\|_{W}\leq Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon }. \label{www} \end{equation}\newline Now, as in the proof of Lemma \ref{prv1}, any measure $\mu _{\delta }$ can be approximated, arbitrarily well, in the $\|\cdot\|_{W}$ norm by a convex combination of delta-measures, and we obtain \eqref{ww} from \eqref{www} exactly in the same way as done there. \end{proof} \bigskip \begin{proof}[Proof of Theorem \protect\ref{stst2}] Let $L_{\delta }$ be the transfer operator of $T_{\delta }.$ Let us fix $\varepsilon >0$; without loss of generality we can suppose $\varepsilon <\frac{1}{\gamma (\alpha )}.$ By Lemma \ref{conv2} we have that \begin{equation*} \|m-\frac{1}{n}\sum_{1\leq i \leq n}L^{i}\mu _{\delta }\|_{W}\leq Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon }. \end{equation*} By Lemma \ref{stablemma} we get that for each $n\geq 1$ \begin{equation} \|\mu _{\delta }-m\|_{W}\leq \big \|m-\frac{1}{n}\sum_{1\leq i \leq n}L^{i}\mu _{\delta }\big\|_{W}+\frac{(n-1)}{2}\big \|(L-L_{\delta })\mu _{\delta }\big\|_{W}.
\end{equation} Hence \begin{eqnarray} \|\mu _{\delta }-m\|_{W} &\leq &Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon }+\frac{(n-1)}{2}\|(L-L_{\delta })\mu _{\delta }\|_{W} \label{stimaboh} \\ &\leq & Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon }+\frac{(n-1)}{2}\delta , \notag \end{eqnarray} where we have used that, since $\sup_{x\in {\mathbb{S}}^{1}}|R_{\alpha }(x)-T_{\delta }(x)|\leq \delta $, then \begin{equation*} \|(L-L_{\delta })\mu _{\delta }\|_{W}\leq \delta. \end{equation*} Since the inequality is true for each $n\geq1$, we can now consider $n$ minimizing \begin{equation*} F(n):=Cn^{-\frac{1}{\gamma (\alpha )}+\varepsilon }+\frac{n-1}{2}\delta . \end{equation*} The extension of the function $F$ to the positive reals is convex and it goes to $+\infty $ both as $x\rightarrow 0^{+}$ and as $x\rightarrow +\infty $. Let us denote $a:=\frac{1}{\gamma (\alpha )}-\varepsilon >0$; then $F(x)=Cx^{-a}+\frac{x-1}{2}\delta .$ This is minimized at \begin{equation*} x_{\ast }:=(2aC)^{\frac{1}{a+1}}\delta ^{-\frac{1}{a+1}}=:\tilde{c}\;\delta ^{-\frac{1}{a+1}}. \end{equation*} Consider $n_{\ast }=\left\lfloor x_{\ast }\right\rfloor $ and observe that \begin{eqnarray*} F(n_{\ast }) &=&\frac{C}{n_{\ast }^{a}}+\frac{n_{\ast }-1}{2}\delta \leq \frac{C}{n_{\ast }^{a}}+\frac{n_{\ast }}{2}\delta =O(\delta ^{\frac{a}{a+1}}) \\ F(n_{\ast }+1) &=&\frac{C}{(n_{\ast }+1)^{a}}+\frac{n_{\ast }}{2}\delta \leq \frac{C}{n_{\ast }^{a}}+\frac{n_{\ast }}{2}\delta =O(\delta ^{\frac{a}{a+1}}). \end{eqnarray*} Substituting in \eqref{stimaboh} we conclude: \begin{eqnarray*} \|\mu _{\delta }-m\|_{W} &\leq &\min \{F(n_{\ast }),F(n_{\ast }+1)\}=O(\delta ^{\frac{a}{a+1}}) \\ &=&O\big(\delta ^{\frac{1-\varepsilon \gamma (\alpha )}{1+(1-\varepsilon )\gamma (\alpha )}}\big), \end{eqnarray*} proving the statement. \end{proof} \begin{remark} We remark that, as it follows from the above proof, the constants involved in the $O(\delta ^{\ell })$ in the statement of Theorem \ref{stst2} only depend on $\alpha $ and $\ell$. \end{remark} \subsection{Quantitative statistical stability of Diophantine rotations, lower bounds\label{lob}} In this subsection we show that the upper bound on the statistical stability obtained in Theorem \ref{stst2} is essentially optimal. We show that for a rotation $R_{\alpha }$ with rotation number $\alpha$ of Diophantine type $1< \gamma(\alpha) \leq +\infty$, there exist perturbations of ``size $\delta$'' for which the unique physical invariant measure varies in a H\"{o}lder way.\newline More specifically, for any $r\geq 0$ we will construct a sequence $\delta _{n}\rightarrow 0$ and $C^\infty$-maps $T_{n}$ such that: $\|R_{\alpha }-T_{n}\|_{C^{r}}\leq \delta _{n}$, $T_{n}$ has a unique physical invariant probability measure $\mu _{n}$ and $\|\mu _{n}-m \|_{W}\geq C\delta_n ^{\frac{1}{p}}$ for some $C> 0$ and $p>1$.\newline \begin{proposition} \label{berlusconi} Let us consider the rotation $R_{\alpha }:\mathbb{S}^{1}\rightarrow \mathbb{S}^{1}$, where $\alpha$ is an irrational number with $1< \gamma(\alpha) \leq +\infty$.
For each $r\geq 0$ and $\gamma ^{\prime }<\gamma (\alpha )$ there exist a sequence of numbers $\delta _{j}> 0 $ and $C^\infty$ diffeomorphisms $T_{j}:\mathbb{S}^{1}\rightarrow \mathbb{S}^{1}$ such that $\|T_{j}-R_{\alpha }\|_{C^{r}}\leq 2\delta _{j}$ and \begin{equation*} \|m-\mu _{j}\|_{W}\geq \frac{1}{2}{\delta _{j}^{\frac{1}{\gamma ^{\prime }+1}}} \end{equation*} for every $j\in \mathbb{N}$ and for every invariant measure $\mu _{j}$ of $T_{j}$. \end{proposition} \begin{proof} We remark that the unique invariant measure for $R_{\alpha }$ is the Lebesgue measure $m.$ Let us choose $\gamma ^{\prime }<\gamma (\alpha )$; it follows from the definition of $\gamma (\alpha )$ that there are infinitely many integers ${k_{j}\in \mathbb{N}}$ and ${p_{j}\in \mathbb{Z}}$ such that \begin{equation*} |k_{j}\alpha -p_{j}|\leq \frac{1}{k_{j}^{\gamma ^{\prime }}}\qquad \Longleftrightarrow \qquad \big|\alpha -\frac{p_{j}}{k_{j}}\big|\leq \frac{1}{k_{j}^{\gamma ^{\prime }+1}}. \end{equation*} Let us set $\delta _{j}:=-\alpha +\frac{p_{j}}{k_{j}}$. Clearly, $|\delta _{j}|\leq \frac{1}{k_{j}^{\gamma ^{\prime }+1}}\longrightarrow 0$ as $j\rightarrow \infty $. Consider $\hat{T}_{j}$ defined as $\hat{T}_{j}(x)=R_{\alpha +\delta _{j}}(x)$; for each $r\geq 0$ we have that $\|\hat{T}_{j}-R_{\alpha }\|_{C^{r}}=|\delta _{j}|$. Since $(\delta _{j}+\alpha )=\frac{p_{j}}{k_{j}} $ is rational, every orbit is $k_{j}$-periodic. Let us consider the orbit starting at $0$; as a set it coincides with $\{0,\frac{1}{k_{j}},\ldots ,\frac{k_{j}-1}{k_{j}}\}$, and we denote its points, in increasing spatial order, by \begin{equation*} y_{0}:=0,\;y_{1}:=\tfrac{1}{k_{j}},\;\ldots ,\;y_{k_{j}-1}:=\tfrac{k_{j}-1}{k_{j}},\;y_{k_{j}}:=0\;(\mathrm{mod.} \,{\mathbb{Z}}). \end{equation*} Consider the measures \begin{equation*} \mu _{j}=\frac{1}{k_{j}}\sum_{0\leq i<k_{j}}\delta _{y_{i}}, \end{equation*} where $\delta _{y_{i}}$ is the delta-measure concentrated at $y_{i}$. The measure $\mu _{j}$ is clearly invariant for the map $\hat{T}_{j}$ and it can be directly computed that \begin{equation*} \|m-\mu _{j}\|_{W}\geq \frac{1}{2k_{j}}. \end{equation*} Observe that $|\delta _{j}|\leq \frac{1}{k_{j}^{\gamma ^{\prime }+1}}$, hence we get $|\delta _{j}|^{\frac{1}{\gamma ^{\prime }+1}}\leq \frac{1}{k_{j}}$; then \begin{equation*} \|m-\mu _{j}\|_{W}\geq \frac{1}{2}{|\delta _{j}|^{\frac{1}{\gamma ^{\prime }+1}}}. \end{equation*} This example can be further improved by perturbing the map $\hat{T}_{j}=R_{\alpha +\delta _{j}}$ to a new map $T_{j}$ in such a way that the measure $\mu _{j}$ (supported on the attractor of $T_{j}$) and the measure\footnote{The \textit{translated measure} is defined as follows: $[\mu _{j}+\frac{1}{2k_{j}}](A):=\mu _{j}(A-\frac{1}{2k_{j}})$ for each measurable set $A$ in $\mathbb{S}^{1}$, where $A-\frac{1}{2k_{j}}$ is the translation of the set $A$ by $-\frac{1}{2k_{j}}$.} $\mu _{j}+\frac{1}{2k_{j}}$ (supported on the repeller of $T_{j}$) are the only invariant measures of $T_{j}$, and $\mu _{j}$ is the unique physical measure for the system. This can be done by making a $C^{\infty }$ perturbation of $\hat{T}_{j}=R_{\alpha +\delta _{j}}$, as small as wanted in the $C^{r}$-norm. In fact, let us denote, as before, by $\{y_{k}\}_{k}$ the periodic orbit of $0$ for $R_{\alpha +\delta _{j}}$.
Let us consider a $C^{\infty }$ function $g:[0,1]\rightarrow \lbrack 0,1]$ such that: \begin{itemize} \item $g$ is negative on each interval $[y_{i},y_{i}+\frac{1}{2k_{j}}]$ and positive on each interval $[y_{i}+\frac{1}{2k_{j}},y_{i+1}]$ (so that $g(y_{i}+\frac{1}{2k_{j}})=0$); \item $g^{\prime }$ is positive in each interval $[y_{i}+\frac{1}{3k_{j}},y_{i+1}-\frac{1}{3k_{j}}]$ and negative in $[y_{i},y_{i+1}]\setminus \lbrack y_{i}+\frac{1}{3k_{j}},y_{i+1}-\frac{1}{3k_{j}}]$. \end{itemize} Considering $D_{\delta }:{\mathbb{S}}^{1}\rightarrow {\mathbb{S}}^{1}$, defined by $D_{\delta }(x):=x+\delta g(x)$ $(\mathrm{mod.}\;{\mathbb{Z}})$, it holds that the iterates of this map send the whole space, with the exception of the set $\Gamma_{\mathrm{rep}}:=\{y_{i}+\frac{1}{2k_{j}}: \;0 \leq i< k_{j}\}$ (which is a repeller), to the set $\Gamma_{\mathrm{att}}:=\{y_{i}: \; 0\leq i< k_{j}\}$ (the attractor). Then, define $T_{j}$ by composing $R_{\alpha +\delta _{j}}$ and $D_{\delta }$, namely \begin{equation*} T_{j}(x):=D_{\delta _{j}}(x+(\delta _{j}+\alpha )). \end{equation*} The claim follows by observing that for the map $T_{j}(x)$, both sets $\Gamma_{\mathrm{att}}$ and $\Gamma_{\mathrm{rep}}$ are invariant and, in particular, the whole space ${\mathbb{S}}^{1}\setminus \Gamma_{\mathrm{rep}}$ is attracted by $\Gamma_{\mathrm{att}}$. \end{proof} \bigskip The construction done in the previous proof can be extended to show H\"{o}lder behavior for the average of a given \emph{fixed} regular observable. We show an explicit example of such an observable, with a particular choice of the rotation number $\alpha$. \begin{proposition} \label{30} Consider a rotation $R_{\alpha }$ with rotation angle $\alpha :=\sum_{i=1}^{\infty }2^{-2^{2i}}$. Let $T_{j}$ be its perturbations as constructed in Proposition \ref{berlusconi} and let $\mu _{j}$ denote their invariant measures; recall that $\|T_{j}-R_{\alpha }\|_{C^{r}}\leq 2|\delta _{j}|=2\sum_{i=j+1}^{\infty }2^{-2^{2i}}$.\newline Then, there is an observable $\psi :{\mathbb{S}}^{1}\rightarrow \mathbb{R}$, with derivative in $L^{2}({\mathbb{S}^1})$, and $C> 0$ such that \begin{equation*} \left |\int_{\mathbb{S}^1} \psi \,d{m}-\int_{\mathbb{S}^1} \psi \,d\mu _{j} \right|\geq C\sqrt{\delta _{j}}. \end{equation*} \end{proposition} \bigskip \begin{proof} Comparing the series with a geometric one, we get that \begin{equation*} \sum_{i=n+1}^{\infty }2^{-2^{2i}}\leq 2^{-2^{2(n+1)}+1}. \end{equation*} By this, it follows \begin{equation*} \|2^{2^{2n}}\alpha \|_{\mathbb{Z}}\leq 2^{-2^{2(n+1)}+1}=\frac{2}{2^{2^{2n+2}}}=\frac{2}{(2^{2^{2n}})^{4}}. \end{equation*} Since it also holds that $\|2^{2^{2n}}\alpha \|_{\mathbb{Z}}\geq 2^{-2^{2(n+1)}}$, we conclude that $\gamma (\alpha )=4$. Following the construction in the proof of Proposition \ref{berlusconi}, we have that with a perturbation of size less than $2^{-2^{2(n+1)}+1}$ the angles $\alpha _{j}:=\alpha -\delta _{j}=\sum_{i=1}^{j}2^{-2^{2i}}$ generate orbits of period $2^{2^{2j}}$. Now let us construct a suitable observable which can ``see'' the change of the invariant measure under this perturbation. Let us consider \begin{equation} \psi (x):=\sum_{i=1}^{\infty }\frac{1}{(2^{2^{2i}})^{2}}\cos (2^{2^{2i}}2\pi x) \label{obss} \end{equation} and denote by $\psi _{k}(x):=\sum_{i=1}^{k}\frac{1}{(2^{2^{2i}})^{2}}\cos (2^{2^{2i}}2\pi x)$ its truncations. Since the Fourier coefficient of $\psi $ at frequency $n$ is bounded by $n^{-2}$, $\psi $ has derivative in $L^{2}({\mathbb{S}^1})$.
Let $\{x_{i}\}_{i}$ be the periodic orbit of $0$ for the map $R_{\alpha _{j}}$ and let $\mu _{j}:=\frac{1}{2^{2^{2j}}}\sum_{i=0}^{2^{2^{2j}}-1}\delta _{x_{i}}$ be the physical measure supported on it. Since $2^{2^{2j}}$ divides $2^{2^{2(j+1)}}$, then $\sum_{i=1}^{2^{2^{2j}}}\psi _{k}(x_{i})=0$ for every $k<j$; thus $\int_{\mathbb{S}^1} \psi _{j-1}~d\mu _{j}=0.$ Then \begin{eqnarray*} v_{j}:= &&\int_{\mathbb{S}^1} \psi ~d\mu _{j}\geq \frac{1}{(2^{2^{2j}})^{2}}-\sum_{i=j+1}^{\infty }\frac{1}{(2^{2^{2i}})^{2}} \\ &\geq &2^{-2^{2j+1}}-2^{-2^{2(j+1)}+1}. \end{eqnarray*} For $j$ big enough \begin{equation*} 2^{-2^{2j+1}}-2^{-2^{2(j+1)}+1}\geq \frac{1}{2}(2^{-2^{2j}})^{2}. \end{equation*} Summarizing, with a perturbation of size \begin{equation*} \delta _{j}=\sum_{i=j+1}^{\infty }2^{-2^{2i}}\leq 2\cdot 2^{-2^{2(j+1)}}=2\,(2^{-2^{2j}})^{4} \end{equation*} we get a change of average for the observable $\psi $ from $\int_{\mathbb{S}^1} \psi \,dm=0$ to $v_{j}\geq \frac{1}{2}(2^{-2^{2j}})^{2}$. Therefore, there is $C> 0$ such that with a perturbation of size $\delta _{j}$ we get a change of average for the observable $\psi $ of size bigger than $C\sqrt{\delta _{j}}.$ \end{proof} \bigskip \begin{remark} Using in (\ref{obss}) $\frac{1}{(2^{2^{2i}})^{\sigma}}$, for some $\sigma>2$, instead of $\frac{1}{(2^{2^{2i}})^{2}}$, we can obtain a smoother observable. Using rotation angles with bigger and bigger Diophantine type, it is possible to obtain a dependence of the physical measure on the perturbation with a worse and worse H\"{o}lder exponent. Using angles with infinite Diophantine type it is possible to have a behavior whose modulus of continuity is worse than H\"{o}lder. \end{remark} \bigskip \section{Linear response and KAM theory} \label{KAMsection} In this section, we discuss differentiable behavior and linear response for Diophantine rotations, under suitable smooth perturbations. In particular, we will obtain our results by means of the so-called KAM theory. Let us first explain more precisely what linear response means.\newline Let $(T_{\delta })_{\delta \geq 0}$ be a one parameter family of maps obtained by perturbing an initial map $T_{0}$. We will be interested in how the perturbation made on $T_{0}$ affects some invariant measure of $T_{0}$ of particular interest, for example its physical measure. Suppose then that $T_{0}$ has a physical measure $\mu _{0}$ and let $\mu _{\delta }$ be a physical measure of $T_{\delta }$.\footnote{\label{notap} An invariant measure $\mu $ is said to be \emph{physical} if there is a positive Lebesgue measure set $B$ such that for each continuous observable $f $ \begin{equation*} \int_{\mathbb{S}^1} f~d\mu =\underset{n\rightarrow \infty }{\lim }\frac{f(x)+f(T(x))+...+f(T^{n}(x))}{n+1} \end{equation*} for each $x\in B$ (see \cite{Y}).} The linear response of the invariant measure of $T_{0}$ under a given perturbation is defined, if it exists, by the limit \begin{equation} \dot{\mu}:=\lim_{\delta \rightarrow 0}\frac{\mu _{\delta }-\mu _{0}}{\delta } \label{LRidea} \end{equation} where the meaning of this convergence can vary from system to system. In some systems and for a given perturbation, one may get $L^{1}$-convergence for this limit; in other systems or for other perturbations one may get convergence in weaker or stronger topologies.
The linear response to the perturbation hence represents the first-order term of the response of a system to a perturbation; when it holds, a linear response formula can be written as \begin{equation} \mu _{\delta }=\mu _{0}+\dot{\mu}\delta +o(\delta ) \label{lin} \end{equation} which holds in some weaker or stronger sense. We remark that given an observable function $c:X\rightarrow \mathbb{R}$, if the convergence in \eqref{LRidea} is strong enough with respect to the regularity\footnote{For example, $L^{1}$ convergence in (\ref{LRidea}) allows one to control the behavior of $L^{\infty }$ observables in (\ref{LRidea2}), while a weaker convergence in (\ref{LRidea}), for example in the Wasserstein norm (see Definition \ref{w}), allows one to get information on the behavior of Lipschitz observables.} of $c$, we get \begin{equation} \lim_{t\rightarrow 0}\frac{\int_{\mathbb{S}^1} \ c\ d\mu _{t}-\int_{\mathbb{S}^1} \ c\ d\mu _{0}}{t}=\int_{\mathbb{S}^1} \ c\ d\dot{\mu} \label{LRidea2} \end{equation} showing how the linear response of the invariant measure controls the behavior of observable averages.\newline \subsection{Conjugacy theory for circle maps} \label{secconj} Let us recall some classical results on the smooth linearization of circle diffeomorphisms and introduce KAM theory. Let $\mathrm{Diff}_+^r({{\mathbb{S}}^1})$ denote the set of orientation-preserving diffeomorphisms of the circle of class $C^r$, with $r\in \mathbb{N}\cup \{+\infty, \omega \}$. Let $\mathrm{rot}(f) \in {{\mathbb{S}}^1}$ denote the rotation number of $f$ (see, for example, \cite[Section II.2]{Herman} for more properties of the rotation number).\newline A natural question is to understand when a circle diffeomorphism is conjugated to a rotation with the same rotation number, namely whether there exists a homeomorphism $h: {\mathbb{S}^1 }\longrightarrow {{\mathbb{S}}^1}$ such that the following diagram commutes: \begin{equation*} \begin{array}{ccc} {{\mathbb{S}}^1} & \overset{f}{\longrightarrow } & {{\mathbb{S}}^1} \\ \uparrow {\small h} & & \uparrow {\small h} \\ {{\mathbb{S}}^1} & \overset{R_{\mathrm{rot}(f)}}{\longrightarrow } & {{\mathbb{S}}^1}\end{array}\end{equation*} \textit{i.e.}, $h^{-1} \circ f \circ h = R_{\mathrm{rot}(f)}$. Moreover, whenever this conjugacy exists, one would like to understand what is the best regularity that one could expect. \begin{remark} \textrm{Observe that if $h$ exists, then it is essentially unique, in the sense that if $h_i: \mathbb{S}^1\longrightarrow {{\mathbb{S}}^1}$, $i=1,2$, are homeomorphisms conjugating $f$ to $R_{\mathrm{rot}(f)}$, then $h_1 \circ h_2^{-1}$ must be a rotation itself: $h_1\circ h_2^{-1} = R_\beta$ for some $\beta\in {\mathbb{S}}^1 $ (see \cite[Ch. II, Proposition 3.3.2]{Herman}).} \end{remark} This question has attracted a lot of attention, dating back, at least, to Henri Poincar\'e. Let us start by recalling the following result due to Denjoy \cite{Den}, which shows that diffeomorphisms with irrational rotation number satisfying some extra mild regularity assumption (for example, $C^2$ diffeomorphisms do satisfy it) are conjugated to irrational rotations by a homeomorphism. \begin{theorem}[Denjoy] \label{Denteo} Let $T$ be an orientation-preserving diffeomorphism of the circle with an irrational rotation number $\alpha $ and such that $\log (T^{\prime })$ has bounded variation.
Then there exists a homeomorphism $h:\mathbb{S}^{1}\rightarrow \mathbb{S}^{1}$ such that \begin{equation*} T \circ h= h \circ R_{\alpha }. \end{equation*} \end{theorem} \begin{remark} Denjoy also constructed diffeomorphisms $T$, only of class $C^1$, that are not conjugated to rotations ({\it i.e.}, such that the support of their invariant measure $\mu$ is not the whole ${\mathbb S^1}$). These are usually called in the literature {\it Denjoy-type} diffeomorphisms. \end{remark} Some of the first contributions about smooth linearization ({\it i.e.}, obtaining a conjugacy of higher regularity) were due to V.I. Arnol'd \cite{Arnold} and J. Moser \cite{Moser}. These results are in the perturbative setting and are generally referred to as \textit{KAM theory}. Namely, they consider perturbations of \textit{Diophantine} rotations \begin{equation} \label{deffeps} f_\varepsilon(x) = R_\alpha(x) + \varepsilon u(x,\varepsilon) \end{equation} and prove that, under suitable regularity assumptions on $u$, there exist $\varepsilon_0>0$ (depending on the properties of $\alpha$ and $u$) and a Cantor set ${\mathcal C} \subset (-\varepsilon_0, \varepsilon_0)$ such that $f_\varepsilon$ is conjugated to $R_{\mathrm{rot}(f_\varepsilon)}$ for every $\varepsilon \in {\mathcal C}$. Observe that the conjugacy does not exist in general for an interval of $\varepsilon$, but only for those values of $\varepsilon$ for which the rotation number of $f_\varepsilon$ satisfies suitable arithmetic properties ({\it e.g.}, it is Diophantine). See below for a more precise statement. \begin{remark} Observe that $f_\varepsilon$ does not necessarily have rotation number $\alpha$, even if one asks that $u(\cdot, \varepsilon)$ has zero average. \end{remark} \begin{remark} In the analytic setting, the KAM theorem for circle diffeomorphisms was first proved by Arnol'd (see \cite[Corollary to Theorem 3, p. 173]{Arnold}), showing that the conjugation is analytic. In the smooth case, it was proved by Moser \cite{Moser} under the assumption that $u$ is sufficiently smooth (the minimal regularity needed was later improved by R\"ussmann \cite{Russmann2}). The literature on KAM theory and its recent developments is huge and we do not aim to provide an accurate account here; for the reader's sake, we limit ourselves to mentioning some recent articles and surveys, like \cite{BroerSevryuk, DL, Dumas, Massetti, MatherForni, Wayne} and references therein. \end{remark} \smallskip Later, Herman \cite{Herman} and Yoccoz \cite{Yoccoz,Yoccoz2} provided a thorough analysis of the situation in the general (non-perturbative) context. Let us briefly summarize their results (see also \cite{EliassonFayadKrikorian} for a more complete account). \newline \begin{theorem}[Herman \protect\cite{Herman}, Yoccoz \protect\cite{Yoccoz, Yoccoz2}] \label{thmhermanyoccoz} \begin{itemize} \item Let $f \in \mathrm{Diff}_+^r({{\mathbb{S}}^1})$ and $\mathrm{rot}(f) \in \mathcal{D}(\tau)$. If $r>\max\{3, 2\tau-1\}$, then there exists $h\in \mathrm{Diff}_+^{r-\tau-\varepsilon}({{\mathbb{S}}^1})$, for every $\varepsilon>0 $, conjugating $f$ to $R_{\mathrm{rot}(f)}$. \item Let $f \in \mathrm{Diff}_+^\infty({{\mathbb{S}}^1})$ and $\mathrm{rot}(f) \in \mathcal{D}(\tau)$. Then, there exists $h\in \mathrm{Diff}_+^{\infty}({{\mathbb{S}}^1})$ conjugating $f$ to $R_{\mathrm{rot}(f)}$. \item Let $f \in \mathrm{Diff}_+^\omega({{\mathbb{S}}^1})$ and $\mathrm{rot}(f) \in \mathcal{D}(\tau)$.
Then, there exists $h\in \mathrm{Diff}_+^{\omega}({{\mathbb{S}}^1})$ conjugating $f$ to $R_{\mathrm{rot}(f)}$.\newline \end{itemize} \end{theorem} \begin{remark} \textrm{The above results can be generalized to larger classes of rotation numbers, satisfying a weaker condition than being Diophantine. Optimal conditions were studied by Yoccoz and identified in \textit{Brjuno numbers} for the smooth case and in those satisfying the so-called ${\mathcal{H}}$-condition (named in honour of Herman); we refer to \cite{Yoccoz, Yoccoz2} for more details on these classes of numbers.\newline } \end{remark} \bigskip \subsection{Linear response for Diophantine circle rotations} In this subsection we describe how, as a corollary of KAM theory, one can prove the existence of linear response for Diophantine rotations.\newline Let us state the following version of the KAM theorem, whose proof can be found in \cite[Theorem 9.0.4]{Vano} (cf. also \cite[Theorem 2]{BroerSevryuk} and \cite{CCdL}). \medskip \begin{theorem}[KAM Theorem for circle diffeomorphisms] \label{KAMVano} Let $\alpha \in \mathcal{D}({\tau})$, with $\tau>1$, and let us consider a smooth family of circle diffeomorphisms \begin{equation*} f_\varepsilon(x) = R_\alpha(x) + \varepsilon u(x,\varepsilon) \qquad |\varepsilon|< 1 \end{equation*} with \begin{itemize} \item[\textrm{(i)}] $u(x,\varepsilon) \in C^{\infty}({\mathbb{S}}^1)$ for every $|\varepsilon|<1$; \item[\textrm{(ii)}] the map $\varepsilon \longmapsto u(\cdot, \varepsilon)$ is $C^{\infty}$; \item[\textrm{(iii)}] $\int_{{\mathbb{S}}^1} u(x,\varepsilon) dx = A\varepsilon^m + o(\varepsilon^m)$, where $A\neq 0$ and $m\geq 0$. \end{itemize} Then, there exists a Cantor set ${\mathcal{C}}\subset (-1,1)$ containing $0$, such that for every $\varepsilon \in {\mathcal{C}}$ the map $f_{\varepsilon }$ is smoothly conjugated to a rotation $R_{\alpha _{\varepsilon }}$, with $\alpha _{\varepsilon }\in \mathcal{D}(\tau )$. More specifically, there exists \begin{equation*} h_{\varepsilon }(x)=x+\varepsilon v(x,\varepsilon )\in C^{\infty }({\mathbb{S}}^{1}) \end{equation*} such that \begin{equation} \begin{array}{ccc} {{\mathbb{S}}^{1}} & \overset{f_{\varepsilon }}{\longrightarrow } & {{\mathbb{S}}^{1}} \\ \uparrow {\small h_{\varepsilon }} & & \uparrow {\small h_{\varepsilon }} \\ {{\mathbb{S}}^{1}} & \overset{R_{\alpha _{\varepsilon }}}{\longrightarrow } & {{\mathbb{S}}^{1}}\end{array}\qquad \Longleftrightarrow \qquad f_{\varepsilon }\circ h_{\varepsilon }=h_{\varepsilon }\circ R_{\alpha _{\varepsilon }}. \label{conjugation} \end{equation} Moreover: \begin{itemize} \item the maps $\varepsilon \longmapsto h_{\varepsilon }$ and $\varepsilon \longmapsto \alpha _{\varepsilon }$ are $C^{\infty }$ on the Cantor set ${\mathcal{C}}$, in the sense of Whitney; \item $\alpha_{\varepsilon} = \alpha + A\varepsilon^{m+1} + o(\varepsilon^{m+1}). $\newline \end{itemize} \end{theorem} \medskip \begin{remark} \textrm{\label{rm8} Observe that $f_{\varepsilon}$ does not necessarily have rotation number $\alpha$. In particular, the map $\mathrm{rot}:\mathrm{Diff}_{+}^{0}(\mathbb{S}^{1})\longrightarrow \mathbb{S}^{1}$ is continuous with respect to the $C^{0}$-topology (see for example \cite[Ch.
II, Proposition 2.7]{Herman}).
\end{remark}
\begin{remark} \label{remarkteokam}
\begin{itemize}
\item[\textrm{(i)}] Theorem \ref{KAMVano} is proved in \cite{Vano} in a more general form, considering also the cases of $u(x,\varepsilon)$ being analytic or just finitely differentiable (in this case, there is a lower bound on the needed differentiability, cf. Theorem \ref{thmhermanyoccoz}). In particular, the proof of the asymptotic expansion of $\alpha_{\varepsilon}$ appears on \cite[p. 149]{Vano}.
\item[\textrm{(ii)}] One could provide an estimate of the size of this Cantor set: there exist $M>0$ and $r_0>0$ such that for all $0<r<r_0$ the set $(-r,r)\cap {\mathcal{C}}$ has Lebesgue measure $\geq M r^{\frac{1}{m+1}}$ (see \cite[formula (9.2)]{Vano}).
\item[\textrm{(iii)}] A version of this theorem in the analytic case can also be found in \cite[Theorem 2]{Arnold}; in particular, in \cite[Section 8]{Arnold} the monogenic dependence of the conjugacy and of the rotation number on the parameter is discussed. These results can be extended to arbitrary smooth circle diffeomorphisms with Diophantine rotation numbers and to higher dimensional tori (see \cite{Vano}).
\end{itemize}
\end{remark}
\medskip
Let us discuss how to deduce from this result the existence of linear response for the circle diffeomorphisms $f_{\varepsilon }$.
\begin{theorem} \label{KAMandResp}
Let $\alpha \in \mathcal{D}({\tau})$, with $\tau>1$, and let us consider a family of circle diffeomorphisms obtained by perturbing the rotation $R_\alpha$ in the following way:
\begin{equation*}
f_\varepsilon(x) = R_\alpha + \varepsilon u(x,\varepsilon) \qquad |\varepsilon|< 1,
\end{equation*}
where $u(x,\varepsilon) \in C^{\infty}({\mathbb{S}}^1)$, for every $|\varepsilon|<1$, and the map $\varepsilon \longmapsto u(\cdot, \varepsilon)$ is $C^{\infty}$.
Then, the circle rotation $R_\alpha$ admits linear response, in the limit as $\varepsilon$ goes to $0$, under the effect of this family of perturbations. More precisely, there exists a Cantor set $\mathcal{C}\subset (-1,1)$ such that
\begin{equation} \label{limitlinearresponse}
\lim_{\varepsilon \in \mathcal{C},\, \varepsilon \rightarrow 0} \frac{\mu_\varepsilon - m}{\varepsilon} = 2\pi i \sum_{n \in \mathbb{Z}\setminus \{0\}} \left(\frac{n\, \hat{u}(n)}{1- e^{2\pi i n \alpha}}\right) e^{2\pi i n x} \qquad \mbox{(in the $L^1$-sense)}
\end{equation}
where $\mu_\varepsilon$ denotes the unique invariant probability measure of $f_\varepsilon$, for $\varepsilon \in {\mathcal{C}}$, and $\{\hat{u}(n)\}_{n\in {\mathbb{Z}}}$ the Fourier coefficients of $u(x,0)$.
\end{theorem}
\medskip
\begin{remark} \label{remarkKAM}
In this article we focus on the circle; however, a similar result could be proved for rotations on higher dimensional tori, by using analogous KAM results in that setting (see for example \cite{Vano}).
\end{remark}
\medskip
As we have already observed in Remark \ref{rm8}, the rotation number of $f_{\varepsilon}$ varies continuously with respect to the perturbation; whence the need to take the limit in \eqref{limitlinearresponse} over a Cantor set of parameters (corresponding to certain Diophantine rotation numbers for which the KAM algorithm can be applied).
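\begin{remark}
For the reader who wishes to experiment, the response formula \eqref{limitlinearresponse} can be checked numerically. The following sketch (ours, purely illustrative; it assumes Python with NumPy, takes $\alpha$ equal to the inverse golden mean and $u(x,\varepsilon)=\sin(2\pi x)$, and ignores the restriction to the Cantor set, so it is only a heuristic check valid for parameters outside mode-locking windows) compares a finite-difference estimate of the response, computed from Birkhoff averages of the test observable $g(x)=\cos(2\pi x)$, with the prediction of the formula, which for this choice of $u$ and $g$ equals $\pi/2$.
\end{remark}
\begin{verbatim}
# Heuristic numerical check of the linear response formula for a
# perturbed Diophantine rotation (illustrative sketch, see caveats above).
import numpy as np

alpha = (np.sqrt(5.0) - 1.0) / 2.0      # inverse golden mean, Diophantine

def f_eps(x, eps):
    """Perturbed rotation f_eps(x) = x + alpha + eps*sin(2 pi x) (mod 1)."""
    return (x + alpha + eps * np.sin(2.0 * np.pi * x)) % 1.0

def birkhoff_average(g, eps, n_iter=500_000, x0=0.1):
    """Approximate int g d(mu_eps) by a Birkhoff average along one orbit
    (legitimate when f_eps is uniquely ergodic)."""
    x, total = x0, 0.0
    for _ in range(n_iter):
        total += g(x)
        x = f_eps(x, eps)
    return total / n_iter

g = lambda x: np.cos(2.0 * np.pi * x)   # test observable with int g dm = 0

# Finite-difference estimate of (d/d eps) int g d(mu_eps) at eps = 0.
eps = 1e-2
fd = (birkhoff_average(g, eps) - birkhoff_average(g, -eps)) / (2.0 * eps)

# Prediction of the response formula: for u = sin(2 pi x) only the modes
# n = +-1 survive, with u_hat(+1) = -i/2 and u_hat(-1) = +i/2, and the
# pairing with g(x) = cos(2 pi x) gives exactly pi/2 after summation.
print(f"finite difference : {fd:+.4f}")
print(f"response formula  : {np.pi / 2.0:+.4f}")
\end{verbatim}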
If the perturbation does not change the rotation number, and this number is Diophantine, then the KAM algorithm can be applied for all values of the parameter $\varepsilon$; hence $\mathcal{C}$ coincides with the whole set of parameters and the limit in \eqref{limitlinearresponse} can be taken in the classical sense.
\begin{corollary} \label{corKAM}
Under the same hypotheses and notation of Theorem \ref{KAMandResp}, if in addition we have that $\mathrm{rot}(f_\varepsilon) = \alpha$ for every $|\varepsilon|<1$, then linear response exists without any need of restricting to a Cantor set, and it is given by
\begin{equation}
\lim_{\varepsilon \rightarrow 0} \frac{\mu_\varepsilon - m}{\varepsilon} = 2\pi i \sum_{n \in \mathbb{Z}\setminus \{0\}} \left(\frac{n\, \hat{u}(n)}{1- e^{2\pi i n \alpha}}\right) e^{2\pi i n x} \qquad \mbox{(in the $L^1$-sense)}.
\end{equation}
\end{corollary}
\bigskip
\begin{proof} {\bf (Corollary \ref{corKAM}).}
As we have remarked above, this corollary easily follows from Theorem \ref{KAMandResp} by observing that $\mathrm{rot}(f_\varepsilon) = \alpha \in \mathcal{D}({\tau})$ for every $|\varepsilon|<1$, hence $\mathcal{C} \equiv (-1,1)$. In fact, this follows from \cite[Section 9.2, pp. 147-148]{Vano}: in their notation our parameter $\varepsilon$ corresponds to $\mu$ and their $a(\mu)$ corresponds to our $\mathrm{rot}(f_\varepsilon)$. In particular, they define the Cantor set as ${\mathcal C}_F = v^{-1}(D_\Upsilon)$ (see \cite[p. 148]{Vano}): in our notation this corresponds to the values of $\varepsilon \in (-1,1)$ for which $\mathrm{rot}(f_\varepsilon)$ belongs to a certain set of Diophantine numbers that includes $\alpha$. Since, by hypothesis, $\mathrm{rot}(f_\varepsilon)\equiv \alpha$, it follows that ${\mathcal C}\equiv (-1,1)$ and, in particular, the limit in \eqref{limitlinearresponse} is meant in the classical sense.
\end{proof}
\bigskip
Let us now prove Theorem \ref{KAMandResp}.
\begin{proof} {\bf (Theorem \ref{KAMandResp}).}
First of all, applying Theorem \ref{KAMVano}, it follows that for every $\varepsilon \in {\mathcal{C}}$, the map $f_{\varepsilon }:= R_\alpha + \varepsilon u(x,\varepsilon)$ possesses a unique invariant probability measure given by
\begin{equation*}
\mu _{\varepsilon }={h_{\varepsilon }}_{\ast }m
\end{equation*}
where $m$ denotes the Lebesgue measure on ${{\mathbb{S}}^{1}}$ and ${h_{\varepsilon }}_{\ast }$ denotes the push-forward by $h_{\varepsilon }$; in particular, $\mu _{0}=m$. This measure is absolutely continuous with respect to $m$ and its density is given by
\begin{equation}
\frac{d\mu _{\varepsilon }}{dx}(x)=\frac{1}{\partial _{x}h_{\varepsilon }(h_{\varepsilon }^{-1}(x))}. \label{density}
\end{equation}
In fact, if $A$ is a Borel set in ${{\mathbb{S}}^{1}}$, then
\begin{equation*}
\mu _{\varepsilon }(A)=\int_{A}\mu _{\varepsilon }(dy)=\int_{A}\partial _{x}(h_{\varepsilon }^{-1})(x)\,dx=\int_{A}\frac{dx}{\partial _{x}h_{\varepsilon }(h_{\varepsilon }^{-1}(x))}.
\end{equation*}
Hence, it follows from \eqref{density} that
\begin{eqnarray} \label{densitymueps}
\frac{d\mu _{\varepsilon }}{dx}(x) &=& \frac{1}{\partial_x h_{\varepsilon} (h_{\varepsilon}^{-1}(x))} = \frac{1}{1 + \varepsilon \partial_x v (h_{\varepsilon}^{-1}(x),0) + o(\varepsilon)} \notag \\
&=& \frac{1}{1 + \varepsilon \partial_x v(x,0) + o_{\mathcal{C}}(\varepsilon)} = 1-\varepsilon \partial_x v(x,0) + o_{\mathcal{C}}(\varepsilon),
\end{eqnarray}
where $o_{\mathcal{C}}(\varepsilon)$ denotes a term that goes to zero faster than $\varepsilon$, as $\varepsilon \rightarrow 0$ with $\varepsilon \in {\mathcal{C}}$, uniformly in $x$.
Then the linear response is given by
\begin{equation*}
\dot{\mu}=\lim_{\varepsilon \in {\mathcal{C}},\,\varepsilon \rightarrow 0}\frac{\mu _{\varepsilon }-\mu _{0}}{\varepsilon }=\lim_{\varepsilon \in {\mathcal{C}},\,\varepsilon \rightarrow 0}\frac{\mu _{\varepsilon }-m}{\varepsilon }
\end{equation*}
which, passing to densities and using \eqref{densitymueps}, corresponds to
\begin{equation*}
\lim_{\varepsilon \in {\mathcal{C}},\,\varepsilon \rightarrow 0}\frac{1}{\varepsilon }\left(1-\varepsilon \partial _{x}v(x,0)+o_{\mathcal{C}}(\varepsilon )-1\right)=-\partial _{x}v(x,0).
\end{equation*}
This gives a formula for the response:
\begin{equation} \label{linearresponse}
\frac{d\dot{\mu}}{dx}(x)= - \partial_x v (x,0).
\end{equation}
\medskip
Moreover, we can find a more explicit representation formula (the formula above, in fact, is somehow implicit, since $v$ depends on $h_\varepsilon$). Observe that it follows from \eqref{conjugation} that $f_\varepsilon \circ h_\varepsilon = h_\varepsilon \circ R_{\alpha_\varepsilon}$, namely:
\begin{equation} \label{boh}
x + \varepsilon v(x,\varepsilon) + \alpha + \varepsilon u(x + \varepsilon v(x,\varepsilon), \varepsilon) = x + \alpha_\varepsilon + \varepsilon v(x+\alpha_\varepsilon,\varepsilon).
\end{equation}
Recall from the statement of Theorem \ref{KAMVano} that
\begin{equation*}
\alpha_{\varepsilon} = \alpha + A\varepsilon^{m+1} + o(\varepsilon^{m+1}),
\end{equation*}
where $m$ and $A$ are defined by (see item (iii) in Theorem \ref{KAMVano})
\begin{equation*}
\langle u(\cdot,\varepsilon)\rangle :=\int_{{\mathbb{S}}^1} u(x,\varepsilon)\, dx = A\varepsilon^m + o(\varepsilon^m).
\end{equation*}
Hence, expanding equation \eqref{boh} in terms of $\varepsilon$ and equating the terms of order $1$, we obtain the following (observe that $\alpha_\varepsilon$ will contribute to the first order in $\varepsilon$ only if $m=0$ and, therefore, $A= \langle u(\cdot,0)\rangle := \int_{{\mathbb{S}}^1} u(x,0)\, dx \neq 0$):
\begin{equation} \label{homologicaleq}
v(x+\alpha, 0) - v(x,0) = u(x,0) - \langle u(\cdot,0)\rangle \qquad \forall \,x\, \in {{\mathbb{S}}^1},
\end{equation}
the so-called \textit{homological equation}. Observe that it is natural that we need to subtract from $u(x,0)$ its average, if this is not zero. In fact, in order for \eqref{homologicaleq} to have a solution, its right-hand side must have zero average: to see this, it is sufficient to integrate both sides and use that the Lebesgue measure is invariant under $R_\alpha$:
\begin{equation*}
\int_{{\mathbb{S}}^1} \left( u(x,0) - \langle u(\cdot,0)\rangle \right) dx = \int_{\mathbb{S}^1} v(x+\alpha,0) \, dx - \int_{\mathbb{S}^1} v(x,0) \, dx =0.
\end{equation*}
Let us now find an expression for $v(x,0)$ as a Fourier series. Consider:
\begin{equation*}
v(x,0):= \sum_{n\in \mathbb{Z}} \hat{v}(n) e^{2\pi i n x} \qquad \mathrm{and} \qquad u(x,0):= \sum_{n\in \mathbb{Z}} \hat{u}(n) e^{2\pi i n x}.
\end{equation*}
In Fourier terms, \eqref{homologicaleq} becomes:
\begin{equation*}
\sum_{n\in \mathbb{Z}} \hat{v}(n) \left( e^{2\pi i n \alpha} -1 \right) \, e^{2\pi i n x} = \sum_{n\in \mathbb{Z}\setminus \{0\}} \hat{u}(n) e^{2\pi i n x}
\end{equation*}
and therefore, for $n\neq 0$,
\begin{equation*}
\hat{v}(n) = \frac{\hat{u}(n)}{e^{2\pi i n \alpha} -1};
\end{equation*}
we do not determine $\hat{v}(0)$: as is to be expected, $v$ is determined by \eqref{homologicaleq} only up to an additive constant. Substituting in \eqref{linearresponse}, we conclude:
\begin{eqnarray*}
\frac{d \dot{\mu}}{dx}(x) &=& - \partial_x v (x,0) = - 2\pi i \sum_{n\in \mathbb{Z}} \,n \,\hat{v}(n) e^{2\pi i n x} \\
&=& 2\pi i \sum_{n \in \mathbb{Z}\setminus \{0\}} \left(\frac{n\, \hat{u}(n)}{1- e^{2\pi i n \alpha}}\right) e^{2\pi i n x}.
\end{eqnarray*}
\end{proof}
\section{Beyond rotations: the case of circle diffeomorphisms \label{sec:stabdiff}}
In this section, we describe how it is possible to extend our previous results from irrational rotations to diffeomorphisms of the circle having irrational rotation number. We prove the following:
\begin{theorem} \label{stadiff}
Let $T_{0}$ be an orientation-preserving diffeomorphism of the circle with an irrational rotation number $\alpha$ and such that $\log (T_{0}^{\prime })$ has bounded variation (for example, $T_{0}$ is of class $C^{2}$). Let $\mu_0$ be its unique invariant probability measure (see Theorem \ref{Denteo}). Let $\{T_{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel measurable maps of the circle such that
\begin{equation*}
\sup_{x\in {\mathbb S}^{1}}|T_{0}(x)-T_{\delta }(x)|\leq \delta .
\end{equation*}
Suppose that for each $0\leq \delta \leq \overline{\delta }$, $\mu _{\delta }$ is an invariant measure of $T_{\delta }$. Then
\begin{equation*}
\lim_{\delta \rightarrow 0}\int_{{\mathbb S}^{1}} f~d\mu _{\delta }=\int_{{\mathbb S}^{1}} f~d\mu _{0}
\end{equation*}
for all $f\in C^{0}(\mathbb{S}^{1}).$
\end{theorem}
The proof will follow by combining Theorem \ref{statstab} with the Denjoy Theorem \ref{Denteo}.
\begin{proof}[Proof of Theorem \protect\ref{stadiff}]
By Theorem \ref{Denteo} we can conjugate $T_{0}$ with the rotation $R_{\alpha }.$ We apply the same conjugation to $T_{\delta }$ for each $\delta >0$, obtaining a family of maps $U_{\delta }:= h \circ T_\delta \circ h^{-1}$. We summarize the situation in the following diagram:
\begin{equation}
\begin{array}{ccc}
{{\mathbb{S}}^1} & \overset{T_0}{\longrightarrow } & {{\mathbb{S}}^1} \\
\downarrow {\small h} & & \downarrow {\small h} \\
{{\mathbb{S}}^1} & \overset{R_{\alpha}}{\longrightarrow } & {{\mathbb{S}}^1}
\end{array}
\qquad
\begin{array}{ccc}
{{\mathbb{S}}^1} & \overset{T_\delta}{\longrightarrow } & {{\mathbb{S}}^1} \\
\downarrow {\small h} & & \downarrow {\small h} \\
{{\mathbb{S}}^1} & \overset{U_\delta}{\longrightarrow } & {{\mathbb{S}}^1}
\end{array}
\label{diagrams}
\end{equation}
Since $h$ is a homeomorphism of a compact space, it is uniformly continuous. This implies that
\begin{equation*}
\lim_{\delta \rightarrow 0}\sup_{x\in {\mathbb S}^{1}}|R_{\alpha }(x)-U_{\delta }(x)|=0.
\end{equation*}
Let $\overline{\mu }_{\delta }:=h_{\ast }\mu _{\delta }.$ These measures are invariant for $U_{\delta }.$ Then, by Theorem \ref{statstab} we get
\begin{equation*}
\lim_{\delta \rightarrow 0}||\overline{\mu }_{\delta }-m||_{W}=0.
\end{equation*}
This implies (uniformly approximating any continuous function with a sequence of Lipschitz ones) that for each $g\in C^{0}(\mathbb{S}^{1})$
\begin{equation}
\lim_{\delta \rightarrow 0}\int_{\mathbb S^1} g~d\overline{\mu }_{\delta }=\int_{\mathbb S^1} g~dm. \label{inte}
\end{equation}
Now consider $f\in C^{0}(\mathbb{S}^{1})$ and remark that (using the definition of push-forward of a measure)
\begin{eqnarray*}
\int_{\mathbb S^1} f~d\mu _{\delta } &=&\int_{\mathbb S^1} f\circ h^{-1} \circ h~d\mu _{\delta }=\int_{\mathbb S^1} f\circ h^{-1}~d\overline{\mu }_{\delta }, \\
\int_{\mathbb S^1} f~d\mu _{0} &=&\int_{\mathbb S^1} f\circ h^{-1}~d\overline{\mu }_{0}.
\end{eqnarray*}
By \eqref{inte}, considering $g=f\circ h^{-1}$, this shows
\begin{equation*}
\lim_{\delta \rightarrow 0}\int_{\mathbb S^1} f~d\mu _{\delta }=\int_{\mathbb S^1} f~d\mu _{0}.
\end{equation*}
\end{proof}
\bigskip
Similarly, one can extend the quantitative stability results proved in Theorem \ref{stst2} to smooth diffeomorphisms of the circle.
\begin{remark}
We point out that the following theorem holds under much less regularity for $T_0$ (the proof remains the same). In fact, it is enough that $T_0\in C^r({\mathbb S^1})$ with $r$ sufficiently large so that the conjugation $h$ is bi-Lipschitz; compare with Theorem \ref{thmhermanyoccoz}.
\end{remark}
\medskip
\begin{theorem} \label{quantdiff}
Let $T_{0}$ be a $C^{\infty }$ diffeomorphism of the circle with Diophantine rotation number $\alpha \in \mathcal{D}(\tau )$, for some $\tau>1$, and let $\mu_0$ be its unique invariant probability measure. Let $\{T_{\delta }\}_{0\leq \delta \leq \overline{\delta }}$ be a family of Borel measurable maps of the circle such that
\begin{equation*}
\sup_{x\in {\mathbb S^{1}}}|T_{0}(x)-T_{\delta }(x)|\leq \delta .
\end{equation*}
Suppose that for each $0\leq \delta \leq \overline{\delta }$, $\mu _{\delta }$ is an invariant measure of $T_{\delta }$. Then, for each $\ell <{\frac{1}{\gamma (\alpha )+1}}$ we have:
\begin{equation*}
\Vert \mu _{0}-\mu _{\delta }\Vert _{W}=O(\delta ^{\ell }).
\end{equation*}
\end{theorem}
\begin{proof}
By Theorem \ref{thmhermanyoccoz}, there exists $h\in \mathrm{Diff}_{+}^{\infty }({{\mathbb{S}}^{1}})$ conjugating $T_{0}$ with the rotation $R_{\alpha }.$ We apply the same conjugation to $T_{\delta }$ for each $\delta >0$, obtaining a family of maps $U_{\delta }.$ The situation is still summarized by $(\ref{diagrams}).$ Since $h$ is a bi-Lipschitz map, we have
\begin{equation*}
\lim_{\delta \rightarrow 0}\sup_{x\in {\mathbb S^{1}}}|R_{\alpha }(x)-U_{\delta }(x)|=0
\end{equation*}
and there is a $C\geq 1$ such that for any pair of probability measures $\mu _{1},\mu _{2}$
\begin{equation*}
C^{-1}||\mu _{1}-\mu _{2}||_{W}\leq ||h_{\ast }^{-1}\mu _{1}-h_{\ast }^{-1}\mu _{2}||_{W}\leq C||\mu _{1}-\mu _{2}||_{W}
\end{equation*}
(and the same holds for $h_{\ast }$). Let $\overline{\mu }_{\delta }:=h_{\ast }(\mu _{\delta }).$ These measures are invariant for $U_{\delta }.$ By Theorem \ref{stst2} we then get that for each $\ell <{\frac{1}{\gamma (\alpha )+1}}$ we have:
\begin{equation*}
\Vert m-\overline{\mu }_{\delta }\Vert _{W}=O(\delta ^{\ell }).
\end{equation*}
This implies
\begin{equation*}
\Vert \mu _{0}-\mu _{\delta }\Vert _{W}=||h_{\ast }^{-1}m-h_{\ast }^{-1}\overline{\mu }_{\delta }||_{W}=O(\delta ^{\ell }).
\end{equation*}
\end{proof}
\bigskip
Finally, one can also extend the existence of linear response, along the same lines of Theorem \ref{KAMandResp} and Corollary \ref{corKAM}.
In fact, as observed in Remark \ref{remarkteokam} ({\it iii}), the KAM theorem can be extended to sufficiently regular diffeomorphisms of the circle (one can prove it either directly ({\it e.g.}, \cite{Arnold, BroerSevryuk, Moser, Russmann, Vano}), or by combining the result for rotations of the circle with Theorem \ref{thmhermanyoccoz}). Since the proof can be adapted {\it mutatis mutandis} (of course, leading to a different expression for the linear response), we omit further details.
\medskip
\section{Stability under discretization and numerical truncation}\label{sectrunc}
As an application of the results discussed above, we want to address the following question:
\medskip
\noindent \textbf{Question:} \emph{Why are numerical simulations generally quite reliable, in spite of the fact that numerical truncations are quite bad perturbations, transforming the system into a piecewise constant one, having only periodic orbits?}
\medskip
Let us consider the uniform grid $E_{N}$ on $\mathbb{S}^{1}$ defined by
\begin{equation*}
E_{N}=\left\{\frac{i}{N}\in \mathbb{R}/\mathbb{Z}: \quad 1\leq i\leq N \right\}.
\end{equation*}
In particular, when $N=10^{k}$, the grid represents the points which are representable with $k$ decimal digits. Let us consider the projection $P_{N}:\mathbb{S}^{1}\rightarrow E_{N}$ defined by
\begin{equation*}
P_{N}(x)=\frac{\left\lfloor Nx\right\rfloor }{N},
\end{equation*}
where $\lfloor \cdot \rfloor$ is the floor function. Given a map $T:\mathbb{S}^{1}\rightarrow \mathbb{S}^{1}$ and $N\in {\mathbb{N}}$, we define its \textit{$N$-discretization} $T_{N}:\mathbb{S}^{1}\rightarrow \mathbb{S}^{1}$ by
\begin{equation*}
T_{N}(x):=P_{N}(T(x)).
\end{equation*}
This is an idealized representation of what happens if we try to simulate the behavior of $T$ on a computer having $N$ points of resolution. Of course, the general properties of the systems $T_{N}$ and $T$ are a priori completely different, starting from the fact that every orbit of $T_{N}$ is eventually periodic. Still, these simulations give in many cases quite a reliable picture of many aspects of the behavior of $T$, which justifies why such naive simulations are still much used in many applied sciences. Focusing on the statistical properties of the system and on its invariant measures, one can investigate whether the invariant measures of the system $T_{N}$ (when they exist) converge to the physical measure of $T$, and in general if they converge to some invariant measure of $T$. In this case, the statistical properties of $T$ are in some sense robust under discretization. Results of this kind have been proved for some classes of piecewise expanding maps (see \cite{Bo}, \cite{GB}) and for topologically generic diffeomorphisms of the torus (see \cite{Gu}, \cite{Gu2}, \cite{mier}). Since the discretization is a small perturbation in the uniform convergence topology, a direct application of Theorem \ref{stadiff} gives
\begin{corollary} \label{discrediffeo}
Let $T_{0}$ be an orientation-preserving diffeomorphism of the circle with an irrational rotation number $\alpha$ and such that $\log (T_{0}^{\prime })$ has bounded variation, and let $N\geq 1$. Let $T_{N}=P_{N}\circ T_{0}$ be the family of maps given by its $N$-discretizations. Suppose $\mu _{N}$ is an invariant measure of $T_{N}$.
Then
\begin{equation*}
\lim_{N\rightarrow \infty }\int_{\mathbb S^1} f~d\mu _{N}=\int_{\mathbb S^1} f~d\mu _{0}
\end{equation*}
for all $f\in C^{0}(\mathbb{S}^{1}).$
\end{corollary}
\begin{proof}
The statement follows by Theorem \ref{stadiff}, noticing that
\begin{equation*}
\sup_{x\in {\mathbb{S}}^{1}}|T_{0}(x)-T_{N}(x)|\leq \frac{1}{N}.
\end{equation*}
\end{proof}
We think this result is very similar to the one shown in Proposition 8.1 of \cite{mier}. Comparing these results with the ones in \cite{Gu}, we point out that in our statement we do not suppose the system to be topologically generic, and that the convergence is proved for all discretizations, while in \cite{Gu} the convergence is proved only along a certain sequence of finer and finer discretizations.
As an application of our quantitative stability results (Theorems \ref{stst2} and \ref{quantdiff}), we can also provide a quantitative estimate for the speed of convergence of the invariant measure of the $N$-discretized system to the original one. We remark that, as far as we know, there are no other quantitative convergence results of this kind in the literature.
\begin{corollary} \label{quantdiscrediffeo}
Let $T_{0}$ be a $C^{\infty }$ diffeomorphism of the circle with Diophantine rotation number $\alpha \in \mathcal{D}(\tau )$, for some $\tau>1$, and let $\mu_0$ be its unique invariant probability measure. Let $T_{N}=P_{N}\circ T_0$ be the family of its $N$-discretizations. Suppose $\mu _{N}$ is an invariant measure of $T_{N}$. Then, for each $\ell <{\frac{1}{\gamma (\alpha )+1}}$
\begin{equation*}
\Vert \mu _{0}-\mu _{N}\Vert _{W}=O(N^{-\ell }).
\end{equation*}
\end{corollary}
The proof of Corollary \ref{quantdiscrediffeo} is similar to the one of Corollary \ref{discrediffeo}.
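\begin{remark}
To give a concrete feeling for Corollaries \ref{discrediffeo} and \ref{quantdiscrediffeo}, the following toy experiment (ours, purely illustrative; it assumes Python with NumPy, and the rotation number of the perturbed map below is assumed, not proved, to be irrational) computes a periodic cycle of the $N$-discretization $T_N = P_N \circ T_0$ on the grid $E_N$ and measures the distance between the uniform measure on that cycle and the Lebesgue measure, using the $L^1$ distance of distribution functions on $[0,1)$ as a simple stand-in for the Wasserstein distance (the perturbation is small, so $\mu_0$ is close to $m$).
\end{remark}
\begin{verbatim}
# Toy experiment: invariant measures of the N-discretization of a circle
# diffeomorphism close to an irrational rotation (illustrative sketch).
import numpy as np

alpha = (np.sqrt(5.0) - 1.0) / 2.0

def T0(x):
    # smooth circle diffeomorphism: rotation plus a small perturbation
    # (derivative 1 + 0.05*cos(2 pi x) > 0, so T0 is indeed a diffeo)
    return (x + alpha + 0.05 * np.sin(2.0 * np.pi * x) / (2.0 * np.pi)) % 1.0

def TN_cycle(N, i0=0):
    """Iterate the N-discretization T_N = P_N o T0 on the grid E_N from
    the point i0/N until a repetition occurs; return the periodic cycle."""
    seen = {}
    i, step = i0, 0
    while i not in seen:
        seen[i] = step
        i = int(np.floor(N * T0(i / N))) % N
        step += 1
    first = seen[i]                        # step at which the cycle starts
    cycle = [j for j, s in seen.items() if s >= first]
    return np.array(sorted(cycle)) / N     # support of an invariant measure

def dist_to_lebesgue(support):
    """L1 distance between the CDF of the uniform measure on `support`
    and the CDF of Lebesgue on [0,1) (a proxy for the W-distance)."""
    xs = np.linspace(0.0, 1.0, 20001)
    cdf = np.searchsorted(np.sort(support), xs, side="right") / len(support)
    return np.mean(np.abs(cdf - xs))

for N in (10**3, 10**4, 10**5):
    mu_N = TN_cycle(N)
    print(f"N = {N:>6d}  cycle length = {len(mu_N):>6d}  "
          f"dist(mu_N, m) ~ {dist_to_lebesgue(mu_N):.2e}")
\end{verbatim}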
\section{Introduction}
Experiments were performed using coherent Raman heterodyne scattering (RHS) techniques \cite{Mlynek1983} for the three-level schemes shown in Fig. \ref{raman_heterodyne}. This method employs a radio-frequency (RF) field resonant with a Zeeman transition \ket{1}$\leftrightarrow$\ket{2} or \ket{3}$\leftrightarrow$\ket{4} to create a ground- or excited-state electron spin coherence (Fig. \ref{raman_heterodyne}A). A resonant laser beam drives the optical transition \ket{1}$\leftrightarrow$\ket{3} or \ket{1}$\leftrightarrow$\ket{4}. The combined optical and RF coherences then induce a coherence on the other optical transition \ket{2}$\leftrightarrow$\ket{3} or \ket{1}$\leftrightarrow$\ket{3}, which interferes with the laser field, producing a beat note at the \ket{1}$\leftrightarrow$\ket{2} or \ket{3}$\leftrightarrow$\ket{4} frequencies that can be detected using a fast photodiode.
\begin{figure}
\includegraphics[width=\columnwidth]{fig1_v7.eps}
\caption{A) Energy levels of Er$^{3+}$ and RHS schemes for ground- and excited-state spin coherence studies (\ket{1}$\leftrightarrow$\ket{2} and \ket{3}$\leftrightarrow$\ket{4} are electron spin transitions). B) RHS measurements at 3 K on Er$^{3+}$ at site 1 of Y$_2$SiO$_5$. Lines: fitted Zeeman transition frequencies for the zero-nuclear-spin Er$^{3+}$ isotopes. Circle: spin echo experimental condition.}
\label{raman_heterodyne}
\end{figure}
We use a 50 ppm Er$^{3+}$-doped YSO sample grown by Scientific Materials Corp. and cut along the three dielectric axes $b$, $D_1$, $D_2$ \cite{Li1992}. Er$^{3+}$ can substitute for Y$^{3+}$ at two inequivalent crystallographic sites (denoted 1 and 2). Furthermore, each site has two sub-groups with different local orientations that exhibit different Zeeman effects unless the magnetic field is applied either parallel or perpendicular to the $b$ axis. The sample was mounted in an Oxford Optistat helium cryostat with a magnetic field applied along $D_1$ using a Helmholtz coil. An external-cavity diode laser was set at 1536.49 nm (vacuum), in resonance with the transition between the lowest crystal field levels of the $^4$I$_{15/2}$ and $^4$I$_{13/2}$ multiplets for ions at site 1 \cite{Bottger2006a}. The laser was amplified and then focused into the crystal with propagation along the crystal's $b$ axis and polarization along the $D_2$ axis. The CW laser incident on the crystal produced nearly equal populations in both the ground and excited states over a $\sim$1 MHz bandwidth. Optical pumping also induced the population differences within the ground- and excited-state Zeeman sub-levels necessary to detect spin coherence with the RHS method. Transmitted light was detected by an AC-coupled photoreceiver with 1 GHz bandwidth. RF pulses with magnetic field amplitudes of up to several Gauss were applied along $b$ through a copper wire RF waveguide held next to the crystal surface. Fig. \ref{raman_heterodyne}B shows the RHS spectra for frequencies of up to 1 GHz as a function of magnetic field strength. In these experiments, an RF spectrum analyzer was used to generate the constant RF excitation and to analyze the photoreceiver signal (see Supplemental Material \cite{SM}). The four straight lines observed in Fig. \ref{raman_heterodyne}B correspond to electron-spin transitions for the ground and excited states of Er$^{3+}$ isotopes with zero nuclear spin, as deduced from the magnetic field direction and known \textbf{g} tensors \cite{Guillot-Noel2006,Sun2008}.
Two transitions are observed for each state since the magnetic field was not exactly parallel to $D_1$, resulting in two inequivalent sub-groups for each site. The corresponding effective ground-state values $g_{g}$ are 4.75 and 3.85 ($\pm$ 0.3), and the excited-state values $g_{e}$ are 4.35 and 3.27 ($\pm$ 0.3), as depicted in Fig. \ref{raman_heterodyne}B. All spin transitions exhibited linewidths of $\sim$10 MHz, similar to those previously reported \cite{Probst2013,Guillot-Noel2006}. No significant variation in linewidth with magnetic field strength was observed, indicating that broadening of the spin transitions does not arise from inhomogeneity in the \textbf{g} tensors. Other transitions with more complex field dependence are also visible in Fig. \ref{raman_heterodyne}B and are attributed to hyperfine transitions of the $^{167}$Er$^{3+}$ isotope ($I=7/2$, abundance 22.9\%).
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig2_v4.eps}
\caption{A) Examples of optically detected electron spin echoes in the $^4$I$_{13/2}$ excited state for different pulse delays with $T= 1.9$ K and $B = 8.7$ mT. B) Measurement (circles) and fit (line) of echo area decay at 1.9 K, giving $T_{2e}=1.6 \pm 0.2$ $\mu$s and $x=1.4 \pm 0.2$. Inset: RF pulse sequence.}
\label{echo}
\end{figure}
Excited-state spin echoes were measured using RF pulses generated by a gated source. The photoreceiver signal was amplified, filtered, and then down-mixed with a local oscillator (see Supplemental Material \cite{SM}). The DC magnetic field of 8.7 mT was applied along the same orientation as in the CW experiments, and an RF frequency of 400 MHz was used to study the excited-state transition (circle in Fig. \ref{raman_heterodyne}B) for temperatures from 1.6 to 3.5 K. The signal for the 2-pulse echo sequence is shown in Fig. \ref{echo}A for two different delays between the excitation pulses. Pulse lengths of 150 ns were used to maximize the echo signal. The detected echo has a $\pi$ phase shift relative to the excitation pulses, confirming that the entire sequence was phase coherent. The large excited-state population created by CW laser excitation allowed strong echo signals to be produced. By varying the delay between the pulses, we measured the decay of the integrated echo signal area (Fig. \ref{echo}B). The decay was fitted with a Mims shape \cite{Mims1968}:
\begin{equation*}
A(t_{12})=A_0\exp[-(2t_{12}/T_{2e})^x],
\end{equation*}
where $T_{2e}$ is the $1/e$ phase coherence lifetime and $t_{12}$ is the delay between the excitation pulses. The extracted $T_{2e}$ at 1.9 K was $1.6 \pm 0.2$ $\mu$s (200 kHz homogeneous linewidth), with $x = 1.4 \pm 0.2$. The non-exponential behavior indicates a spectral diffusion effect due to interactions with the bath of Er$^{3+}$ spins in the lattice \cite{Mims1968}. The effect of $^{89}$Y nuclear spins is expected to be much smaller because of their weak magnetic moment and slow flip rates \cite{Bottger2006b}. Due to the narrow optical excitation bandwidth, the excited-state ion concentration for site 1 is $\sim$10$^{3}$ times lower than the ground-state concentration, so that these ions do not contribute significantly to spectral diffusion. At the Er$^{3+}$ concentration and low fields studied here, ground-state spins are only weakly polarized for both sites and are expected to relax mainly by mutual spin flip-flop processes (spin diffusion), even at very low temperatures.
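For readers who wish to reproduce this kind of analysis, the following minimal sketch (ours, not the analysis code used for the measurements; it assumes Python with NumPy and SciPy, and the synthetic data are purely illustrative) fits the Mims expression $A(t_{12})=A_0\exp[-(2t_{12}/T_{2e})^x]$ to a noisy echo-decay curve:
\begin{verbatim}
# Minimal sketch of a Mims echo-decay fit (illustrative, synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def mims(t12, A0, T2e, x):
    """Mims shape A(t12) = A0 * exp(-(2*t12/T2e)^x)."""
    return A0 * np.exp(-((2.0 * t12 / T2e) ** x))

rng = np.random.default_rng(0)
t12 = np.linspace(0.1e-6, 2.0e-6, 25)                 # pulse delays (s)
data = mims(t12, 1.0, 1.6e-6, 1.4)                    # "true" parameters
data *= 1.0 + 0.03 * rng.standard_normal(t12.size)    # 3% noise

popt, pcov = curve_fit(mims, t12, data, p0=(1.0, 1.0e-6, 1.0))
errs = np.sqrt(np.diag(pcov))
print(f"T2e = {popt[1]*1e6:.2f} +/- {errs[1]*1e6:.2f} us")
print(f"x   = {popt[2]:.2f} +/- {errs[2]:.2f}")
\end{verbatim}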
\begin{figure}
\includegraphics[width=1\columnwidth]{fig3_ver6.eps}
\caption{A) Excited-state stimulated spin echo decay measured at 2.5 K (circles) and fit to the spectral diffusion model (line, see text). Inset: RF pulse sequence. B) Experimental and modeled coherence and population lifetimes as a function of temperature.}
\label{temperature}
\end{figure}
To probe the decoherence mechanisms, we measured 3-pulse stimulated spin echoes using the pulse sequence shown in Fig. \ref{temperature}A. Stimulated echo measurements allow both spin relaxation and spectral diffusion dynamics to be studied over the timescale of $T_1$, whereas 2-pulse echoes are limited to the much shorter $T_2$ timescale \cite{Mims1968,Bottger2006b}. We measured 3-pulse echo decays as a function of $t_{23}$, with $t_{12}$ fixed at 0.3 $\mu$s, for temperatures between 1.6 and 3 K. An example decay is shown in Fig. \ref{temperature}A. The initial fast decay is due to spectral diffusion, while the slower exponential decay component results from population relaxation. Effects of spectral diffusion on echo decays can be modeled by a time-dependent effective homogeneous linewidth $\Gamma_{\mathrm{eff}}$ \cite{Bottger2006b}, with the echo amplitude given by
\begin{equation}
A(t_{12},t_{23}) = A_0\, e^{-t_{23}/T_{1e}}\, e^{-2\pi t_{12}\,\Gamma_{\mathrm{eff}}(t_{12},t_{23})}. \label{tpecho}
\end{equation}
The excited-state spins probed in the echo sequence are perturbed by the bath of ground-state Er$^{3+}$ spins that relax at a rate $R$ with a distribution of interaction strengths characterized by $\Gamma_{\mathrm{SD}}$, so that the effective linewidth in Eq. \ref{tpecho} can be written as \cite{Bottger2006b}
\begin{equation}
\Gamma_{\mathrm{eff}}(t_{12},t_{23}) = \Gamma_0+\frac{1}{2}\Gamma_{\mathrm{SD}}\left(Rt_{12}+1-e^{-Rt_{23}}\right). \label{sd}
\end{equation}
To reduce the number of fit parameters, $T_{1e}$ was determined from an exponential fit to the tail of the 3-pulse echo decays. Moreover, at the temperatures and magnetic field used here, $\Gamma_{\mathrm{SD}}$ and the flip-flop rate $R_{\mathrm{ff}}$ are temperature independent, so that the relaxation rate $R$ can be modeled as a function of temperature $T$ by
\begin{equation*}
R = R_{\mathrm{ff}}+\frac{\alpha_{O,g}}{\exp\left( \Delta_g/ T \right)-1}.
\end{equation*}
The second term corresponds to the resonant two-phonon Orbach process, where $\Delta_g$ is the energy of the next crystal field level above the ground state (in Kelvin) and $\alpha_{O,g}$ is the coupling strength. The two-phonon Raman and one-phonon direct terms are both negligible for our conditions. $\Gamma_0$ is also taken as independent of temperature. Fits of this model to the experimental 2- and 3-pulse echo decays (see Supplemental Material \cite{SM}) give the following values: $\Gamma_0 = 2.7 \times 10^5$ Hz, $\Gamma_{\mathrm{SD}} = 4.3 \times 10^5$ Hz, $R_{\mathrm{ff}} = 2.1 \times 10^4$ s$^{-1}$, $\alpha_{O,g} = 50 \times 10^{10}$ Hz and $\Delta_g = 40$ K. The experimental parameters compare well with theoretical estimates for decoherence from spin flip-flops of site 1 Er$^{3+}$ ground-state spins (see Supplemental Material \cite{SM}), with calculated values of $\Gamma_{\mathrm{SD, theory}} = 4.4 \times 10^5$ Hz and $R_{\mathrm{ff,theory}} = 1.6 \times 10^5$ s$^{-1}$. The crystal field level splitting of 57 K \cite{Bottger2006a} is larger than the fitted value, a common observation in spin-lattice relaxation studies \cite{Young1966,Wolfowicz2015,Bottger2016}.
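The model of Eqs. \ref{tpecho} and \ref{sd} is straightforward to evaluate; the sketch below (ours, for illustration; it assumes Python with NumPy, uses the fitted parameters quoted above, and the value of $T_{1e}$ is an assumed placeholder of the order of the measured ms-scale lifetimes) computes the predicted 3-pulse echo amplitude as a function of the waiting time $t_{23}$:
\begin{verbatim}
# Sketch evaluating the spectral-diffusion model of Eqs. (1)-(2)
# with the fitted parameters quoted in the text (illustrative only).
import numpy as np

Gamma0   = 2.7e5     # Hz
GammaSD  = 4.3e5     # Hz
R_ff     = 2.1e4     # 1/s
alpha_Og = 50e10     # Hz
Delta_g  = 40.0      # K
T1e      = 1.0e-3    # s (placeholder, order of the measured ~ms values)

def R(T):
    """Ground-state relaxation rate: flip-flops plus Orbach process."""
    return R_ff + alpha_Og / (np.exp(Delta_g / T) - 1.0)

def Gamma_eff(t12, t23, T):
    """Effective homogeneous linewidth of Eq. (2)."""
    return Gamma0 + 0.5 * GammaSD * (R(T) * t12 + 1.0 - np.exp(-R(T) * t23))

def echo(t12, t23, T, A0=1.0):
    """3-pulse echo amplitude of Eq. (1)."""
    return (A0 * np.exp(-t23 / T1e)
               * np.exp(-2.0 * np.pi * t12 * Gamma_eff(t12, t23, T)))

t23 = np.logspace(-7, -3, 9)          # waiting times (s)
for t, a in zip(t23, echo(0.3e-6, t23, T=2.5)):
    print(f"t23 = {t:9.2e} s   A/A0 = {a:.3f}")
\end{verbatim}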
For our field orientation, the ground-state $g$ factors for site 1 and site 2 spins are 4 and 14 \cite{Sun2008}, so that the site 2 flip-flop rate will be faster than for site 1 by a factor of roughly $(14/4)^4= 150$. This much faster rate causes decoherence due to site 2 spins to be reduced because of the well-known ``motional narrowing'' effect \cite{Bloembergen48}. Consequently, we attribute $\Gamma_0$ to site 2 ground-state spins that produce decoherence over sub-$\mu$s timescales. This conclusion is supported by a simple theoretical estimate of $\Gamma_{\mathrm{0,theory}} =6 \times 10^5$ Hz for this effect, consistent with the observed value (see Supplemental Material \cite{SM}). Fig. \ref{temperature}B shows the calculated variation of $T_{1g}=1/R$ as a function of temperature. Below 2.2 K, relaxation is dominated by the flip-flop process. A plot of $T_{1e}$ extracted from Eq. \ref{tpecho} and the curve calculated from Eq. \ref{sd} by setting $t_{23}=0$ are shown in Fig. \ref{temperature}B. The $T_{1e}$ variations are explained by the sum of the optical excitation and emission rates combined with Raman and Orbach spin relaxation processes (see Fig. 3B and Supplemental Material \cite{SM}). At the lowest temperatures, $T_{1e}$ reached 1.2 ms, limited by the optical stimulated emission rate due to continuous excitation by the laser. In a pulsed configuration without laser excitation during the echo sequence, this would increase to the limit of $T_{\mathrm{1,opt}}=8$ ms \cite{thiel2012}. For our conditions, $T_{2e}$ is limited by decoherence due to relaxation of ground-state spins. This decoherence would be reduced at lower temperatures, since the Orbach and Raman contributions rapidly decrease as $\exp(-\Delta_e/T)$ and $T^9$, respectively, while flip-flop rates decrease as $[\mathrm{sech}( g_{\mathrm{eff}}\mu_B B/2kT )]^2$, where $k$ is the Boltzmann constant. For example, consider a weak field of 50 mT that is compatible with superconducting resonators and where the excited-state splitting is about 2 GHz, also in the range of typical microwave photons. At a temperature of 20 mK, decoherence only results from site 1 ground-state spin flip-flops because site 2 spins, with their large $g$ factor, are completely polarized. The site 1 flip-flop rate $R_{\mathrm{ff}}$ would be reduced by a factor of $\sim$350 compared to 2 K. Together, these effects would result in a much longer excited-state spin coherence lifetime of $T_{2e} \approx 1.8$ ms (see Supplemental Material \cite{SM}). This is likely to approach the decoherence limit due to $^{89}$Y spin flips \cite{Bottger2006b}. In contrast, even at very low temperatures, site 1 ground-state excitations will still experience rapid decoherence through the flip-flop process due to the large number of other resonant spins present in the lattice. In fact, spins excited into the higher-energy spin state will have an increasing number of neighbors in the lower-energy spin state that they can flip-flop with as the temperature is decreased, accelerating decoherence to as much as twice the high-temperature rate. For our system, this effect will limit $T_{2g}$ to less than 30 $\mu$s even at the lowest temperatures. Moreover, a rephasing control pulse applied over the entire spin linewidth would cause strong instantaneous spectral diffusion (ISD), reducing $T_{2g}$ to $\sim$1 $\mu$s, independent of temperature (see Supplemental Material \cite{SM}). These effects explain why we did not observe any ground-state spin echo for the conditions used in the excited-state spin echo measurements.
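As a consistency check of the quoted suppression factor, the flip-flop scaling $[\mathrm{sech}(g_{\mathrm{eff}}\mu_B B/2kT)]^2$ can be evaluated directly. In the sketch below (ours; the choice of $g_{\mathrm{eff}}$ values to scan is an assumption, taken near the measured site 1 ground-state values), an effective $g$ value of about 4.3 indeed reproduces a reduction factor of order 350 between (8.7 mT, 2 K) and (50 mT, 20 mK):
\begin{verbatim}
# Order-of-magnitude check of the flip-flop suppression factor
# R_ff ~ sech^2(g_eff * mu_B * B / (2 k T))  (illustrative).
import numpy as np

mu_B = 9.274e-24    # Bohr magneton (J/T)
k    = 1.381e-23    # Boltzmann constant (J/K)

def ff_factor(g_eff, B, T):
    x = g_eff * mu_B * B / (2.0 * k * T)
    return 1.0 / np.cosh(x) ** 2

for g_eff in (3.85, 4.3, 4.75):
    ratio = ff_factor(g_eff, 8.7e-3, 2.0) / ff_factor(g_eff, 50e-3, 0.020)
    print(f"g_eff = {g_eff:4.2f}:  suppression factor ~ {ratio:5.0f}")
# g_eff ~ 4.3 gives a factor of ~340, consistent with the ~350 quoted
# in the text; larger or smaller g_eff shifts the factor accordingly.
\end{verbatim}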
We note that our analysis is also in qualitative agreement with the ground-state site 2 coherence lifetime of $5.6$ $\mu$s that has been observed at 30 mK with a different magnetic field orientation \cite{Probst2013}. Finally, we turn to an excited-state scheme for an optical-to-microwave transducer. Previous proposals used the ground-state spin by coupling it off-resonantly \cite{Williamson2014} or resonantly \cite{OBrien2014} to a microwave cavity. One limiting factor for efficient conversion in \cite{OBrien2014} is the short coherence lifetime compared to the coupling strength. To exploit the potential increase in coherence time for the excited state, the protocol from \cite{OBrien2014} can be modified in the following way. The first step is the same as in \cite{OBrien2014}: we prepare a narrow spectral feature and then apply a magnetic field gradient to produce an inhomogeneously broadened feature. An incoming optical photon is then absorbed on the \ket{1} - \ket{4} transition. The induced inhomogeneous broadening and the free evolution of the system lead to a dephasing of the optical coherence, preventing re-emission and ensuring that the optical photon is stored as a matter excitation. In the next step, we apply a $\pi$-pulse on the \ket{1} - \ket{3} transition, bringing the population from state \ket{1} to \ket{3}. The subsequent free evolution will further dephase the system due to the inhomogeneous broadening of the spin state, but now at a possibly different rate. After a delay, we apply a second $\pi$-pulse to bring population back into \ket{1}, while simultaneously reversing the field gradient to begin the rephasing procedure. Once the dephasing due to inhomogeneous broadening of the excited state is compensated, we apply another $\pi$-pulse, moving the population to state \ket{3} to complete the rephasing procedure, leaving the system in a collective state that strongly couples to a microwave cavity. Assuming the same spin-cavity coupling strength of $v/2\pi = 34\ \text{MHz}$ as in \cite{Probst2013}, which is justified since the principal values of the magnetic $g$ tensors for the $^4$I$_{15/2}$ and $^4$I$_{13/2}$ states in Er$^{3+}$:Y$_2$SiO$_5$ are roughly the same \cite{Sun2008}, and using our measured spin coherence lifetime of $T_{2e}= 1.6$ $\mu$s, we can estimate the conversion efficiency using Eq. 8 in \cite{OBrien2014} to be $\eta \gtrsim 99\%$. A further advantage of using the excited-state spin transition is the fact that only optically excited ions, i.e. those in the laser beam cross section, will interact with the microwave cavity, making spatial hole burning, which might be required in the previous proposals, superfluous. We note that the proposed protocol can also be reversed, i.e. it allows conversion of a microwave photon to a propagating telecom photon. Moreover, the bandwidth of the optical photon can be tuned in the protocol by controlling the strength of the field gradient. In conclusion, we observed electron spin echoes in the optically excited state of an erbium-doped crystal. Coherence lifetimes of up to 1.6 $\mu$s were recorded for a field of 8.7 mT at 1.9 K, and a detailed analysis of the decoherence processes suggests that ms lifetimes could be reached for conditions used in superconducting qubit studies. We propose a scheme to exploit these long coherence lifetimes for reversible optical-to-microwave conversion, with our analysis predicting near-unity conversion efficiency.
Overall, the possibility of using excited-state spin transitions opens a new and attractive way to coherently interface RE ensembles with microwave cavities and may stimulate new proposals for transducer devices. We thank J. Bartholomew, N. Sinclair, M. Falamarzi, D. Oblak, and W. Tittel for useful discussions, and A. Marsh and R. Nerem for assistance during measurements. This work received funding from the joint French-US ANR project DISCRYS (No. 14-CE26-0037-01) and US NSF grant no. CHE-1416454, as well as Nano'K project RECTUS, the University of Calgary, and NSERC. {\it Note:} A related experiment using hyperfine states of $^{167}$Er$^{3+}$ has been performed in parallel \cite{Rakonjac2018}.
\section{Introduction} The physics of columnar crystals is relevant to the Abrikosov lattice of flux lines in Type-II superconductors and liquid crystalline materials like concentrated phases of long polymers or discotics. The stability of the columnar crystal has been investigated, and various mechanisms proposed for its melting. Conventional melting, which arises when phonon displacements reach a fixed fraction of the lattice constant, can easily be located via the Lindemann criterion~\cite{Nelson:directed,Jain:ice}. Melting destroys the two-dimensional crystalline order perpendicular to the columns, leading to a nematic liquid of lines or columns, which is entangled at sufficiently high densities. Crystal defects play an important role above the melting transition. If edge dislocations in the crystal proliferate, they drive the shear modulus to zero, leading to a liquid-like shear viscosity. However, dislocations alone cannot destroy the six-fold orientational order of the triangular lattice in a two-dimensional cross-section. Thus, provided disclination lines do not also proliferate, the resulting liquid of lines is hexatic, not isotropic~\cite{MarchNel:hexatic}. The screw component of the unbound dislocations leads to entanglement. A finite concentration of unbound disclinations superimposed on the hexatic liquid leads to isotropic in-plane order. Another kind of transition is brought about by vacancy/interstitial line defects in columnar crystals composed of long, continuous lines. As discussed in Ref.~\cite{Frey:defect}, under suitable conditions (such as high field and small interlayer coupling in layered superconductors), it can become favourable for these line defects to proliferate. If this happens at a temperature $T_d$ below the melting temperature $T_m$, then the phase that exists between $T_d$ and $T_m$ will be simultaneously crystalline and highly entangled. In the boson analogy of an aligned system of lines, where the lines represent two-dimensional bosons traveling in the ``time-like'' axial ($\hat{\mathbf z}$) direction~\cite{Nelson:directed}, such a phase is analogous to the supersolid phase of the bosonic system which incorporates vacancies and interstitials in its ground state. This entangled solid melts into an entangled liquid or an entangled hexatic at even higher temperatures. The proliferation of vacancy or interstitial strings could also affect a crystal-to-hexatic transition mediated by dislocations. Dislocations in the columnar crystalline geometry are normally constrained to lie in the vertical plane formed by their Burgers vector and the $\hat{\mathbf z}$-axis, because a dislocation in a two-dimensional cross-section can move along the columnar axis only through glide parallel to its Burgers vector. Transverse motion (climb) would require it to absorb or emit vacancies or interstitials. This becomes possible in the supersolid phase, thus allowing dislocation loops to take on arbitrary non-planar configurations which would have to be included in the treatment of Ref.~\cite{MarchNel:hexatic} to study melting out of a supersolid phase~\cite{MR:supersolid}. Vacancy/interstitial strings in a columnar crystal tend to be lines themselves because of the continuity of the columns.
If the columns are constrained to be continuous across the entire sample (as is the case for vortex lines in Type II superconductors), these defects must either thread the entire sample (Fig.~\ref{string}) or appear in vacancy/interstitial pairs forming loops (Fig.~\ref{loop})~\cite{Frey:defect}. The situation is different, however, for finite-length polymers, or columns of discotic liquid crystal molecules which can break and reform freely. As illustrated in Fig.~\ref{polymer}a, a slice through a low-temperature configuration in a polymer columnar crystal (with translational order perpendicular to the column axis but not parallel to it) would consist of tightly bound polymer ``heads and tails''. At higher temperatures, however, the heads and tails will separate, either moving apart to form a vacancy string or sliding past each other to form a line of interstitials (Fig.~\ref{polymer}b)~\cite{polymer-defects}. In columnar discotic crystals with similar translational order, ``heads'' and ``tails'' are absent at low temperatures, but appear spontaneously when vacancy and interstitial strings are excited (Fig.~\ref{discotic}). (Head and tail defects appear superficially like dislocations in the cross sections shown in Figs.~\ref{polymer} and~\ref{discotic}. A three-dimensional analysis of lines and columns in neighbouring sheets like that shown in Figs.~\ref{string} and~\ref{loop} is necessary to clearly reveal that these are strings of vacancies and interstitials.) \noindent \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=3.2in \epsfbox{string.eps} \caption{Vacancy string ${\mathbf r}_d(z)$ (thick dashed curve) meandering through a columnar crystal. Dashed lines represent columns just above or below the plane of the figure. (Taken from Ref.~\protect\cite{Frey:defect}.)} \label{string} \end{figure} \end{minipage} \hfill \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=2.8in \epsfbox{loop.eps} \smallskip \caption{Vacancy-interstitial loop in a columnar crystal. Dashed lines represent columns just above or below the plane of the figure. (Taken from Ref.~\protect\cite{Frey:defect}.)} \label{loop} \end{figure} \end{minipage} \medskip \noindent \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=3in \epsfbox{poly.eps} \caption{Formation of vacancy/interstitial strings by sliding of polymers within columns in a columnar crystal of finite-length polymers.} \label{polymer} \end{figure} \end{minipage} \hfill \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=3in \epsfbox{discotic.eps} \bigskip \caption{Formation of vacancy/interstitial strings by the spontaneous creation of column ends (``heads'' and ``tails'') in a columnar discotic crystal.} \label{discotic} \end{figure} \end{minipage} \medskip Unlike dislocation lines, these strings (and loops) are not constrained to be planar: the lines can jump to any neighbouring lattice site as they traverse the crystal. Several horizontal jumps connecting a head to a tail are shown in Fig.~\ref{jumps}. Note that \underline{left}ward deflections of the interstitial segment connecting a head to a tail are accompanied by \underline{right}ward deflections of the lines or columns themselves. A typical string can be approximated by an alternating sequence of straight segments and kinks joining the head of one column or polymer chain to the tail of another (see Fig.~\ref{walk}).
Vacancy/interstitial strings are suppressed at low temperatures because they have a finite line tension, and hence an energy proportional to their length. At higher temperatures, heads and tails can move apart, forming variable-length strings that wander or ``diffuse'' perpendicular to their length by forming kinks. These strings thus resemble living polymers~\cite{Safran}, except that they are directed, on average, along the $\hat{\mathbf z}$-axis. In polymer crystals, the number of such strings is determined by the fixed concentration of heads and tails. In columnar discotic crystals, heads and tails can be created freely, and it is appropriate to treat their statistical mechanics in a grand canonical ensemble by introducing a head/tail fugacity, similar to the fugacity which controls defect concentrations in theories of vortex or dislocation unbinding transitions~\cite{Nelson:trans}. We assume here that we can treat polymer crystals using the same formalism provided we tune the head/tail fugacity to achieve the fixed concentration determined by the mean polymer length. Long polymers imply a dilute distribution of heads and tails. We exclude, for simplicity, the possibility of hairpin excitations in polymer systems, which can be regarded as doubly quantized interstitial excitations leading to a higher energy. As we shall see, the sharp defect proliferation transition discussed in Ref.~\cite{Frey:defect} is blurred when there is a finite concentration of heads and tails in equilibrium. \noindent \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=2in \epsfbox{jumps.eps} \caption{Illustration of a vacancy string (thick dashed curve) joining a column head to another column's tail in a columnar crystal composed of long-chain polymers.} \label{jumps} \end{figure} \end{minipage} \hfill \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=2.8in \epsfbox{walk.eps} \caption{Schematic of a defect string (composed of straight segments and kinks) wandering through the columnar crystal.} \label{walk} \end{figure} \end{minipage} \medskip Given an appropriate combination of parameters, namely, low line tension combined with head/tail and kink energies comparable to the temperature, the entropy of diffusion of the strings can overcome the line tension and lead to string proliferation, allowing heads and tails to separate to arbitrarily large distances. As in its bosonic counterpart, there exists off-diagonal long-range order in this phase, represented by \begin{equation} \label{entangle} \lim_{|{{\mathbf r}_\perp}'-{\mathbf r}_\perp|\rightarrow \infty} \langle\psi({\mathbf r}_\perp,z)\psi^*({{\mathbf r}_\perp}',z')\rangle \neq 0 \end{equation} where $\psi$ and $\psi^*$ are head and tail ``destruction'' and ``creation'' operators~\cite{Nelson:directed}, implying entanglement of lines on a macroscopic scale. If defects are absent or appear only in closed loops, the expression above would vanish as $|{{\mathbf r}_\perp}' - {\mathbf r}_\perp| \rightarrow \infty$. Once defects proliferate, a line can wander to any other column and Eq.~(\ref{entangle}) has a finite limit. 
A crystal with proliferating vacancies and interstitials is an incommensurate phase --- the magnitude of the smallest reciprocal vector $G = 4\pi/(\sqrt{3}a_0)$ is no longer related to the areal density in the obvious way as $\rho = \sqrt{3}G^2/(8\pi^2)$ because the density differs from its defect-free value $\rho_0 = 2/(\sqrt{3}a_0^2)$ ($a_0$ being the lattice constant of the triangular lattice in cross-section). All crystals of pointlike atoms or molecules are trivially ``incommensurate'' in this sense --- the corresponding pointlike vacancies and interstitials proliferate at any finite temperature. It is the anomalous suppression of vacancies and interstitials and their organization into lines at low temperatures in columnar crystals which makes these materials unusual. In this paper we apply the physics of directed lines to vacancy/interstitial strings. With this in mind, we briefly review the elasticity theory of these systems in the next Section. In Sec. \ref{single} we model a single string and estimate its transverse wandering. The form of this wandering is unchanged by coupling to phonon distortions of the lattice, as shown in Appendix \ref{bend}. So is its magnitude, as calculated in Appendix \ref{DR}. In Sec. \ref{many} we apply the statistical mechanics of living polymers to an ensemble of directed strings and calculate their volume fraction, average length, etc. in the non-interacting limit. A simple quadratic-interaction model is presented in Section \ref{int}, similar to the one discussed via the boson mapping in Ref.~\cite{Nelson:directed}, and we reproduce the results therein. Numerical calculations of the line tensions of various species of defects are presented in Sec. \ref{num}. The interaction potentials considered are repulsive and monotonic; we study simple power laws as well as a screened Debye-H\"{u}ckel interaction. We find many metastable species of vacancies. However, the lowest energy defect is always found to be the one with the highest symmetry in its category. For very short-range interactions, this is the symmetric vacancy ($V_6$), whereas for most interactions the centered interstitial ($I_3$) is most favoured. Appendix \ref{Ewald} contains details of the Ewald summation calculations for the potentials considered here. \section{Review of elasticity theory} \label{theory} Before discussing defects in a columnar crystal, we review the aspects of elasticity theory common to all the systems mentioned in the Introduction. We consider lines or columns aligned along a common direction ($\hat{\mathbf z}$) up to thermal fluctuations, with crystalline order in any cross-section perpendicular to the columnar axis. In the case of flux lines, the average direction of alignment is imposed by an external field (${\mathbf H}=H \hat{\mathbf z}$) and local deviations from this direction cost energy. With columnar crystals of long-chain molecules composed of covalently bonded nematogens or disk-shaped molecules cylindrically stacked via hydrogen bonds, or amphiphilic molecules in cylindrical micellar aggregates, the columnar axis represents spontaneously broken rotational symmetry. Therefore local deviations from the alignment direction are not penalized, but undulations of the column are. The rotational symmetry can, however, be broken by imposing an external field. In addition, the two-dimensional crystalline order resists shear and areal deformations perpendicular to the $\hat{\mathbf z}$-axis.
Low-energy fluctuations of the system can be described by a ``continuum'' model that works for small-amplitude, long-wavelength deformations~\cite{SelB,Nelson:directed,deGennes}. The important fluctuations in this limit can be characterized by a two-dimensional displacement field ${\mathbf u}({\mathbf r}_\perp,z)$, representing the average deviation of lines in the ($x,y$) plane in a small region centered at $({\mathbf r}_\perp,z)$. With it can be associated a local areal density change $\delta \rho/\rho_0 = - {\mathbf \nabla}_\perp \cdot {\mathbf u}$ ($\rho_0 = 2/(\sqrt{3} a_0^2)$) and a local nematic director ${\hat{\mathbf n}} = {\hat{\mathbf z}} + {\mathbf t}$, with ${\mathbf t} \equiv \partial{\mathbf u}/\partial z$. The free energy of the system is a sum of nematic and crystalline contributions: \begin{equation} \label{Ftot} {\mathcal F} = {\mathcal F}_{nematic} + {\mathcal F}_{crystal} . \end{equation} To the lowest order in the fluctuations, these are given by \begin{equation} {\mathcal F}_{nematic} = \frac{1}{2} \int\!\! d^{3}r \left[K_1 ({\mathbf \nabla}_\perp \cdot {\mathbf t})^2 + K_2 ({\mathbf \nabla}_\perp \times {\mathbf t})^2 + K_3 (\partial_z {\mathbf t})^2 \right] \end{equation} and \begin{equation} \label{Fxtal} {\mathcal F}_{crystal} = \int\!\! dz\! \int\!\! d^2{\mathbf r}_\perp \left[\mu\,{u_{i j}}^2 + \frac{1}{2} \lambda\,\left(\frac{\delta\rho}{\rho_0}\right)^2 \right] \end{equation} where $K_1, K_2, K_3$ are the Frank constants for splay, twist and bend respectively, and $\lambda$ and $\mu$ are the Lam\'{e} coefficients. The matrix $u_{ij} = (\partial_i u_j + \partial_j u_i)/2$ is the linearized 2D strain field. In the presence of an external field $H {\hat{\mathbf z}}$, one should add to ${\mathcal F}$: \begin{equation} \label{ext} {\mathcal F}_{ext} = \frac{1}{2} \chi_a H^2 \int\!\! dz \int\!\! d^2r_\perp |{\mathbf t}|^2 , \end{equation} where $\chi_a$ is the anisotropic part of the susceptibility~\cite{deGennes}. The last two contributions to ${\mathcal F}$ are quadratic in the derivatives, and can be rewritten as \begin{eqnarray} {\mathcal F}_{crystal} + {\mathcal F}_{ext} = & \frac{1}{2} \int\!\! d^{3}r \left[c_{11} ({\mathbf \nabla}_\perp \cdot {\mathbf u})^2 + c_{66} ({\mathbf \nabla}_\perp \times {\mathbf u})^2 + c_{44} (\partial_z{\mathbf u})^2 \right] \nonumber\\ & \mbox{ } + \mu \;(\mbox{surface terms}) \end{eqnarray} where $c_{11} \equiv \lambda + 2 \mu$, $c_{66} \equiv \mu$, and $c_{44} \equiv \chi_a H^2 \rho$. The surface terms become important when there are defects within the bulk of the crystal, like vacancy/interstitial strings, represented by cuts joining column-end singularities in the field ${\mathbf u}({\mathbf r}_\perp,z)$. Evaluating these terms over a cylindrical surface enclosing such a string yields the energy cost of the defect string: a line tension $\tau_z \approx \mu a_0^2$ due to the elastic distortion around the string, in addition to a core energy $E_c$ per unit length (of the same order of magnitude) within the cylindrical core. ${\mathcal F}_{nematic}$ can be further simplified if, as is often the case with nematic polymers, the splay and twist constants are small in comparison to the bend constant. Specifically, if $K_1$ and $K_2$ satisfy $K_{1,2} a_0^{-1}/\sqrt{K_3 c_{11}} \ll 1$~\cite{Jain:ice}, then they can be neglected. For long-wavelength distortions along the columnar axis, the dominant free energy contribution is then $K_3 (\partial_z^2 {\mathbf u})^2$ in the absence of an external field.
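As a consistency check on the rewriting above (a symbolic sketch, not part of the original derivation), one can verify that $\mu\,u_{ij}^2 + \frac{\lambda}{2}({\mathbf \nabla}_\perp\cdot{\mathbf u})^2$ differs from $\frac{1}{2}[c_{11}({\mathbf \nabla}_\perp\cdot{\mathbf u})^2 + c_{66}({\mathbf \nabla}_\perp\times{\mathbf u})^2]$ with $c_{11}=\lambda+2\mu$ and $c_{66}=\mu$ only by a multiple of the Jacobian $\partial_x u_x\,\partial_y u_y - \partial_x u_y\,\partial_y u_x$, a total derivative that integrates to the surface terms noted above:

\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
ux, uy = sp.Function('ux')(x, y), sp.Function('uy')(x, y)
a, b = sp.diff(ux, x), sp.diff(uy, y)    # d_x u_x, d_y u_y
c, d = sp.diff(uy, x), sp.diff(ux, y)    # d_x u_y, d_y u_x

strain_sq = a**2 + b**2 + 2 * ((c + d) / 2)**2   # u_ij u_ij
div2, curl2 = (a + b)**2, (c - d)**2
J = a * b - c * d                                # Jacobian (total derivative)

mu, lam = sp.symbols('mu lambda')
lhs = mu * strain_sq + (lam / 2) * div2
rhs = ((lam + 2 * mu) * div2 + mu * curl2) / 2 - 2 * mu * J
print(sp.simplify(lhs - rhs))                    # 0
\end{verbatim}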
$K_3$ can be simply related to the persistence length $l_P$ of the polymer as $K_3 = k_B T l_P \rho$. \noindent \begin{minipage}{2in} \begin{figure} \centering \leavevmode \epsfxsize=2in \epsfbox{distort.eps} \caption{Distortion induced by a column end in the neighbouring columnar crystalline matrix. The distortion is confined to a vertical extent $|z| < \sqrt{\lambda_L r_\perp}$ (shaded region) around the column end.} \label{distort} \end{figure} \end{minipage} \hfill \newlength{\textw} \setlength{\textw}{\textwidth} \addtolength{\textw}{-2.3in} \begin{minipage}{\textw} The statistical mechanics of defects in polymer liquid crystals has been discussed in detail by Selinger \& Bruinsma~\cite{SelB:defect,Meyer:defect}. The presence of defects imposes a deformation on the $T=0$ equilibrium configuration. In the case of a semi-infinite vacancy/interstitial string with a head or tail at the origin, this distortion follows from minimization of the free energy above with respect to ${\mathbf u}({\mathbf r}_\perp,z)$ under the constraint \begin{equation} \label{constraint} {\mathbf \nabla}_\perp \cdot {\mathbf u} = \pm \rho_0^{-1} \delta({\mathbf r}_\perp) \theta(z) + (\mbox{non-singular terms}) \end{equation} where the $\pm$ sign refers to a column tail/head located at the origin. Since the planar distortion about a string has azimuthal symmetry in the continuum approximation, ${\mathbf \nabla}_\perp \times {\mathbf u} = 0$. Hence, the only relevant terms in the free energy are the bend and bulk distortion terms (neglecting splay). The resulting distortion around the column end spans a parabolic region about the radial direction (see Fig.~\ref{distort}) defined by \begin{equation} \label{lambda} z^2 \lesssim \lambda_L r_\perp \end{equation} where $\lambda_L = \sqrt{K_3/c_{11}}$ is the length scale relating the distortions parallel and perpendicular to ${\hat{\mathbf z}}$. \end{minipage} \medskip Selinger \& Bruinsma also calculate the interaction energy between two column ends by superimposing the distortion created by each. They find the interesting result that a head and tail in a \textit{nematic} medium attract weakly if they fall within each other's region of influence, as just described, but repel otherwise. However, in a columnar crystal (with non-zero shear modulus), the interaction is always a strong attractive linear potential due to the finite line tension associated with the string of distortions joining a head to a tail. \section{Wandering of a single string} \label{single} Consider a single vacancy/interstitial string in a hexagonal columnar crystal of, say, polymer strands with lattice constant $a_0$ and monomer spacing $c$ along the columnar axis $\hat{\mathbf z}$. For a discotic columnar liquid crystal, $c$ is the spacing between oblate molecules along the column axis. For a flux line in a layered Type-II superconductor with magnetic field perpendicular to the layers, $c$ is the layer spacing. If the string is vertical, the energy per unit length $\tau_z$ is of the order of $\mu a_0^2$ (see Section \ref{theory}) where $\mu$ is the in-plane shear modulus of the crystal. For a horizontal string, $\tau_\perp = \varepsilon_k/a_0$ where the kink energy $\varepsilon_k \sim \kappa^{1/4} \mu^{3/4} a_0^2$~\cite{Nelson:directed}, $\kappa \equiv K_3/\rho$ being the bending rigidity. The ratio is $\tau_\perp/\tau_z \sim (\kappa/\mu)^{1/4}/a_0 \sim l^*/a_0$ where $l^*$ is the kink size. Typically $l^* \gg a_0$, so that the strings are predominantly vertical, with few kinks.
For flux lines on the other hand, the kink energy is $g^{1/2} \mu^{1/2} a_0$ with $g \equiv c_{44}/\rho$, where $c_{44}$ is the tilt modulus and $\rho$ is the areal line density. The ratio is then $(g/\mu)^{1/2}/a_0$. In highly anisotropic layered superconductors, this ratio can be small, favouring large, nearly horizontal defect excursions. We will for now work with nearly vertical strings, allowing for a gas of kinks sufficiently dilute so that the interaction between kinks can be ignored (see Fig.~\ref{walk}). We thus assign to a string of vertical extent $l$ containing $N_k$ kinks an energy $l \tau + N_k \varepsilon_k + 2 \varepsilon_0$ where $\tau \equiv \tau_z$ and $\varepsilon_0$ is the energy of a polymer end. We expect that the results for defects with a high density of kinks would be qualitatively similar. In units such that $k_B=1$, the partition function of a string of length $l$ is \begin{equation} {\mathcal Z}_1 = (1 + q e^{-\varepsilon_k/T})^{l/l^*} e^{-l \tau/T} \end{equation} where $T$ is the temperature, and $q$ is the two-dimensional co-ordination number of the lattice on which the defect string lives --- for a symmetric vacancy this is the same as that of the original triangular lattice, $q = 6$, whereas for a symmetric interstitial it is that of the dual honeycomb lattice, $q = 3$ (see Section \ref{num}). The above expression represents the freedom of the string to jump to any of the neighbouring lattice sites anywhere along its length. These transverse meanderings cause an entropic lowering of the free energy per unit length of the string: \begin{eqnarray} f_1 &=& \lim_{l \rightarrow \infty} -T \ln{{\mathcal Z}_1} /l \nonumber \\ &=& \tau - \frac{T}{l^*} \ln{\left(1 + q e^{-\varepsilon_k/T}\right)} \nonumber \\ &\simeq& \tau - \frac{T q}{l^*} e^{-\varepsilon_k/T} \quad\mbox{for}\quad e^{-\varepsilon_k/T} \ll 1 \end{eqnarray} The average kink density is \begin{eqnarray} n_k \equiv \frac{\langle{N_k}\rangle}{l} &=& \frac{1}{l^*} \frac{q e^{-\varepsilon_k/T}}{1+q e^{-\varepsilon_k/T}} \nonumber \\ &\simeq& \frac{q}{l^*} e^{-\varepsilon_k/T}, \quad\mbox{for}\quad e^{-\varepsilon_k/T} \ll 1. \end{eqnarray} Thus, kinks are on the average $l_k = l^* e^{\varepsilon_k/T}/q$ apart. The assumption of dilute kinks then translates into the condition $l^* n_k \ll 1$, or, $\varepsilon_k \gg T$, which can be rephrased as $\langle |{\mathbf u}|^2 \rangle/a_0^2 \ll 1$~\cite{Nelson:directed,Jain:ice}, a condition clearly satisfied by a crystal below its Lindemann melting point. The above is a ``diffusive'' model for the string --- if ${\mathbf d}$ denotes the horizontal end-to-end displacement, the mean square wandering is $\langle|{\mathbf d}|^2\rangle = 2 D l$, where the ``diffusion constant'' $D$ is given by $2 D = a_0^2 n_k$. Consider a continuum description of the string in terms of a function ${\mathbf r}_d(z)$ giving the transverse displacement. Provided the average slope $|d{\mathbf r}_d/dz|$ is small, this ``diffusive'' wandering would correspond to an effective Hamiltonian of the form \begin{equation} \label{defEg} H_1 = \int_0^l dz \left[\frac{g}{2} \left|\frac{d{\mathbf r}_d}{dz}\right|^2 + \tau\right], \quad g=\frac{T}{D} \end{equation} Here we have assumed that the string is wandering within a frozen crystal. However, the lattice around the vacancy/interstitial string responds to its presence by collapsing or expanding around it.
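These single-string formulas are easy to evaluate numerically; the following sketch (with arbitrary illustrative values of $\varepsilon_k/T$ and $l^*$; $q=3$ is the interstitial case mentioned above) tabulates the kink density, the mean kink spacing, and the diffusion constant before any lattice response is included:

\begin{verbatim}
import numpy as np

def kink_stats(eps_k_over_T, q=3, l_star=10.0, a0=1.0):
    # Kink density, mean kink spacing and "diffusion constant" of a
    # single defect string, from the partition function Z_1 above
    w = q * np.exp(-eps_k_over_T)
    n_k = (1.0 / l_star) * w / (1.0 + w)   # kinks per unit length
    l_k = 1.0 / n_k                        # mean kink spacing
    D = 0.5 * a0**2 * n_k                  # <|d|^2> = 2 D l
    return n_k, l_k, D

# Dilute-kink regime (eps_k >> T): n_k ~ (q/l*) exp(-eps_k/T)
for eps in (2.0, 4.0, 6.0):
    print(eps, kink_stats(eps))
\end{verbatim}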
For a straight string at ${\mathbf r}_d = {\mathbf 0}$, the deformation ${\mathbf u}({\mathbf r}_\perp,z)$ is given by \begin{equation} {\mathbf u}_d({\mathbf r}_\perp,z) = \pm \frac{\Omega}{2 \pi} \frac{{\mathbf r}_\perp}{r_\perp^2} \end{equation} in the continuum description of the crystal, that is, away from the defect where the deformations are small. $\Omega$ is the area change due to the vacancy/interstitial, $\Omega \simeq a_0^2$. The energy of this deformation has to be included in the energy cost of the defect string. Again invoking the continuum approximation, we assume that for a defect string with small average slope, the resulting deformation away from the string in any plane perpendicular to $\hat{\mathbf z}$ would be approximately that resulting from a straight string at the location of the defect in that plane: \begin{equation} {\mathbf u}({\mathbf r}_\perp,z) \simeq {\mathbf u}_d({\mathbf r}_\perp-{\mathbf r}_d(z),z). \end{equation} (In general ${\mathbf u}({\mathbf r}_\perp,z)$ would depend on the derivatives of ${\mathbf r}_d(z)$ as well.) Within this approximation, the distortion energy of the crystal with bending Frank's constant $K_3 \equiv T l_P \rho$ is, keeping terms up to fourth-order in the derivatives (see Appendix \ref{bend}): \begin{equation} \label{defE4} \frac{\Delta H_1}{T} \sim l_P \int dz \left[ \left|\frac{d^2 {\mathbf r}_d}{dz^2}\right|^2 + a_0^{-2} \left|\frac{d{\mathbf r}_d}{dz}\right|^4 \right] \end{equation} These impart an effective stiffness to the defect string and suppress transverse fluctuations over a length scale $\sim a_0\sqrt{D K_3/T} \sim a_0\sqrt{l_P n_k}$. However, they do not change the long scale diffusive nature of the string. The lattice distortions renormalize the diffusion constant of the string when the symmetry direction of the crystal is externally imposed, as in the case of flux lines, or in a polymer crystal with an external field along the $\hat{\mathbf z}$-direction. The tilt modulus $c_{44}$ is then non-zero (Eq.~(\ref{ext})), and D is renormalized to $D_R$, where (see Appendix \ref{DR}) \begin{equation} \frac{1}{D_R} \simeq \frac{1}{D} + {\mathcal O}\left(\frac{c_{44}}{T \rho}\right) \end{equation} For a dense vortex \textit{liquid} this effect has been analyzed in detail by Marchetti~\cite{Marchetti:D} and $D$ is found to be renormalized to a value independent of its bare value in the long-wavelength limit. The correction comes from convection of a tagged flux line along the local tangent-field direction. If a similar calculation is carried out for a \textit{crystal} of spontaneously aligned long semi-flexible polymers (see Appendix \ref{DR}), one finds a qualitatively different renormalization of $D$ --- the correction in the long-wavelength limit is proportional to its bare value, and $\delta D/D \sim 1.45 \langle|{\mathbf u}|^2\rangle/a_0^2 \lesssim 3~\%$ using $c_L^2 \simeq 1/50$~\cite{cl} ($c_L$ is the Lindemann constant for melting of a columnar crystal). The correction is negligible. It can be ignored for another reason --- the idea of convection of a line by the mean local field, although appropriate for a dense fluid, would not be applicable in a crystalline environment where diffusion can only occur through discrete jumps from column to column. Although thermal fluctuations are already implicit in the exponential factor in $D = a_0^2 n_k/2$ coming from $n_k$, defects in this case move only on a discrete lattice, without phonon fluctuations.
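Returning to the distortion field ${\mathbf u}_d$ above: a quick numerical check (a sketch; $\Omega = 1$ is an arbitrary normalization) confirms that the outward flux $\oint {\mathbf u}_d\cdot\hat{\mathbf n}\,dl$ through any circle around the string equals the defect area $\Omega$, independent of the contour radius:

\begin{verbatim}
import numpy as np

Omega = 1.0                              # defect area change (~ a_0^2)

def u_d(xy):
    # u_d = (Omega / 2 pi) r_perp / r_perp^2
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return (Omega / (2 * np.pi)) * xy / r2

for R in (0.5, 2.0, 10.0):
    theta = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
    pts = R * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    n_hat = pts / R                      # outward unit normal
    flux = np.sum(np.sum(u_d(pts) * n_hat, axis=-1)) * 2 * np.pi * R / len(theta)
    print(R, flux)                       # ~Omega for every R
\end{verbatim}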
To summarize this section, we characterize the statistical mechanics of a defect string with a head/tail energy $\varepsilon_0$, a line tension $\tau$, and a diffusion constant $D$. The latter two can be combined into an effective chemical potential $\overline{\mu} \equiv T \mu_d$ per kink size ($l^*$) of the string: \begin{equation} \mu_d = l^* (-\tau/T + n_k) = q e^{-\varepsilon_k/T} - \varepsilon_k/T, \end{equation} with $n_k$ related to $D$ through $D = a_0^2 n_k/2$. Because $n_k$ is exponentially small, $\mu_d \approx -l^* \tau/T \approx -l^* \mu a_0^2/T$ and is usually negative, which suppresses long vacancy and interstitial strings. Turning it positive would require raising the temperature and lowering the kink energy $\varepsilon_k$, and is favoured by a larger co-ordination number $q$. Although we have assumed a constant shear modulus, the presence of the defects themselves can drive it down exponentially with the defect concentration, as discussed by Carruzzo \& Yu~\cite{CarYu:shear}. Thus, positive $\mu_d$ becomes possible when softening of the bare elastic constants with increasing defect concentration is taken into account. \section{Statistical Mechanics of non-interacting strings} \label{many} At any finite temperature, a crystal with a negative string line-chemical potential will contain a distribution of thermally excited vacancy and interstitial strings. Since the string energy is proportional to length in the non-interacting-kinks approximation, the equilibrium probability distribution would be an exponentially decaying function of length with mean determined by the line chemical potential, in the dilute string-gas limit where inter-string interactions can also be neglected~\cite{Safran}. In discotic crystals string heads and tails can be created as necessary. In a crystal of long polymers, the number of heads and tails is fixed by the mean polymer length. Let $N$ be the total number of possible kink sites in the lattice, $N = \mbox{volume} \times \rho/l^*$, and let ${\mathcal P}_l$ be the number of defect strings $l$ links long, divided by $N$. Assuming that only one kind of defect string is present --- those with the lowest line tension --- we can write the defect free energy in terms of $\{{\mathcal P}_l\}$ as~\cite{Safran} \begin{equation} \label{F} {\mathcal F}_d(\{{\mathcal P}_l\}) = \sum_l N {\mathcal P}_l (2 \varepsilon_0 - l T \mu_d) + T \sum_l N {\mathcal P}_l (\ln{{\mathcal P}_l} - 1) \end{equation} Minimizing with respect to the $\{{\mathcal P}_l\}$ yields the expected exponential distribution: \begin{equation} {\mathcal P}_l = h^2 z^l \end{equation} where $z = e^{\mu_d}$, and the head/tail fugacity $h = e^{-\varepsilon_0/T}$ is expected to be small. For hexagonal columnar crystals of \textit{polymers}, we work in a grand canonical ensemble and adjust $\varepsilon_0$ so that the average head/tail concentration agrees with the fixed value determined by the mean polymer length. The head/tail concentration will be small if the polymers are long. For \textit{discotic} crystals, the grand canonical ensemble is the natural one and the head/tail concentration fluctuates, with an average value determined by the fixed value of $h = e^{-\varepsilon_0/T}$, and the monomer fugacity $z = e^{\mu_d} < 1$. The net defect volume fraction $\phi$ is \begin{equation} \phi \equiv \sum_l l {\mathcal P}_l = h^2 \frac{z}{(1-z)^2} . 
\end{equation} The total number of strings $N_d \equiv N n_s$ is given by the string density \begin{equation} n_s \equiv \sum_l {\mathcal P}_l = \frac{h^2}{1-z} \end{equation} A defect monomer is most likely to be found in a string of length (in units of the kink size) \begin{equation} l_m = \frac{1}{|\mu_d|} \end{equation} The corresponding length distribution has its average at $2 l_m$ and a spread of $\sqrt{2} l_m$. The form (\ref{F}) of the energy, linear in $l$, is really applicable only when $l \gg 1$, so that end effects can be parametrized by the $l$-independent constant $\varepsilon_0$. Then, $\mu_d$ is close to $0$, and the relation $\phi \simeq n_s l_m$ holds. The asymptotic behaviours in the dilute and dense limits are as follows: \begin{eqnarray} \phi &=& \left\{\begin{array}{ll} h^2 e^{\mu_d}, & z \ll 1 \\ \frac{h^2}{|\mu_d|^2}, & z \lesssim 1 \end{array}\right. \\ n_s &=& \left\{\begin{array}{ll} h^2, & z \ll 1 \\ \frac{h^2}{|\mu_d|}, & z \lesssim 1 \end{array}\right. \end{eqnarray} A string proliferation transition thus occurs at $\mu_d = 0$ in this model, corresponding to a temperature $T_d = \tau l_k$. In the limit $\varepsilon_0 \rightarrow \infty$, it corresponds to the appearance of a supersolid phase~\cite{Frey:defect} which is simultaneously crystalline and entangled, where infinitely long vacancy/interstitial strings facilitate the wandering and entanglement of lines in the crystalline phase. If the melting temperature $T_m > T_d$, this supersolid/incommensurate solid phase will exist between $T_d$ and $T_m$. The non-interacting approximation breaks down in the vicinity of $T_d$ as calculated here, and its estimate will have to be refined by including interactions. For finite $\varepsilon_0$, the sharp transition discussed in Ref.~\cite{Frey:defect} will be blurred, as discussed in Sec. \ref{int}. \section{$\phi^2$-interaction model} \label{int} Interactions between polymer ends in a columnar crystal have been calculated by Selinger \& Bruinsma~\cite{SelB:defect} within the continuum approximation. Because of the uniaxial anisotropy, the interaction has a rather complicated form. The distortion due to an isolated head or tail placed at the origin extends, at in-plane distance $r_\perp$, over a vertical extent $|z| \sim \sqrt{\lambda_L r_\perp}$ where $\lambda_L = \sqrt{K_3/c_{11}}$ (see Eq.~(\ref{lambda})). The resulting interaction between heads and tails falls as $1/|z|^3$ for predominantly vertical separations $z$ ($|z| \gg \sqrt{\lambda_L r_\perp}$), and as $-1/(\lambda_L r_\perp)^{3/2}$ for predominantly horizontal separations $r_\perp$. In polymer crystals, these contributions must be superimposed on the linear energy cost of the vacancy or interstitial string joining them. At low defect densities where the string length is much smaller than the average separation of string centers of mass, we have $1/|\mu_d| \ll 1/\phi^{1/3}$, i.e., $|\mu_d| \gg h^{2/3}$, and a string interacts with other strings as a head-tail dipole. The effective interaction between dipoles then falls off very rapidly, becoming short-ranged not only in the axial, but also in the radial direction. At the other extreme, the strings are long, which would happen in the vicinity of the head-tail unbinding transition and in the supersolid phase itself.
End-interactions can then be neglected and the remaining interaction between effectively infinite strings becomes predominantly ``radial'' (i.e., perpendicular to $\hat{\mathbf z}$) provided the root mean square tilt with respect to the $\hat{\mathbf z}$ axis is small. The defects are then non-interacting in the continuum model unless their anisotropy is taken into account. The interaction between defects with n-fold symmetry (n = 2, 3 or 6) falls off at least as fast as $1/r^n$ (see Appendix \ref{n-def}). This interaction has an azimuthal dependence of the form $\cos{n \theta}$ or higher harmonics. The angular average vanishes, leading to an effective interaction which vanishes as an even higher power which is effectively short-ranged. As mentioned in the Introduction, the lowest-energy vacancy or interstitial defects for simple repulsive pair potentials in the radial direction are in fact of high (three-fold or six-fold) symmetry. We discuss here the simplest model for a short-ranged interaction --- a repulsive $\phi^2$ model that has been treated earlier in Ref.~\cite{Nelson:directed} using a coherent state path integral representation which exploits an analogy with the quantum mechanics of two-dimensional bosons. The defect volume fraction $\phi$ corresponds to the mean square boson field amplitude $\langle|\psi|^2\rangle$ in that description. Here, we reproduce the essential results without resorting to the sophisticated boson formalism. Upon adding a term $u \phi^2 /2$ to the free energy $f \equiv F/N T$ in Eq.~(\ref{F}) of the previous section, we find after minimization, \begin{equation} {\mathcal P}_l = h^2 e^{l (\mu_d - u \phi)} . \end{equation} As discussed in Ref.~\cite{Nelson:directed}, the coupling $u$ is an excluded volume parameter describing defect line repulsion. Thus $\phi$ and $N_d$ have the same form as before, but with $z$ replaced by an effective fugacity $\zeta$: \begin{equation} \label{zeta} z \rightarrow \zeta(z,\phi) \equiv z e^{-u \phi} , \end{equation} so that \begin{equation} \label{phi-zeta} \phi(h,\zeta) = h^2 \frac{\zeta}{(1-\zeta)^2} . \end{equation} The volume fraction $\phi(h,z)$ now has to be solved for self-consistently from Eq.~(\ref{phi-zeta}). Note that the effective chemical potential has been reduced by $u \phi$ due to the repulsive interaction: \begin{equation} \mu_{eff} \equiv \ln{\zeta} = \mu_d - u \phi \end{equation} Accordingly, the mean string length $l_m$ changes to \begin{equation} l_m = -\frac{1}{\ln{\zeta}} \equiv \frac{1}{u \phi - \mu_d} . \end{equation} The free energy of the distribution is $f \approx -u \phi^2 /2$. The behaviour of the string volume fraction for $h = 0$ and $h \neq 0$ is illustrated schematically in Fig.~\ref{phimu}. Four distinct regimes emerge, with the following asymptotic behaviours: \begin{figure} \centering \leavevmode \epsfxsize=6.1in \epsfbox{phimu.eps} \caption{The volume fraction $\phi$ is plotted against the effective defect chemical potential $\mu_d$ for the $\phi^2$-interaction model of a gas of defect strings. The strings are short and dilute in regime A, but long, dense and entangled in regime B. (Taken from Ref.~\protect\cite{Nelson:directed}.)} \label{phimu} \end{figure} \begin{enumerate} \item $\mu_d \ll -1$ (point A in Fig.~\ref{phimu}): \begin{equation} \phi \simeq h^2 e^{\mu_d},\quad n_s \simeq h^2,\quad l_m = \frac{1}{|\mu_d|} . \end{equation} This is again the dilute limit where heads and tails are tightly bound. 
\item $-1 \ll \mu_d \ll -(u h^2)^{1/3}$: \begin{equation} \label{noni} \phi \simeq \frac{h^2}{|\mu_d|^2},\quad n_s \simeq \frac{h^2}{|\mu_d|},\quad l_m = \frac{1}{|\mu_d|} . \end{equation} These results are again identical to those for non-interacting strings. This correspondence is expected, because $|\mu_d| > (u h^2)^{1/3} > u \phi$, therefore the effective chemical potential is still approximately $\mu_d$. The relation $\mu_d \sim -(u h^2)^{1/3}$ marks the limit of validity of the non-interacting approximation, as we argued in the beginning of this section. As we approach this limit, we find for $h \rightarrow 0$: $\phi, n_s \rightarrow 0$, whereas $l_m \rightarrow \infty$. Thus, the strings are still dilute, although lengthening. Note that the results in this regime coincide with those of Ref.~\cite{Nelson:directed} in the short and dilute strings limit. \item $|\mu_d| \ll (u h^2)^{1/3} \equiv \mu_c$ ($\mu_d$ around the transition which occurs for $h = 0$): \begin{equation} \phi \simeq \frac{h^2}{|\mu_c|^2} \left[ 1 + \frac{2}{3}\frac{\mu_d}{\mu_c}\right],\quad n_s \simeq \frac{h^2}{|\mu_c|} \left[ 1 + \frac{1}{3}\frac{\mu_d}{\mu_c}\right],\quad l_m \simeq \frac{1}{|\mu_c|} \left[ 1 + \frac{1}{3}\frac{\mu_d}{\mu_c}\right] . \end{equation} These results can be matched onto those in the non-interacting regime above by replacing $\mu_d$ with \begin{equation} \mu_{eff} = -\mu_c + \mu_d/3 = -\mu_c \left(1-\frac{\mu_d}{3 \mu_c}\right) , \end{equation} which is now dominated by the repulsive interaction: $\mu_{eff} \approx - u \phi$. The unphysical divergences of the non-interacting model have been suppressed and we find at the transition point: \begin{equation} \phi = \frac{h^{2/3}}{u^{2/3}},\quad n_s = \frac{h^{4/3}}{u^{1/3}},\quad l_m = \frac{1}{u^{1/3} h^{2/3}} . \end{equation} Note that all quantities have interesting singularities in the limit $h \rightarrow 0$. If the head/tail fugacity $h$ is small, the defect volume fraction remains negligible at the transition, but the average string length grows large so that it could become greater than the inter-string separation, now given by $1/\phi^{1/2}$. Indeed, $1/\phi^{1/2} \ll l_m$ if $h \ll 1/u^2$ which would be true if polymer ends are highly unfavourable. This long \& dilute regime interpolates between the short \& dilute and the long \& dense limits described in Ref.~\cite{Nelson:directed}. \item $\mu_d \gg \mu_c$ (Point B in Fig.~\ref{phimu}):\\ In this limit, we have \begin{equation} \mu_{eff} = -\mu_c \sqrt{\frac{\mu_c}{\mu_d}} . \end{equation} The repulsion now keeps in check the string proliferation, and $\mu_{eff}$ approaches $0$ as $1/\sqrt{\mu_d}$. Thus, \begin{equation} \phi \simeq \frac{\mu_d}{u},\quad n_s \simeq h \sqrt{\frac{\mu_d}{u}},\quad l_m \simeq \frac{1}{|\mu_c|} \sqrt{\frac{\mu_d}{\mu_c}} . \end{equation} This is the phase where strings are dense and entangled --- $\phi$ is ${\mathcal O}(1)$. These results also agree with Ref.~\cite{Nelson:directed}. \end{enumerate} As the head/tail fugacity $h \rightarrow 0$, the intermediate regime~3 above (around $\mu_d = 0$) shrinks to zero. At $h = 0$, heads/tails are completely expelled, and we have a second-order phase transition at $\mu_d = 0$ with $\phi = 0$ for $\mu_d < 0$, and growing as $\mu_d$ for $\mu_d > 0$, as in Ref.~\cite{Nelson:directed}. This limit corresponds to the situation in thermally excited vortex lattices~\cite{Frey:defect} because flux lines cannot start or stop within the sample.
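The self-consistency condition Eq.~(\ref{phi-zeta}) is also easy to solve numerically; the sketch below (with arbitrary $h$ and $u$) brackets the physical root $\zeta < 1$ and reproduces the regimes above, including $\phi \simeq (h/u)^{2/3}$ at $\mu_d = 0$ and $\phi \simeq \mu_d/u$ deep in the entangled phase:

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def solve_phi(mu_d, h, u):
    # phi = h^2 zeta/(1-zeta)^2 with zeta = exp(mu_d - u*phi)
    def g(phi):
        zeta = np.exp(mu_d - u * phi)
        return h**2 * zeta / (1.0 - zeta)**2 - phi
    lo = max(mu_d / u, 0.0) + 1e-12      # enforces zeta < 1
    return brentq(g, lo, 10.0)

h, u = 1e-3, 1.0
mu_c = (u * h**2) ** (1.0 / 3.0)         # crossover scale from the text
for mu_d in (-0.5, -mu_c, 0.0, mu_c, 0.5):
    print(mu_d, solve_phi(mu_d, h, u))
# mu_d << -mu_c: phi ~ h^2 exp(mu_d) (dilute strings);
# mu_d = 0: phi ~ (h/u)^{2/3}; mu_d >> mu_c: phi ~ mu_d/u (entangled).
\end{verbatim}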
In the boson picture, $h$ acts like an external field coupled to the order parameter, injecting magnetic monopoles into the superconductor. We have neglected vacancy/interstitial loops, which exist even in the limit $h \rightarrow 0$. For finite $h$, their contribution can be neglected near the transition because for long loops, the energy of a loop exceeds the energy of a string of the same vertical extent: Whereas a string of length $l$ has energy $l \tau_{interstitial} + 2 \varepsilon_0$ (we expect interstitials to be the preferred defect at the transition in most cases), the energy of a vacancy-interstitial loop of the same length would be approximately $l\: (\tau_{vacancy} + \tau_{interstitial})$. For large $l$, the difference $l \tau_{vacancy} - 2 \varepsilon_0$ will strongly suppress vacancy/interstitial loops. Because of this energetic barrier, loops cannot become arbitrarily large, and cannot cause entanglement over macroscopic scales. For $h = 0$, as is the case for vortex matter, fluctuations in the low temperature phase are entirely in the form of loops~\cite{Frey:defect}, and similar to vortex ring fluctuations in the Meissner phase. For systems with a finite axial length, the balance may be tilted in favour of long strings because the end penalty is removed if the ends move to the surface and the string threads the sample. For threading strings the expression for entropy in Eq.~(\ref{F}) is no longer valid because the freedom in the z-direction is lost. The remaining two-dimensional entropy can be ignored in a three-dimensional system, and we are left with \begin{equation} f \simeq -\mu_d \phi + u \phi^2 /2 \end{equation} where $\phi$ now is also the areal fraction of defects; and one finds $\phi \simeq \mu_d/u$, similar to region~4 discussed above. \section{Numerical calculation of defect line tensions} \label{num} Line tension calculations require that we find the lowest energy lattice deformation associated with a vacancy or interstitial. These line tensions depend on the \textit{type} of vacancy or interstitial, e.g., whether the defect sits in an environment which is two-, three- or six-fold symmetric. If thermal fluctuations out of this configuration are small enough to be described within a quadratic approximation, they decouple from the equilibrium configuration. Since these $T=0$ equilibrium defect configurations are composed of straight columns, the 3-dimensional deformation energy can be reduced to an effective 2-dimensional interaction energy $V(r)$ per unit length between columns separated by distance $r$. The calculations can then be performed on a two-dimensional triangular lattice of points interacting with potential $V(r)$. Thus, the defect energies in a two-dimensional Wigner crystal of electrons~\cite{Morf:defect} would correspond to the \textit{line tensions} of the corresponding string defects in a hexagonal columnar crystal of lines interacting with an effective radial $1/r$-potential per unit length. Such calculations have been carried out by several authors~\cite{Frey:defect,Morf:defect,CockEl:defect}. Whereas Refs.~\cite{Morf:defect} and~\cite{CockEl:defect} have considered defects in a Wigner crystal of electrons ($V_p(r)=1/r$), Frey \textit{et al.}~\cite{Frey:defect} have studied a modified Bessel-function potential $V_{\kappa}(r) = u_0 K_0(\kappa r)$ in the $\kappa \rightarrow 0$ limit. 
Here $\kappa \equiv \lambda^{-1}$, where $\lambda$ is the Debye screening length in the case of long polyelectrolytes in an ionic solution, and the London penetration depth in the case of vortex lines in a type-II superconductor. The limit $\kappa \rightarrow 0$ corresponds to a long-range logarithmic interaction, whereas in the short-range limit $\kappa a_0 \gg 1$ the interaction is exponentially decaying. Both Refs.~\cite{Frey:defect} and~\cite{CockEl:defect} dealt with long-range interactions ($\ln{r}$ and $1/r$ respectively), and found that the centered interstitial (see Fig.~\ref{defects}) has the lowest line tension. We denote the centered interstitial by $CI$, or by $I_3$ when we want to stress its three-fold symmetry. The edge interstitial (denoted $EI$ or $I_2$) was found to be a saddle-point and buckled into a $CI$. The three-fold symmetric centered interstitial $CI$ is the lowest energy interstitial defect over the entire range of interactions we studied. Among the vacancies, the two-fold symmetric crushed vacancy (denoted $V_2$ or $V_{2a}$ --- see Fig.~\ref{defects}) is the only stable one, the symmetric six-fold vacancy ($V_6$) being unstable to it. The long-range interactions between the energetically preferred types of interstitials and vacancies were found to be attractive for interstitials and repulsive for vacancies. To determine the correct type of microscopic defect to insert into the phenomenological considerations of Secs. \ref{single}--\ref{int}, we have extended the work of Frey \textit{et al.} to the short-ranged regime of the $K_0(\kappa r)$-interaction, to which end we studied values of $\kappa a_0$ from $0$ to $7$ ($7$ being large enough to represent the short-range $\kappa a_0 \rightarrow \infty$ limit) (Fig.~\ref{bes:ek}). The aim was to determine the point of cross-over from centered interstitials to vacancies as the lowest-energy defect, since it is known from simulations of short-range interactions (for a review, see Ref.~\cite{point-defects}) that vacancies are preferred in this limit. In the same spirit, we have also extended the Coulomb interaction to power-law interactions $1/r^p$ with exponent values ranging from $p = 0$ ($\sim \ln{r}$) to $p = 12$ (Fig.~\ref{gam:ep}). We checked our minimization procedure by first reproducing the results of Refs.~\cite{Frey:defect} and~\cite{CockEl:defect} for $\ln{r}$ and $1/r$ potentials respectively. As we move away from the long-range interaction limit $\kappa a = 0$, the metastable crushed vacancy ($V_{2a}$) exchanges stability with the metastable split vacancy ($SV$), also of two-fold symmetry. Two metastable species, a three-fold symmetric vacancy ($V_3$) and a two-fold symmetric vacancy ($V_{2b}$) crushed along the basis vector of a triangular unit cell, also exist, but are of higher energy. The differences in energy can be as small as one part in a few thousand. As the interaction gets shorter-ranged, $V_{2b}$ loses stability to $V_3$ at $\kappa a_0 \simeq 5.2$, and the 3-fold deformation of $V_3$ gets smaller so that it transforms continuously into $V_6$ at $\kappa a_0 \simeq 5.9$. When $V_6$ appears, the $SV$ also loses stability to it. By the time $I_3$ and $V_6$ finally cross in energy, $V_6$ is the only stable vacancy left. The crossing happens at surprisingly large parameter values, $\kappa a_0 \simeq 6.9$ for $V_{\kappa a}$ (Fig.~\ref{bes:diff}), and $p \simeq 5.9$ for $V_p$ (Fig.~\ref{gam:diff}), each very close to the short-range limit.
We thus find that the interstitial has a very wide range of stability, extending well into the short-ranged regime. \begin{figure} \centering \leavevmode \epsfxsize=6.3in \epsfbox{def.eps} \vfill \caption{Various defects obtained in a two-dimensional triangular lattice. The centered interstitial is the only stable interstitial defect.} \label{defects} \end{figure} \noindent \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=3.2in \epsfbox{besek.eps} \caption{Defect energy as a function of the screening $\kappa a$ for $V(r) = K_0(\kappa r)$ at system size $n = 4$ ($N = 480$). Only the centered interstitial is shown, because the edge interstitial is always unstable to it. Various species of vacancies exist, within limited parameter ranges, very close in energy. Lines joining the data points are only an aid to the eye.} \label{bes:ek} \end{figure} \end{minipage} \hfill \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=3.2in \epsfbox{gamep.eps} \caption{Defect energy as a function of the exponent $p$ for $V(r) = 1/r^p$ at system size $n = 5$ ($N = 750$). The apparent increase in energy with $p$ (interaction getting shorter-ranged) would go away with proper normalization of the potential. Lines joining the data points are only an aid to the eye.} \label{gam:ep} \end{figure} \end{minipage} \medskip \noindent \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=3.2in \epsfbox{besdiff.eps} \caption{Defect energies for $V(r) = K_0(\kappa r)$, $n = 4$, on the log-scale, with respect to $V_3$/$V_6$, in order to illustrate the detailed structure of the energy diagram. The $CI$ can be seen crossing $V_6$ at $\kappa a \approx 6.9$. Lines joining the data points are only an aid to the eye.} \label{bes:diff} \end{figure} \end{minipage} \hfill \begin{minipage}{3.4in} \begin{figure} \centering \leavevmode \epsfxsize=3.2in \epsfbox{gamdiff.eps} \smallskip \caption{Defect energies for $V(r) = 1/r^p$, $n = 5$, on the log-scale, with respect to $V_3$/$V_6$. The $CI$ and $V_6$ cross at $p \approx 5.9$. Lines joining the data points are only an aid to the eye.} \label{gam:diff} \end{figure} \end{minipage} \medskip Following previous authors, the simulations were performed in an almost square (length-to-width ratio $5 : 3\sqrt{3}$) cell containing $N = 5n \times 6n = 30 n^2$ lattice points with $n =$ 1 -- 5 (rather than a more nearly square but bigger rectangle of, say, $7n \times 8n$ ($7 : 4\sqrt{3}$), which would allow only a smaller number of system sizes $n$ within a given computational limit on $N$). Fig.~\ref{defects} corresponds to $n=3$. A defect is introduced by adding or removing a particle, and then allowing the resulting configuration to relax. The difference between the energies of the relaxed defect configuration and the perfect lattice configuration gives the energy of the defect. There are two modifications to this simple calculation. We want the defect energy corresponding to the physical conditions of constant chemical potential or line density, so we rescale the cell dimensions (by changing the lattice constant $a_0$) after inserting the defect to restore the system to its original density (following Ref.~\cite{CockEl:defect}). Moreover, since we would ideally like to study an infinite system, the large, but finite cell containing $30 n^2$ particles is assumed to be repeated in all directions, so that we are effectively dealing with a periodic array of defects, or, an infinite lattice in the absence of a defect.
The periodic boundary conditions maintain the average line density during the relaxation process. However, now the energy per cell also includes the energy of interaction of a defect with all its periodic images. As discussed earlier, this energy is finite, and by extrapolating its dependence on cell size $n$, i.e., inter-defect separation ($\approx 5n$), to large $n$, the energy of an isolated defect can be extracted~\cite{Frey:defect,CockEl:defect}. For short-ranged interactions, the energy calculation can be simplified. We introduce a cut-off interaction radius $r_c$ where the interaction falls to a small fraction of its nearest-neighbour value. The interaction with the particles outside can be approximately accounted for by assuming a uniform density outside and integrating over it. The radius $r_c$ is chosen to make this correction small compared to the total energy, say, less than $10^{-3}$ of it. Interactions within the shell are calculated explicitly. As long as $r_c < L/2$, $L$ being the cell width, this short-range method should be very accurate. For long-ranged interactions such as $\ln{r}$, $1/r$, or $1/r^2$, the above method breaks down, and we must resort to the Ewald summation technique~\cite{Rosenfeld:Ewald,Heyes:Ewald} which yields an effective two-particle interaction that includes the interaction of one particle with all the periodic images of the other. This effective potential consists of a real space sum (corresponding to a screened interaction) and a reciprocal space sum (corresponding to the screening charge). The division between the two is controlled by an Ewald parameter, and by a judicious choice of its value, the interaction can be made sufficiently short-ranged for both sums. We then employ cut-offs in both spaces, with values determined by the desired precision (see Appendix C for details). \noindent \begin{minipage}{\textwidth} \begin{table} \begin{tabular}{|l||c|c|c|c|c|c|} $\kappa a$ & $I_3$ & $SV$ & $V_{2a}$ & $V_3$ & $V_{2b}$ & $V_6$\\ \hline\hline 0 & .073016802 & $V_{2a}$ & .107018876 & .108206944 & .109320135 & $V_3$\\ 1 & .066331581 & .096728537 & .096661116 & .097578530 & .099169907 & $V_3$\\ 2 & .050588818 & .072306827 & .072341149 & .072594220 & .073852944 & $V_3$\\ 3 & .033575192 & .046095915 & $SV$ & .046131759 & .047174061 & $V_3$\\ 4 & .020037313 & .025980648 & $SV$ & .025962421 & .026641900 & $V_3$\\ \hline\hline 4 & .020036\hspace*{\fill} & .025980\hspace*{\fill} & $SV$ & .025961\hspace*{\fill} & .026641\hspace*{\fill} & $V_3$\\ 5 & .0110170\hspace*{\fill} & .0133112\hspace*{\fill} & $SV$ & .0133146\hspace*{\fill} & .0136217\hspace*{\fill} & $V_3$\\ 5.1 & .010338333 & .012397139 & $SV$ & .012400742 & .012674362 & $V_3$\\ 5.2 & .009695442 & .011537972 & $SV$ & .011541059 & $V_3$ & $V_3$\\ 5.3 & .009087036 & .010731274 & $SV$ & .010733113 & $V_3$ & $V_3$\\ 5.4 & .008511788 & .009974612 & $SV$ & .009974441 & $V_3$ & $V_3$\\ 5.5 & .007968369 & .009265581 & $SV$ & .009262603 & $V_3$ & $V_3$\\ 5.6 & .007455456 & .008601808 & $SV$ & .008595187 & $V_3$ & $V_3$\\ 5.7 & .006971737 & .007980968 & $SV$ & .007969812 & $V_3$ & $V_3$\\ 5.8 & .006515917 & .007400791 & $V_3$ & .007384121 & $V_3$ & $V_3$\\ 5.9 & .006086722 & $V_6$ & $V_6$ & $V_6$ & $V_6$ & .006835768\\ 6 & .005682901 & $V_6$ & $V_6$ & $V_6$ & $V_6$ & .006322377\\ 7 & .002788486 & $V_6$ & $V_6$ & $V_6$ & $V_6$ & .002771295\\ \end{tabular} \caption{Defect energies for $V(r) = K_0(\kappa r)$; $a_0=1$; system size $n = 4$ ($N = 480$). 
The upper part corresponds to the Ewald Sum method for long-range interactions, the lower part to a simple cut-off method for short-range interactions. The centered interstitial and the symmetric vacancy cross at $\kappa a \approx 6.9$. Entries such as ``$V_{2a}$'', ``SV'', ``$V_3$'', ``$V_6$'' indicate an instability to a lower energy defect.} \label{tbl:bes} \end{table} \end{minipage} \noindent \begin{minipage}{\textwidth} \begin{table} \begin{tabular}{|l||c|c|c|c|c|c|} \ $p$& $I_3$& $SV$& $V_{2a}$& $V_3$& $V_{2b}$& $V_6$\\ \hline\hline \ 0& \ 0.073061685& $V_{2a}$& 0.106775085& 0.108253779& 0.108994418& $V_3$\\ \ 1& \ 0.146421440& $V_{2a}$& 0.209046876& 0.209331872& 0.213568209& $V_3$\\ \ 2& \ 0.487928019& 0.677444176& $SV$& 0.672359275& 0.694143882& $V_3$\\ \ 3& \ 1.08543992\hspace*{\fill}& 1.39071722\hspace*{\fill}& $SV$& 1.38704618\hspace*{\fill}& 1.42628053\hspace*{\fill}& $V_3$\\ \ 4& \ 1.99663790\hspace*{\fill}& 2.37494467\hspace*{\fill}& $SV$& 2.37649196\hspace*{\fill}& 2.43341170\hspace*{\fill}& $V_3$\\ \ 5& \ 3.2620983\hspace*{\fill}& 3.5889518\hspace*{\fill}& $SV$& 3.5851010\hspace*{\fill}& $V_3$& $V_3$\\ \ 5.8& \ 4.5498400\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& \ 4.6053332\\ \ 5.9& \ 4.7286554\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& \ 4.7341340\\ \ 6& \ 4.9114956\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& \ 4.8637723\\ \ 7& \ 6.9642383\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& \ 6.1999848\\ \ 8& \ 9.4317462\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& \ 7.5920876\\ \ 9& 12.319586\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& \ 9.0220754\\ 10& 15.629229\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& 10.477581\hspace*{\fill}\\ 11& 19.359421\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& 11.950259\hspace*{\fill}\\ 12& 23.495660\hspace*{\fill}& $V_6$& $V_6$& $V_6$& $V_6$& 13.434556\hspace*{\fill}\\ \end{tabular} \caption{Defect energies for $V(r) = 1/r^p$; $a_0=1$; system size $n = 5$ ($N = 750$). The Ewald sum technique was used to calculate the energies. The centered interstitial and the symmetric vacancy cross at $p \approx 5.9$. Entries such as ``$V_3$'' and ``$V_6$'' indicate an instability to a lower energy defect.} \label{tbl:gam} \end{table} \end{minipage} To find the minimum of the interaction energy as a function of the configuration of $N$ particles, we use the conjugate-gradient method~\cite{Num-Rec}. The forces are also needed for this method, and are easily derived from the energy and conveniently calculated along with it. The results for $n = 4$ ($480$ particles) for $V_{\kappa a}$ and for $n = 5$ for $V_p$ ($750$ particles) are shown in Tables~\ref{tbl:bes} and~\ref{tbl:gam} and Figs.~\ref{bes:ek} and~\ref{gam:ep}. ($n = 5$ was computationally prohibitive for the long-ranged regime with $\kappa a_0 > 0$). Note that, for the screened Bessel-function interaction, we find that calculations optimized for the long- and short-ranged regimes agree to within $1$ part in $20,000$ at $\kappa a_0 = 4$. Moreover, we find that the interaction of a defect with all its periodic images is repulsive for defects with (even) two- and six-fold symmetry, and attractive for (odd) three-fold symmetry, consistent with Ref.~\cite{Frey:defect}. As discussed in Refs.~\cite{Frey:defect} and~\cite{CockEl:defect}, the true asymptotic form of the power law defect interaction probably is not reached for the distance scales $r \sim 20-30$ lattice spacings studied here.
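For orientation, the relaxation procedure can be condensed into the following miniature sketch (a much smaller cell than the production runs above, no density rescaling, and minimum-image periodic boundaries instead of Ewald sums, so its numbers are only indicative of the short-ranged $\kappa a_0 = 7$ entries):

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import k0, k1

kappa, a0, n = 7.0, 1.0, 2      # short-ranged regime; tiny 5n x 6n cell
Lx, Ly = 5 * n * a0, 6 * n * a0 * np.sqrt(3) / 2

def triangular_lattice():
    pts = [((i + 0.5 * (j % 2)) * a0, j * a0 * np.sqrt(3) / 2)
           for j in range(6 * n) for i in range(5 * n)]
    return np.array(pts)

def energy_and_grad(flat):
    xy = flat.reshape(-1, 2)
    d = xy[:, None, :] - xy[None, :, :]
    d[..., 0] -= Lx * np.round(d[..., 0] / Lx)   # minimum image
    d[..., 1] -= Ly * np.round(d[..., 1] / Ly)
    r = np.sqrt(np.sum(d**2, axis=-1))
    iu = np.triu_indices(len(xy), k=1)
    E = np.sum(k0(kappa * r[iu]))
    np.fill_diagonal(r, np.inf)
    # dV/dr = -kappa K1(kappa r), so the gradient on particle i is
    # -sum_j kappa K1(kappa r_ij) d_ij / r_ij
    f = (kappa * k1(kappa * r) / r)[..., None] * d
    return E, -np.sum(f, axis=1).ravel()

def relaxed_energy(points):
    return minimize(energy_and_grad, points.ravel(),
                    jac=True, method='CG').fun

perfect = triangular_lattice()
extra = [0.5 * a0, a0 / (2 * np.sqrt(3))]        # a plaquette center
defect = np.vstack([perfect, [extra]])           # insert an interstitial
print(relaxed_energy(defect) - relaxed_energy(perfect))
\end{verbatim}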
\section{Conclusions} We have studied factors contributing to the wandering of a vacancy or interstitial string defect in a hexagonal columnar crystal. A gas of such strings in the crystalline phase, interacting via short-range potentials, can proliferate via continuous or first-order transitions when the corresponding defect chemical potential changes sign, leading to a supersolid phase. The transition can be modified by the presence of vacancy-interstitial loops, especially in a system of finite thickness. We have also numerically calculated defect line tensions for two families of line interactions which interpolate between long- and short-ranged interaction potentials. In each case, we determine the point where interstitial and vacancy defects exchange stability. A complete accounting requires consideration of a variety of nearly degenerate vacancy configurations. At finite temperatures, the small energy differences between different species will further lower the free energy of the vacancy through a gain in fluctuation entropy. The interstitial itself can fluctuate between the centered and edge configurations. The point where vacancies and interstitials exchange stability will shift at finite temperatures due to entropic effects of this kind. In the context of long-range potential calculations, we show in an Appendix how to extend the Ewald summation to the modified Bessel function potential $K_0(x)$. \acknowledgements This research was supported by the National Science Foundation, in part by the MRSEC Program through Grant No. DMR-9400396 and through Grant No. DMR-9714725.
\section{Introduction} \label{sec:intro} Texture is the most fundamental information on which the majority of living organisms base their visual cognition, and it is a key component of computer vision systems\cite{haindl2013visual}. Essentially, all digital images can be regarded as texture. Texture analysis has been applied to many visual problems such as material categorization, surface inspection, medical image analysis, object recognition, image segmentation, pedestrian detection, and face analysis. Over the years, many texture descriptors have been proposed \cite{haralick1973textural,qian2009object,wu1996rotation,porter1997robust}. Among these descriptors, local patterns have achieved good performance in most texture applications \cite{ojala2002multiresolution,lowe2004distinctive,dalal2005histograms}. In particular, LBP is an efficient descriptor for describing local structures \cite{ojala2002multiresolution}. LBP descriptors have demonstrated powerful discriminative capability, low computational complexity, and low sensitivity to illumination variation. To further improve the discrimination of LBP, a large number of LBP variants have been proposed \cite{liu2017local}. Most of these variants pursue one of the following three directions. The first is to utilize different forms of information from the original textures. Guo et al. proposed Complete LBP, which utilizes the sign and magnitude information of the local neighborhood in the descriptor \cite{guo2010completed}. Other methods concentrate on the local derivative information with respect to a local region, such as LDP \cite{zhang2010local}, CLDP \cite{yin2014multi}, LDDP \cite{guo2012local}, and POEM \cite{vu2012enhanced}. The second is rotation invariance, an important topic in texture classification. Many methods have been proposed to achieve rotation invariance, such as SRP \cite{liu2012sorted,skibbe2012fast} and SIFT \cite{lowe2004distinctive}. The third is feature selection. The exponential increase in the number of features with the patch size is a limitation of the traditional LBP. The uniform LBP descriptor proposed by Ojala et al. \cite{ojala2002multiresolution} was the first attempt to solve this problem. The main contributions of this paper are threefold. Firstly, we propose an Affine-Gradient based method to describe texture information. The Affine-Gradient (AG) has properties that the Euclidean-Gradient (EG) does not have, which will be elaborated in detail below. Secondly, an improved method for determining the local reference direction is proposed to reach rotation invariance; it is fast to compute and effective under rotation transformations. Finally, we propose a simple but effective feature selection method that considers both the distribution of patterns and the intraclass variance on the training datasets. Experiments show that the proposed feature selection method not only increases the discriminative power but also effectively reduces the dimension of the descriptor. \section{Affine-Gradient based Local Pattern Descriptor} In this section we elaborate our approach in detail. First, we give a brief review of LBP. Second, we discuss how to make full use of multiple sources of information, especially the Affine-Gradient (AG), for texture classification; the properties of AG are discussed in detail. Then we describe the method we propose to achieve rotation invariance. Finally, the criteria for feature selection are discussed.
\subsection{Overview of LBP Method} The traditional LBP operator extracts information that is invariant to local gray-scale variations in the image. It is computed at each pixel location, considering the values of a small circular neighborhood around the central pixel $q_c$. The LBP is defined as follows: \begin{equation} \label{equ:LBP} LBP_{R,P}=\sum_{p=0}^{P-1}s(g_p-g_c)\cdot2^p \quad\quad s(x)=\begin{cases} 1, x\geq 0 \\ 0, x<0 \end{cases} \end{equation} where $g_c$ is the value of the central pixel and $g_p$ are the values of its neighbors, $p$ is the index of the neighbor, $R$ is the radius of the circular neighborhood, and $P$ is the number of pixels in the neighborhood. The histogram of these patterns is then used to describe the texture of the image. There are three obvious disadvantages of LBP. First, it has no rotation invariance. Second, only first-order sign information is used in the descriptor. Third, the descriptor length increases exponentially with the neighborhood size. The proposed method makes improvements in all three directions. \subsection{Affine-Gradient based Descriptors} Here we propose a method based on the AG information to increase the discrimination of the descriptor. The Euclidean Gradient (EG) can be defined as $G=\sqrt{I_x^2+I_y^2}$. It is the 2-norm of the gradient in Euclidean space, which remains invariant only under Euclidean transformations. Olver et al. \cite{olver1999affine} showed that there are two basic relative second-order affine differential invariants in two-dimensional affine space: \begin{gather} H=I_{xx}I_{yy}-I_{xy}^2\\ J=I_{xx}I_y^2-2I_xI_yI_{xy}+I_x^2I_{yy} \end{gather} All other second-order differential invariants can be built from these two expressions, and their ratios constitute absolute differential invariants in affine space. The affine gradient magnitude ($affG$) can be defined as in equation (\ref{equ:affG}); to avoid division by zero, we use the modified definition $affG'$: \begin{equation} \label{equ:affG} affG=\left|\frac{H}{J}\right|,\quad\quad affG'=\sqrt{\frac{H^2}{J^2+1}} \end{equation} The Affine-Gradient is superior to the Euclidean-Gradient (EG) because AG is invariant under affine transformations, whereas EG remains invariant only under Euclidean transformations. Using the AG information can therefore improve the robustness of the descriptor to geometric transformations. Ge et al. constructed a new descriptor using the AG in place of the EG in SIFT, which achieved much better performance than the original SIFT \cite{juan2013local}. The gradient and AG information are shown in Fig. \ref{fig:gradient}. \begin{figure}[th] \centering \subfigure[]{ \label{fig:exampleofimage} \scalebox{0.14}{\includegraphics{figures/example.jpg}}} \subfigure[]{ \label{fig:g} \scalebox{0.14}{\includegraphics{figures/gradient.jpg}}} \subfigure[]{ \label{fig:affg1} \scalebox{0.14}{\includegraphics{figures/affg3.jpg}}} \subfigure[]{ \label{fig:affg2} \scalebox{0.14}{\includegraphics{figures/affg2.jpg}}} \caption{The EG and AG information of an example image: (a) example image; (b) EG magnitudes; (c) AG values in the range (0, 0.2); (d) AG values in the range (0.2, 1).} \label{fig:gradient} \end{figure} In Figs. \ref{fig:histofgradient} and \ref{fig:histofaffg}, we can see that the histogram of EG is much more continuous and smooth than that of AG. In fact, the range of AG is from 0 to 162, not limited to the interval (0, 1) shown in Fig.~\ref{fig:histofaffg}; it is simply much sparser for values greater than 1.
By contrast, the distribution of EG ranges only from 0 to 763, as shown in Fig. \ref{fig:histofgradient}. Intuitively, the AG information in the range (0, 1) roughly corresponds to that of EG, as shown in Figs. \ref{fig:g} and \ref{fig:affg1}, and the AG also carries local extremum information, as shown in Fig. \ref{fig:affg2}. \begin{figure}[th] \centering \subfigure[]{ \label{fig:histofgradient} \scalebox{0.3}{\includegraphics{figures/histofgradient.jpg}}} \subfigure[]{ \label{fig:histofaffg} \scalebox{0.3}{\includegraphics{figures/histofaffg2.jpg}}} \caption{The histograms of EG and AG: (a) histogram of EG; (b) histogram of AG.} \label{fig:gradient2} \end{figure} For further verification of the validity of AG, experiments are conducted on the Outex12 dataset. The Local Gradient Pattern (LGP) and Local Affine-Gradient Pattern (LAGP) can be defined as \begin{gather} LGP_{R,P} = \sum_{p=0}^{P-1}s(G_p-G_c)\cdot 2^p\\ LAGP_{R,P} = \sum_{p=0}^{P-1}s(affG'_p-affG'_c)\cdot 2^p \end{gather} where the $s$ function is defined in equation (\ref{equ:LBP}). The Multi-Information based descriptor MI-G can be defined as the concatenation of LGP and LBP. Similarly, MI-AG is the concatenation of LAGP and LBP. The experimental results are listed in Table \ref{table:expMI}. \begin{table} \caption{Results of Multi-Information based descriptors on Outex12} \label{table:expMI} \renewcommand{\arraystretch}{1.4} \setlength\tabcolsep{3pt} \begin{center} \begin{tabular}{lllll} \hline\noalign{\smallskip} Problem & form & $LBP$ & $MI\text{-}G$ & $MI\text{-}AG$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{4}*{Outex12} & $original$ & 55.26 & 58.04 & \textbf{58.69} \\ & $ri$ & 71.37 & 73.49 & \textbf{79.28} \\ & $u2$ & 56.98 & 58.03 & \textbf{60.02} \\ & $riu2$ & 65.09 & 77.62 & \textbf{77.65} \\ \hline \end{tabular} \end{center} \end{table} From the results, we can see that the Multi-Information descriptor based on the Affine-Gradient gets the best performance in all scenarios, demonstrating that the AG information can substantially increase the discriminative power of the descriptors. \subsection{Rotation Invariance} Mehta et al. \cite{mehta2016dominant} proposed a method that quantizes the directions into $P$ discrete values and takes the direction with the maximum magnitude of difference as the reference direction. However, this definition discards the sign information of the difference and assigns opposite directions to the same value. In this paper, we take both the sign and the magnitude of the discrete directions into consideration. The reference direction can be defined as: \begin{equation} Ds = (\mathop{\arg \max}_{p\in(0,1,...,P-1)}{|g_p-g_c|} + \frac{P}{2} \cdot s(g_D-g_c))\mod P \end{equation} where $D$ denotes the maximizing index and $s$ is the sign function defined in equation (\ref{equ:LBP}). The proposed descriptor is computed by rotating the weights with respect to the reference direction. The rotation-invariant LBP (roLBP) can be defined as \begin{equation} roLBP_{R,P} = \sum_{p=0}^{P-1}s(g_p-g_c)\cdot 2^{(p-Ds)\mod P} \end{equation} Applying the reference direction selection method to the LAGP descriptor, we obtain the rotation-invariant descriptor roLAGP: \begin{equation} roLAGP_{R,P} = \sum_{p=0}^{P-1}s(affG'_p-affG'_c)\cdot 2^{(p-Ds)\mod P} \end{equation} The final descriptor AGLBP is then defined as the concatenation of roLBP and roLAGP.
\begin{equation} AGLBP_{R,P} = roLBP_{R,P}\text{\_}roLAGP_{R,P} \end{equation} \subsection{Feature Selection} It is observed that the dimensionality of the descriptor also increases exponentially with the number of neighboring pixels. In \cite{mehta2016dominant}, a feature selection method depending on the distribution of patterns in the training dataset was proposed. Moreover, some patterns may even be detrimental to the final classification result. Therefore, in our method the intraclass variance over the training dataset is also used as an evaluation criterion for feature selection. In the statistical sense, variance is defined as $\frac{1}{n-1}\sum(X-\mu)^2$, where $\mu$ is the mean value of the array. The distribution of the intraclass variance of all patterns is computed from the training dataset, as shown in Fig. \ref{fig:histofvariance}. \begin{figure} \subfigure[ ]{ \label{fig:histofvar1} \scalebox{0.3}{\includegraphics{figures/histofvarofrlbp.jpg}}} \subfigure[ ]{ \label{fig:histofvar2} \scalebox{0.3}{\includegraphics{figures/histofvarofrlbpG.jpg}}} \caption{The intraclass variance distributions on the Outex12 training dataset: (a) variance distribution of roLBP; (b) variance distribution of roLAGP.} \label{fig:histofvariance} \end{figure} The bins of the histogram are sorted in descending order. There are then two methods for feature selection: one selects the top $N$ patterns in the ordered list; the other selects the bins whose variance is less than a threshold $\phi$. The patterns finally selected depend on the threshold parameter $N$ or $\phi$ and on the training dataset, so the final dimensionality of the descriptor is not constant; it varies across datasets. The accuracy-parameter curves of the two methods for roLBP on the Outex12 dataset are plotted in Fig. \ref{fig:curve}. \begin{figure} \subfigure[ ]{ \label{fig:curveofnum} \scalebox{0.3}{\includegraphics{figures/curveofnum.jpg}}} \subfigure[ ]{ \label{fig:curveofvar} \scalebox{0.3}{\includegraphics{figures/curveofvar.jpg}}} \caption{The accuracy-parameter curves for roLBP on the Outex12 dataset: (a) accuracy versus $N$; (b) accuracy versus $\phi$.} \label{fig:curve} \end{figure} It can be observed in Fig. \ref{fig:curveofvar} that the classification accuracy reaches its peak for threshold values roughly between 1.6 and 2.0, just past the peak of the distribution in Fig. \ref{fig:histofvar1}. These values result in a significant reduction of the dimensionality. Thus, the proposed approach considers both the statistical frequency and the intraclass variance of the training textures, which not only reduces the dimensionality of the descriptor but also improves the classification accuracy. The effectiveness of the proposed approach will be demonstrated in the next section. \subsection{Classification method} Some state-of-the-art methods, such as artificial neural networks (ANN), SVM, and AdaBoost, can achieve outstanding classification performance, but they require complex learning procedures and may obscure the analysis of the discriminative capability of the features. To make a fair comparison with other approaches, the Nearest Neighbor (NN) classifier based on the Chi-Square distance is used as our classification method. The effectiveness of the Chi-Square distance for classification is demonstrated in \cite{guo2010descriptor,guo2012local}.
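What follows is a minimal NumPy sketch of the building blocks of this section (our illustration, not the implementation used in the experiments): it computes the $affG'$ map of Eq.~(\ref{equ:affG}) from finite-difference derivatives, the rotation-invariant code shared by roLBP and roLAGP for one sampled neighbourhood, and the Chi-Square distance of the NN classifier. Sampling the $P$ neighbours on the integer grid, rather than by interpolation on a circle of radius $R$, is a simplifying assumption.
\begin{verbatim}
import numpy as np

def aff_grad(img):
    # affG' = sqrt(H^2/(J^2+1)) from the second-order affine invariants H and J
    Iy, Ix = np.gradient(img.astype(float))
    Ixx = np.gradient(Ix, axis=1)
    Ixy = np.gradient(Ix, axis=0)
    Iyy = np.gradient(Iy, axis=0)
    H = Ixx * Iyy - Ixy ** 2
    J = Ixx * Iy ** 2 - 2 * Ix * Iy * Ixy + Ix ** 2 * Iyy
    return np.sqrt(H ** 2 / (J ** 2 + 1.0))

def ro_code(neigh, center, P):
    # rotation-invariant code: binary weights rotated w.r.t. the reference direction Ds
    d = np.asarray(neigh, float) - center
    D = int(np.argmax(np.abs(d)))
    Ds = (D + (P // 2) * int(d[D] >= 0)) % P
    return sum(int(d[p] >= 0) << ((p - Ds) % P) for p in range(P))

def chi_square(h1, h2, eps=1e-10):
    # distance used by the nearest-neighbour classifier
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
\end{verbatim}
The AGLBP histogram of an image is then the concatenation of the roLBP histogram of the intensities and the roLAGP histogram of the $affG'$ map, keeping only the bins retained by the variance-based selection with threshold $\phi$.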
\section{Experiments} \label{sec:exp} To evaluate the proposed descriptor (AGLBP), three experiments are conducted on texture datasets: Outex10, Outex12 and KTH-TIPS2. The Outex10 and Outex12 datasets are for rotation-invariant texture classification with rotation and illumination deformations. KTH-TIPS2 is for material categorization and includes scale and viewpoint variations. The parameter $\phi$ of the proposed method is set to 2 in all our experiments. \subsection{Outex12} Outex is a framework for the empirical evaluation of texture classification algorithms\cite{ojala2002outex}. First, we conduct experiments on the Outex12 dataset. It consists of 9120 images, separated into 24 different texture classes captured under different illuminations and rotations. For each class, the dataset contains 20 training images and 360 ($2 \times 9 \times 20$) testing images, under two different illuminations and nine different orientations. In the experiments we follow the two problems proposed with the dataset\cite{ojala2002outex}: problems 000 and 001. Since the length of the final descriptor depends on the parameters $(R,P)$, we use the conservative settings $(1,8)$, $(2,12)$ and $(3,16)$. All the LBP-based methods were performed and the results are shown in Table \ref{table:exp1}. \begin{table} \caption{Experiment results of LBP based methods on different datasets} \label{table:exp1} \renewcommand{\arraystretch}{1.4} \setlength\tabcolsep{3pt} \begin{center} \begin{tabular}{llccccccccc} \hline\noalign{\smallskip} Problems & (R,P) & $LBP$ & $LBP^{u2}$ & $LBP^{ri}$ & $LBP^{riu2}$ & $LBP\text{-}HF$ & $LBPV$ & $AGLBP$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}*{Outex10} & (1,8) & 50.20 & 57.44 & 82.78 & 74.38 & 72.03 & 91.40 & 63.72\\ & (2,12) & - & 59.62 & 91.48 & 86.74 & 90.52 & 92.18 & \textbf{95.43}\\ & (3,16) & - & 61.35 & 95.76 & 88.92 & 97.03 & 94.37 & \textbf{99.22}\\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}*{Outex12-000} & (1,8) & 54.21 & 55.81 & 72.26 & 65.93 & 70.85 & 76.41 & 61.99\\ & (2,12) & - & 57.85 & 86.78 & 82.66 & 88.49 & 86.80 & \textbf{93.31}\\ & (3,16) & - & 58.56 & 93.50 & 83.98 & 91.08 & 90.85 & \textbf{97.84}\\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}*{Outex12-001} & (1,8) & 56.32 & 58.15 & 70.39 & 64.26 & 77.24 & 77.08 & 67.50\\ & (2,12) & - & 57.08 & 84.77 & 75.86 & 91.34 & 84.09 & \textbf{94.83}\\ & (3,16) & - & 59.49 & 92.97 & 79.63 & 92.40 & 84.76 & \textbf{97.38}\\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}*{KTH-TIPS2} & (1,8) & 90.97 & 85.85 & 83.65 & 82.78 & 88.73 & 78.98 & 81.28\\ & (2,12) & - & 87.92 & 89.75 & 87.95 & 90.87 & 83.00 & \textbf{95.23}\\ & (3,16) & - & 91.95 & 94.36 & 91.52 & 91.85 & 85.10 & \textbf{97.12}\\ \hline \end{tabular} \end{center} \end{table} Among these methods, the proposed method with setting $(3,16)$ achieves the highest accuracy: 97.84\% for problem 000 and 97.38\% for problem 001. For further analysis, we compare our method with other state-of-the-art methods; the results are shown in Table \ref{table:exp2}. It can be seen that the proposed descriptor achieves the best result, and the close second is $DRLBP$, which obtains 97.15\% for problem 000 and 95.37\% for problem 001. \subsection{Outex10} Next, an experiment is conducted on the Outex10 dataset, which includes 4320 images of 24 different classes. These images are captured under the same illumination but rotated at nine different angles. There are 20 images at each angle for each class.
Following the problem proposed with the dataset\cite{ojala2002outex}, the 480 images captured at angle $0^\circ$ are taken as the training set and the remaining 3840 images captured at the other angles are used for testing. The results with various settings are shown in Table \ref{table:exp1}. For further analysis, AGLBP is compared with other state-of-the-art approaches; the results of these methods are also shown in Table \ref{table:exp2}. It can be observed that AGLBP performs well under various rotation deformations. Among all methods, ours with setting $(3,16)$ achieves the highest accuracy of 99.22\%, a slight improvement over the 99.19\% achieved by $DRLBP$. \subsection{KTH-TIPS2 Dataset} An experiment has also been conducted on the KTH-TIPS2 dataset for material classification. The KTH-TIPS2 database contains 11 texture classes of different materials. For each class, the images are captured from 4 different samples of the material, and for each sample the imaging is carried out at 9 different scales, under 4 different illuminations and in 3 different poses. In this experiment, following the protocol used in most research\cite{guo2010rotation,guo2011texture}, the images of one randomly selected sample from each class are taken as the training dataset, and the images from the other samples are taken as the testing dataset. All the methods were performed and the results are shown in Table \ref{table:exp1}. Likewise, AGLBP is compared with other state-of-the-art approaches; the results of these methods are shown in Table \ref{table:exp2}. The proposed descriptor outperforms all the other descriptors again. It can be concluded that our method is effective for texture classification. \begin{table} \caption{Experiment results of descriptors on different datasets} \label{table:exp2} \renewcommand{\arraystretch}{1.4} \setlength\tabcolsep{3pt} \begin{center} \begin{tabular}{lcccccccccc} \hline\noalign{\smallskip} Problems & $LBP^{ri}$ & $LDDP$ & $LCP$ & $LBP\text{-}HF$ & $LBPV$ & $VZ\_MR8$ &$VZ\_Joint$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} Outex10 & 95.76 & 73.16 & 74.12 & 97.03 & 94.37 & 93.59 & 92.00 \\ Outex12-000 & 93.50 & 63.48 & 70.16 & 91.08 & 90.85 & 91.34 & 90.46 \\ Outex12-001 & 92.97 & 68.48 & 68.48 & 92.40 & 84.76 & 92.83 & 91.74 \\ KTH-TIPS2 & 94.36 & 92.74 & 92.15 & 91.85 & 85.10 & 93.50 & 95.46 \\ \noalign{\smallskip} \hline \noalign{\smallskip} Problems & $PLBP$ & $MDLBP$ & $FBLLBP$ & $BIF$ & $LEP$ & $DRLBP$ & $AGLBP$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} Outex10 & 96.64 & 95.34 & 98.68 & - & - & 99.19 & \textbf{99.22}\\ Outex12-000 & 82.79 & 93.96 & 88.38 & - & - & 97.15 & \textbf{97.84}\\ Outex12-001 & 90.08 & 89.94 & 92.17 & - & - & 95.37 & \textbf{97.38}\\ KTH-TIPS2 & - & - & - & 98.50 & 96.41 & 96.78 & \textbf{97.12}\\ \hline \end{tabular} \end{center} \end{table} \section{Conclusion} In this paper we have proposed an Affine-Gradient based Local Binary Pattern (AGLBP) descriptor for texture classification. The Affine-Gradient is different from the Euclidean-Gradient and has been shown to yield a clear improvement for texture classification. In addition, we have proposed an improved method for determining the local reference direction to reach rotation invariance. Importantly, the dimension increase brought by using multiple sources of information is alleviated by the proposed feature selection method, which considers both the statistical frequency and the intraclass variance of the training textures.
Three extensive experiments have been conducted on texture datasets involving rotation, scaling and viewpoint deformations. The results demonstrate that AGLBP performs better than several state-of-the-art approaches for texture classification. AGLBP relies on the Affine-Gradient, which has been demonstrated to be robust to viewpoint deformations. For further research, information invariant under projective transformations could be utilized to further enhance the robustness to viewpoint deformations. \bibliographystyle{splncs03}
\section{Introduction} In the black-box computation model, we assume that the input is given by a black box that, given an index $i$, returns the $i^{\rm th}$ bit of the input. Several efficient quantum algorithms can be considered in this framework, including Grover's algorithm\cite{Grover95} and many of its variants. Beals, Buhrman et al.~\cite{Beals98} proved that almost all $N$-variable Boolean functions require $\Omega(N)$ queries in this model if the computation has to be exact (i.e., no error is allowed). We extend their result to computation with bounded error. In this case, a non-trivial speedup is possible. Namely, van Dam\cite{Dam98} showed that all $N$ input bits can be recovered with just $N/2+o(N)$ queries and arbitrarily small probability of error. This allows one to compute any function with just $N/2+o(N)$ queries. This bound is known to be tight (up to the $o(N)$ term) for the parity function\cite{Beals98,Sipser98} but not for other functions. In this paper, we show that almost all Boolean functions require $N/4-O(\sqrt{N}\log N)$ queries in the quantum black-box model. This matches van Dam's result up to a constant factor ($N/4$ compared to $N/2$). \section{Quantum black-box model} We consider computing a Boolean function $f(x_1, \ldots, x_N):\{0, 1\}^N\rightarrow\{0, 1\}$ in the quantum black-box model\cite{Beals98}. In this model, input bits can be accessed by queries to an oracle $X$ and the complexity of $f$ is the number of queries needed to compute $f$. A computation with $T$ queries is just a sequence of unitary transformations \[ U_0\rightarrow O_1\rightarrow U_1\rightarrow O_2\rightarrow\ldots \rightarrow U_{T-1}\rightarrow O_T\rightarrow U_T\] on a state space with finitely many basis states. We shall assume that the set of basis states is $\{0, 1, \ldots, 2^m-1\}$ for some $m$. (Then, $U_0, O_1, \ldots, U_T$ are transformations on $m$ qubits.) The $U_j$'s are arbitrary unitary transformations that do not depend on $x_1, \ldots, x_N$, and the $O_j$'s are queries to the oracle. To define $O_j$, we represent basis states as $|i, b, z\rangle$ where $i$ consists of $\lceil \log N\rceil$ bits, $b$ is one bit and $z$ consists of all other qubits. Then, $O_j$ maps $|i, b, z\rangle$ to $|i, b\oplus x_i, z\rangle$. (I.e., the first $\lceil\log N\rceil$ qubits are interpreted as an index $i$ for an input bit $x_i$ and this input bit is XORed onto the next qubit.) We start with the state $|0\rangle$, apply $U_0$, $O_1$, $\ldots$, $O_T$, $U_T$ and measure the rightmost bit of the final state. The network computes $f$ exactly if, for every $x_1, \ldots, x_N$, the result of the measurement always equals $f(x_1, \ldots, x_N)$. The network computes $f$ with bounded error if, for every $x_1, \ldots, x_N$, the probability that the result equals $f(x_1, \ldots, x_N)$ is at least $2/3$. For more information about this model, see \cite{Beals98}. \section{Result} We are going to prove that almost all $N$-variable functions $f(x_1, \ldots, x_N)$ require at least $T(N)=\frac{N}{4}-2\sqrt{N}\log N$ queries in the quantum black-box model. First, we state a useful lemma from \cite{Beals98}. \begin{Lemma} \cite{Beals98} Assume we have a computation in the black-box model with $T$ queries. Then, the probability that the measurement at the end of the computation gives 0 (or 1) is a polynomial $p(x_1, \ldots, x_N)$ of degree at most $2T$.
\end{Lemma} If a black-box computation computes $f(x_1, \ldots, x_N)$ with bounded error, $p(x_1, \ldots, x_N)$ must be in the interval $[2/3, 1]$ if $f(x_1, \ldots, x_N)=1$ and in $[0, 1/3]$ if $f(x_1, \ldots, x_N)=0$. In this case, we say that $p$ {\em approximates} $f$. We show that, for almost all Boolean functions, there is no polynomial $p$ of degree $2T$ that approximates $f$. We start by bounding the coefficients of $p$. \begin{Lemma} \label{L1} If a polynomial $p(x_1, \ldots, x_N)$ approximates a Boolean function $f(x_1, \ldots, x_N)$, then the coefficients of all its $d^{\rm th}$ degree terms are between $-2^{Nd+1}$ and $2^{Nd+1}$. \end{Lemma} \noindent {\bf Proof:} By induction. {\bf Base case:} $d=0$. The coefficient is equal to the value of the polynomial on the all-0 vector, $p(0, \ldots, 0)$. Hence, it must be between $-4/3$ and $4/3$. {\bf Inductive case:} Let $c$ be the coefficient of $x_{i_1}x_{i_2}\ldots x_{i_d}$. The value of the polynomial on the assignment with $x_{i_1}=\ldots=x_{i_d}=1$ and all other variables equal to 0 is the sum of $c$ and the coefficients of all terms that use a proper subset of the variables $x_{i_1}, \ldots, x_{i_d}$. These are terms of degree at most $d-1$. Hence, the inductive assumption applies to them: each of them is at most $2^{N(d-1)+1}$ in absolute value and their sum is at most $(2^{d}-1)2^{N(d-1)+1}$. The sum of this and $c$ must be at most $4/3$ in absolute value. Hence, $|c|$ is at most $(2^d-1)2^{N(d-1)+1}+4/3 < 2^{Nd+1}$. $\Box$ This implies a bound on the number of Boolean functions that can be approximated. Let $D(N, d)=\sum_{i=0}^d {N \choose i}$. \begin{Lemma} \label{L2} At most $2^{O(D(N, d)d N^2)}$ functions can be approximated by polynomials of degree $d$. \end{Lemma} \noindent {\bf Proof:} Let $p_1$, $p_2$ be two polynomials. If all coefficients of $p_1$ and $p_2$ differ by at most $2^{-N-2}$, their values on any $(0,1)$-assignment differ by at most $2^{N}2^{-N-2}=1/4$ (since there are at most $2^N$ terms) and these two polynomials cannot approximate two different Boolean functions. By Lemma \ref{L1}, all coefficients of such polynomials are in $[-2^{Nd+1},2^{Nd+1}]$. We split this interval into subintervals of size $2^{-N-2}$. This gives $2^{O(N^2 d)}$ subintervals. If we choose a subinterval for each coefficient, there is at most one Boolean function approximated by a polynomial with coefficients in these intervals (because any two such polynomials differ by at most $1/4$ and, hence, cannot approximate different functions). There are $D(N, d)$ possible terms of degree at most $d$. Hence, there are at most $(2^{O(N^2 d)})^{D(N, d)}=2^{O(D(N,d)N^2 d)}$ combinations of intervals. $\Box$ \begin{Theorem} The fraction of Boolean functions that can be computed with bounded error in the quantum black-box model with at most $T(N)$ queries, for $T(N)=N/4-2\sqrt{N}\log N$, goes to 0 as $N\rightarrow\infty$. \end{Theorem} \noindent {\bf Proof:} Let $d=2T=N/2-4\sqrt{N}\log N$. Then, $D(N, d)\leq\frac{2^N}{N^{4}}$ and $D(N, d) N^2 d\leq D(N, d) N^3 \leq \frac{2^N}{N}$. Hence, black-box computations with at most $T(N)=N/4-2\sqrt{N}\log N$ queries can compute only $2^{\frac{2^N}{N}}=o(2^{2^N})$ functions, but there are $2^{2^N}$ different Boolean functions of $N$ variables. $\Box$
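The estimate $D(N,d)\leq 2^N/N^4$ used in the last step can be checked numerically. The following Python fragment is a small sanity check, not part of the proof; it assumes $\log$ means $\log_2$ and uses the standard entropy bound $\sum_{i=0}^{d}{N\choose i}\le 2^{N H(d/N)}$, valid for $d\le N/2$ with $H$ the binary entropy function:
\begin{verbatim}
import math

def log2_D_upper(N, d):
    # entropy bound: sum_{i<=d} C(N,i) <= 2^{N*H(d/N)} for d <= N/2
    p = d / N
    return N * (-p * math.log2(p) - (1 - p) * math.log2(1 - p))

for N in [2**14, 2**16, 2**18, 2**20]:
    d = int(N / 2 - 4 * math.sqrt(N) * math.log2(N))
    assert 0 < d <= N // 2
    lhs = log2_D_upper(N, d)         # log2 of the bound on D(N, d)
    rhs = N - 4 * math.log2(N)       # log2 of 2^N / N^4
    print(N, lhs <= rhs)             # True for all N tested
\end{verbatim}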
\section{Introduction} There is a long history of studying the relations between the soft behaviour of field theory amplitudes and the symmetries of the underlying theory. Since the 50s it has been known that the leading behaviour of scattering amplitudes with a soft photon is obtained by means of gauge invariance from the corresponding amplitudes without the soft particle\cite{LowFourPt}. The extension of this theorem to the universal leading behaviour of amplitudes with one soft graviton was discussed by Weinberg in the 60s\cite{Weinberg}. The nonlinear realization of symmetries provides another example of the relation existing between symmetries and low energy theorems. When a group $G$ is spontaneously broken to some subgroup $H$, Nambu-Goldstone bosons appear and parametrize the coset space $G/H$. Their interaction with the other fields charged under the symmetry group is described in the Lagrangian by derivative couplings. As a result, amplitudes involving one soft Nambu-Goldstone boson vanish\cite{ArkaniHamed:2008gz,1412.2145}. These are the famous Adler zeros, studied for the first time in the context of pion dynamics\cite{adler}. Recently the leading divergent behaviour of amplitudes with a soft graviton was obtained from the Ward identity\cite{Strominger} of the diagonal Bondi, van der Burg, Metzner and Sachs supertranslation symmetry\cite{BMS}. Later, similar results were obtained for Yang-Mills theory, where the soft gluon theorem arises as the Ward identity of a two dimensional Kac-Moody type symmetry\cite{mitra}. The extension of these theorems to subleading order for gluons, and sub-subleading order for gravitons, has been obtained by computing on-shell scattering amplitudes\cite{all,Bianchi2015} and proved in arbitrary dimensions by using Poincar\'e and on-shell gauge invariance\cite{gauge,BDDN}. In these new soft theorems, $(n+1)$-point amplitudes with a soft graviton or gluon are obtained by acting on $n$-point hard amplitudes with universal soft operators depending on the momenta and polarizations of the hard particles. The symmetries of the quantum field theory are also reflected in the double soft behaviour of scattering amplitudes with scalar particles or gluons. This has been made explicit in Ref.\cite{ArkaniHamed:2008gz} where, in the case of spontaneously broken symmetries, it has been shown that amplitudes with two soft Nambu-Goldstone bosons capture the algebra of the broken generators of the global symmetry. More recently, supergravity amplitudes involving fermions have been studied in three and four dimensions in the kinematic region where two of these particles carry small momentum\cite{1412.1809}. Subleading terms in the emission of two soft scalars, computed in the Cachazo-He-Yuan (CHY) representation of the amplitude, have been determined for a vast class of theories in Ref.\cite{Cachazo:2015ksa}. In the spinor helicity formalism, similar analyses are performed for amplitudes with gravitons and gluons and for scalars of ${\cal N}=4$ super-Yang-Mills in Ref.\cite{Klose:2015xoa}. New soft theorems in gauge theories with more than one soft particle are derived in \cite{Volovich:2015yoa} both in four and in arbitrary dimensions by using respectively the BCFW and CHY formulas. In this very short paper, which is a summary of the main results obtained in Ref.\cite{1502.05258}, a purely string approach to the low energy theorems is presented.
Soft gluon and graviton behaviour was also studied in the framework of string theory in Refs.\cite{Ademollo:1975pf,StringSoft,Volovich:2015yoa,1505.05854}. In these papers it has been shown that string amplitudes reproduce the soft theorems without any $\alpha'$ correction, $\alpha'$ being the string slope\cite{Bianchi2015}. In this work we consider bosonic string theory and not only confirm the results obtained in the literature regarding the emission of soft gravitons, but also extend them to the dilaton and Kalb-Ramond fields. In the case of the Kalb-Ramond field we do not get any pole term, and we find a peculiar relation between $(n+1)$-point amplitudes with a soft antisymmetric tensor and $n$-point hard amplitudes, which involves, as an intermediate step, the introduction of holomorphic and antiholomorphic momenta. This handling of the momenta is quite natural in closed string theory, but the relation obtained between amplitudes with and without the soft particle is not a genuine low energy theorem, because the hard amplitudes are not physical. The amplitude with a soft graviton and $n$ tachyons is obtained through the second subleading order. Following a standard trick, the tachyon is treated as a scalar field with mass $m^2=-4/\alpha'$, which provides an example of the validity of the low energy theorem for massive matter. String theory is also a powerful tool for obtaining field theory amplitudes: there are only a few diagrams at each order of the perturbative expansion, and they are represented as complex integrals over the string moduli space. We have used this compact representation of scattering amplitudes to compute, in bosonic string theory, the colour ordered amplitude with $n+2$ gluons. On this amplitude two different double soft limits are performed: in one case two contiguous gluons are taken with small momentum; in the other, the two soft gluons are separated by a hard particle. In both cases gauge invariant expressions are derived. \section{Single soft limit of string amplitudes} \label{tachyon} \setcounter{equation}{0} The scattering amplitude involving a massless closed string state, graviton or dilaton, and $n$ closed string tachyons is given by the tensor: \begin{eqnarray} M_{\mu \nu}= \frac{8\pi}{\alpha'}\Bigg(\frac{\kappa_d}{2\pi}\Bigg)^{n-1}\int \frac{\prod_{i=1}^{n} d^2 z_i}{d V_{abc}} \prod_{i<j} |z_i - z_j|^{\alpha' k_i k_j} S_{\mu \nu} \, , \label{M1n} \end{eqnarray} where \begin{eqnarray} S_{\mu \nu} = \frac{\alpha'}{2} \int d^2 z \prod_{\ell=1}^{n} | z- z_{\ell}|^{\alpha' k_{\ell} q}\sum_{i=1}^{n} \frac{k_{i\mu}}{z- z_i} \sum_{j=1}^{n} \frac{k_{j\nu}}{{\bar{z}} - {\bar{z}}_j} \, , \label{Nzbarzqki} \end{eqnarray} and $\kappa_d$ is the gravitational coupling constant. The quantities $z_i$ are complex coordinates parametrizing the insertions on the world-sheet of the vertex operators associated to the tachyon states. The coordinate~$z$, without index, is associated to the massless closed string state. Finally, the soft momentum of the massless state is denoted by $q$. In principle $M_{\mu\nu}$ also describes the emission of one antisymmetric tensor from a scattering amplitude with $n$ tachyons. However, this contribution vanishes because the world-sheet parity $\Omega$ leaves the vertex operators of the tachyon, dilaton, and graviton invariant, while it changes the sign of the vertex operator of the Kalb-Ramond field. The main aspect of these new soft theorems consists in finding an operator, $\hat{S}$, that, acting on $n$-point amplitudes, reproduces the soft behaviour of $(n+1)$-point amplitudes.
The soft operator is determined by evaluating Eq.~(\ref{Nzbarzqki}) for small $q$. Eq.~(\ref{Nzbarzqki}) is a sum of integrals on the complex plane that have been explicitly computed in Ref.~\cite{1502.05258}. Here we only quote the result: \begin{eqnarray} \label{totalexpre} && S_{\mu \nu} = 2\pi\Bigg\{ \sum_{i=1}^{n} k_{i\mu} k_{i \nu} \Bigg[ \frac{(\alpha')^2}{2} \sum_{j \neq i} (k_j q) \log^2 |z_i - z_j| \nonumber\\ && + \frac{1}{k_i q} \Big( 1 +\alpha' \sum_{j \neq i} (k_j q) \log |z_i - z_j| + \frac{(\alpha')^2}{2} \sum_{j,k \neq i}(k_j q) (k_k q)\nonumber\\ && \log|z_i -z_j| \log |z_i - z_k| \Big) \Bigg]+\sum_{i \neq j} \frac{k_{i\mu} k_{j\nu} + k_{i\nu} k_{j\mu}}{2}\nonumber\\ &&\times \Bigg[ - \alpha' \log|z_i-z_{j}| + \frac{(\alpha' )^2}{2} \Bigg( \sum_{k \neq i,j} (k_k q) \Big( \log |z_k - z_{i}| \nonumber\\ &&\log |z_k - z_{j}| \Big)- \sum_{k \neq i} (k_k q) \log|z_i - z_{j}| \log |z_k - z_{i}| \nonumber\\ &&- \sum_{k \neq j} (k_k q) \log|z_i - z_{j}| \log |z_k - z_{j}| \Bigg) \Bigg]\Bigg\} + O(q^2) \ . \end{eqnarray} After a long but straightforward calculation, the result of the integrations is rewritten in terms of differential operators acting on the $n$-tachyon amplitude: \begin{eqnarray} &&\frac{M_{\mu \nu}}{\kappa_d}= \sum_{i=1}^{n} \Bigg[ \frac{k_{i\mu} k_{i\nu}}{k_i q} - i \frac{k_{i\nu} q^\rho L^{(i)}_{\mu \rho} }{2k_i q} - i \frac{k_{i\mu} q^\rho L^{(i)}_{\nu \rho} }{2k_i q}\nonumber\\ && - \frac{q^{\rho} L_{i \,\mu \rho} q^{\sigma} L_{i \,\nu \sigma} }{2k_i q} + \left( \frac{1}{2} \left(\eta_{\mu \nu} q_{\sigma} - q_\mu \eta_{\nu \sigma}\right) - \frac{k_{i \mu} q_\nu q_\sigma }{ 2k_iq} \right) \frac{\partial}{\partial k_{i \sigma}} \Bigg]\nonumber\\ &&\times T_n(k_1,\dots k_n) + O(q^2) \ ,\label{1gra4tacbvv} \end{eqnarray} where $T_n$ is the $n$-tachyon amplitude, defined for example in Ref.~\cite{1502.05258}, and the $L_i$ are the angular momentum operators given by: \begin{eqnarray} L_i^{\mu \rho} = i \left( k_{i}^{\mu} \frac{\partial}{\partial k_{i\rho} }- k_{i}^{\rho} \frac{\partial}{\partial k_{i\mu}} \right) \,. \label{Jmurho} \end{eqnarray} The scattering of the graviton, dilaton and Kalb-Ramond field is selected by saturating $M_{\mu\nu}$ with the projectors: \begin{eqnarray} \mbox{Graviton}\, \, (g_{\mu \nu}) \,\,\, &\Longrightarrow& \epsilon^{\mu \nu}_g = \epsilon^{\nu \mu}_g \,\,\, ; \,\,\, \eta_{\mu \nu} \epsilon^{\mu \nu}_g =0 \label{epsG} \\ \mbox{Dilaton } \, (\phi) \,\,\, &\Longrightarrow& \epsilon^{\mu \nu}_d = \eta^{\mu \nu} - q^{\mu} {\bar{q}}^{\nu} - q^{\nu} {\bar{q}}^{\mu} \label{epsd} \\ \mbox{Kalb-Ramond }(B_{\mu \nu} ) \,\,\, &\Longrightarrow& \epsilon^{\mu \nu}_B = - \epsilon^{\nu \mu}_B \label{epsB} \end{eqnarray} where ${\bar{q}}$ is a lightlike vector such that \mbox{$q \cdot {\bar{q}}=1$}. In the case of the graviton, the terms in the square bracket of Eq.~(\ref{1gra4tacbvv}) proportional to $\eta_{\mu\nu}$, $q_\mu$ or $q_\nu$ can be neglected, since they vanish when saturated with the traceless and transverse polarization (\ref{epsG}), and we get \begin{eqnarray} &&\epsilon^{\mu \nu}_g \frac{M_{\mu \nu} (q; k_i )}{\kappa_d} = \epsilon^{\mu \nu}_g \sum_{i=1}^{n} \Bigg[ \frac{k_{i\mu} k_{i\nu}}{k_i q} - i \frac{k_{i\nu} q^\rho L^{(i)}_{\mu \rho} }{2 k_i q} \nonumber\\ &&- i \frac{k_{i\mu} q^\rho L^{(i)}_{\nu \rho} }{2 k_i q} - \frac{q^{\rho} L^{(i)}_{\mu \rho}\, q^{\sigma} L^{(i)}_{\nu \sigma} }{2 k_i q} \Bigg]T_n (k_i ) +O(q^2)\ , \label{Mgravi} \end{eqnarray} which, of course, agrees with the soft theorem for the graviton derived in Section 3 of Ref.~\cite{BDDN}.
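As a simple consistency check of Eq.~(\ref{Mgravi}), consider a gauge shift of the polarization, $\epsilon_g^{\mu \nu} \to \epsilon_g^{\mu \nu} + q^{\mu} \chi^{\nu} + q^{\nu} \chi^{\mu}$ with $\chi q =0$: the leading term changes by \begin{eqnarray} 2 \chi_{\nu} \sum_{i=1}^{n} \frac{(q k_i) \, k_{i}^{\nu}}{k_i q} = 2 \chi_{\nu} \sum_{i=1}^{n} k_{i}^{\nu} = 0 \, , \nonumber \end{eqnarray} which vanishes by momentum conservation, while the variation of the subleading terms vanishes upon using the conservation of the total angular momentum, $\sum_{i=1}^{n} L_i^{\mu \rho} \, T_n = 0$.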
In the case of the dilaton one gets instead: \begin{eqnarray} && \epsilon^{\mu \nu}_d \frac{M_{\mu \nu} (q; k_i)}{\kappa_d} = \Bigg[ \! - \!\sum_{i=1}^{n} \frac{ m_i^2 \left( 1 + q^{\rho} \frac{\partial}{\partial k_{i}^{\rho}} + \frac{1}{2} q^{\rho} q^{\sigma} \frac{ \partial^2}{ \partial k_{i}^{\rho} \partial k_{ i}^{\sigma} } \right) }{k_i q} \nonumber\\ && +2 - \sum_{i=1}^{n} k_{i}^{\rho} \frac{\partial}{ \partial k_{i}^{\rho }}- \sum_{i=1}^{n} \Bigg(k_{i\mu} q_{\sigma} \frac{\partial^2}{\partial k_{i\mu} \partial k_{i\sigma}} - \frac{1}{2} (k_i q) \nonumber\\ &&\times \frac{\partial^2}{\partial k_{i\mu} \partial k_{i}^{\mu}} \Bigg)\Bigg] T_n (k_1,\dots k_n ) +O(q^2)\ , \end{eqnarray} where $m_i^2 = -\frac{4}{\alpha'}$ is the squared mass of the closed string tachyon. The dilaton soft behaviour contains terms of $O( q^{-1} )$ when the other particles are massive, because the three-point amplitude involving a dilaton and two equal particles with mass $m$ is proportional to $m^2$. We have then studied the soft behaviour of amplitudes involving only massless states. In this case the analysis has been carried out up to the subleading order in the soft expansion; the result is rather complicated, but it can be written as a convolution integral, $M_{n+1}=S\ast M_{n}$, with $M_n$ the amplitude of $n$ massless states in the closed bosonic string, given in Ref.~\cite{1502.05258}, and \begin{eqnarray} S=2\pi \epsilon_{q\mu} {\bar{\epsilon}}_{ q\nu} \Big( S_{q^{-1}}^{\mu\nu}+S_{q^0}^{\mu\nu}\Big)+O(q) \end{eqnarray} with \begin{eqnarray} S_{q^{-1}}^{\mu\nu}= \sum_{i=1}^{n} \frac{k_{i}^{\mu} k_{i}^{\nu}}{k_i q} \end{eqnarray} and \begin{eqnarray} &&S_{q^0}^{\mu\nu} = \sqrt{\frac{\alpha'}{2}}\sum_{j \neq i} \left[ \frac{\sqrt{2\alpha'}k_{i}^{\nu} q^{\rho}}{k_i q} \log |z_i - z_j|\left( k_{i}^{\mu} k_{j\rho} - k_{i\rho} k_{j}^{ \mu} \right)\right. \nonumber\\ && - \left( \frac{\theta_i (\epsilon_i q)}{z_i - z_j} \left( \frac{k_{j}^{\mu} k_{i}^{\nu}}{k_i q} - \frac{k_{j}^{\mu} k_{j}^{\nu}}{k_j q} \right) +\mbox{c.c.} \right)- \left( \frac{ \theta_i \epsilon_{i}^{\mu} k_{j}^{\nu}}{z_i - z_j}+\frac{ {\bar{\theta}}_i {\bar{\epsilon}}_{i}^{\nu} k_{j}^{\mu}}{{\bar{z}}_i - {\bar{z}}_j}\right)\nonumber \\ &&\left.+ \frac{k_j q}{k_i q}\Bigg( \frac{ \theta_i \epsilon_i^{\mu} k_{i}^{\nu}}{z_i - z_j}+\frac{ {\bar{\theta}}_i {\bar{\epsilon}}_i^{\nu} k_{i}^{\mu}}{{\bar{z}}_i-\bar{z}_j }\Bigg)\right] - \!\! \sum_{i \neq j} \!\!\frac{ (\theta_j \epsilon_j q)(\theta_i \epsilon_i^{\mu})}{(z_i - z_j)^2} \nonumber \\ &&\times \left( \frac{k_j^{\nu}}{k_j q} - \frac{k_i^{\nu}}{k_i q} \right) - \sum_{i \neq j} \!\!\frac{ ({\bar{\theta}}_j {\bar{\epsilon}}_j q) ({\bar{\theta}}_i {\bar{\epsilon}}_i^{\nu})}{({\bar{z}}_i - {\bar{z}}_j)^2} \Bigg( \frac{k_j^{\mu}}{k_j q} - \frac{k_i^{\mu}}{k_i q} \Bigg) \ . \end{eqnarray} The polarizations of the massless states are conveniently written in the form $\epsilon_{\mu\nu}=\theta\epsilon_\mu\bar{\theta}\bar{\epsilon}_\nu$, with $\theta,\, \bar{\theta}$ Grassmann variables; c.c.\ denotes the complex conjugate. If we use the polarization of the graviton, given in Eq.~(\ref{epsG}), we get the soft behaviour of the graviton in agreement with the result of Ref.~\cite{BDDN}. In the case of the dilaton we get instead: \begin{eqnarray} \frac{M_{n+1}}{\kappa_d}= \left[ 2 - \sum_{i=1}^{n} k_{i\mu} \frac{\partial}{\partial k_{i\mu}} \right] M_n + O (q ) \, , \label{sofdilafin} \end{eqnarray} in agreement with the result of Ref.~\cite{Ademollo:1975pf}.
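A quick way to read Eq.~(\ref{sofdilafin}) is through Euler's theorem: on any amplitude which is homogeneous of degree $d$ in the hard momenta one has \begin{eqnarray} \sum_{i=1}^{n} k_{i\mu} \frac{\partial M_n}{\partial k_{i\mu}} = d \, M_n \, , \nonumber \end{eqnarray} so that on such an amplitude the soft dilaton acts simply as multiplication by $(2-d)$.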
We have checked that the previous soft behaviour is also obtained in the case of the superstring. In order to define a low-energy theorem for the antisymmetric tensor, it is convenient to keep distinct the holomorphic, $k_i$, and anti-holomorphic, ${\bar{k}}_i$, momenta coming from the factorized structure of the vertices in closed string theory. With such a separation one gets: \begin{eqnarray} &&\frac{iM_{n+1} }{\kappa_d} =\epsilon_{q\, \mu\nu}^B \sum_{i=1}^{n} \left[ \frac{( L_i - {\bar{L}}_i )^{\mu \nu}}{2} + \frac{k_i^{\nu} q_{\rho}( S_i - {\bar{S}}_i)^{\mu \rho}}{k_i q} \right]M_n\Bigg|_{k=\bar{k}}\nonumber\\ &&\label{Bmunu44} \end{eqnarray} with \begin{eqnarray} S_i^{\mu\nu}=i\left( \epsilon_i^\mu\frac{\partial }{\partial \epsilon_{i\nu}} -\epsilon_i^\nu\frac{\partial }{\partial \epsilon_{i\mu}}\right) \, , \ {\bar{S}}^{\mu\nu}_i=i\left( {\bar{\epsilon}}_i^\mu\frac{\partial }{\partial {\bar{\epsilon}}_{i\nu}} -{\bar{\epsilon}}_i^\nu\frac{\partial }{\partial {\bar{\epsilon}}_{i\mu}}\right) \ , \end{eqnarray} and $\bar{L}$ is the angular momentum operator written in terms of the anti-holomorphic momenta $\bar{k}$. Eq.~(\ref{Bmunu44}) reproduces the soft behaviour of the antisymmetric tensor, but it is not a genuine soft theorem as in the case of the graviton and dilaton because, due to the separation of $k$ and ${\bar{k}}$, the amplitude $M_n$ is not a physical amplitude. \section{Double soft limit of string amplitudes} In this section we consider the colour-ordered scattering amplitude, $M_{2g;ng}$, involving $(n+2)$ gauge fields living on the world-volume of a D$p$-brane of the bosonic string, and we compute the leading double-soft behaviour when two contiguous gluons become simultaneously soft. We denote by $(\epsilon_{q_1} , q_1)$ and $(\epsilon_{q_2} , q_2)$ the polarizations and momenta of the two contiguous gluons that eventually will become soft, and by $(\epsilon_i , k_i)$ the polarizations and momenta of the remaining gluons. The amplitude has been computed in detail in Ref.~\cite{1502.05258}, and here we give only the result, written as a convolution integral, $M_{2g;ng}= M_{ng} * G_n$, between the $n$-gluon amplitude and the quantity $G_n$, which contains all the information about the double-soft behaviour of the gluons: \begin{eqnarray} && G_n = 2 \alpha' g_{p+1}^2 \int_{0}^{z_{n-1}} dw_1 \int_{0}^{w_1} dw_2 (w_1- w_2)^{2\alpha' q_1 q_2} \nonumber\\ &&\times \prod_{i=1}^{n} \prod_{a=1}^2\Bigg[(z_i - w_a)^{2\alpha' k_i q_a} {\rm e}^{\sqrt{2\alpha'} \frac{\theta_i \epsilon_i q_a}{z_i - w_a}}\Bigg]\Bigg\{ \frac{ (\epsilon_{q_1} \epsilon_{q_2})}{(w_1 - w_2)^2} \nonumber \\ && + \left[ \sum_{i=1}^{n} \frac{\theta_i ( \epsilon_i \epsilon_{q_1} )}{(z_i - w_1)^2} - \sum_{i=1}^{n} \frac{\sqrt{2\alpha'} (k_i \epsilon_{q_1} )}{z_i - w_1} + \frac{\sqrt{2\alpha'} (\epsilon_{q_1} q_2 )}{w_1 - w_2} \right] \nonumber \\ && \left. \times \left[ \sum_{j=1}^{n} \frac{\theta_j ( \epsilon_j \epsilon_{q_2} )}{(z_j - w_2)^2} - \sum_{j=1}^{n} \frac{\sqrt{2\alpha'} (k_j \epsilon_{q_2} )}{z_j - w_2} - \frac{\sqrt{2\alpha'} (\epsilon_{q_2} q_1 )}{w_1 - w_2} \right] \right\} \, .
\label{softfactorSbis} \end{eqnarray} The latter integral has been computed in the limit of small momenta $q_1$, $q_2$, and the resulting expression turns out to be: \begin{eqnarray} &&G_n = \frac{g_{p+1}^{2} }{ q_1 q_2} \Bigg\{ \Bigg[- \frac{(\epsilon_{q_1} \epsilon_{q_2} )k_n (q_2 - q_1) + q_1 q_2 }{2s_{n} } \nonumber\\ && + \frac{ (\epsilon_{q_1} q_2 ) (\epsilon_{q_2} k_n) - (\epsilon_{q_2} q_1 ) (\epsilon_{q_1} k_n)}{ s_n } + \frac{(\epsilon_{q_1} k_n) (\epsilon_{q_2} k_n) (q_1 q_2)}{(k_n q_2) s_n} \nonumber \\ && + k_{n}\leftrightarrow k_{n-1}\Bigg] - \frac{(\epsilon_{q_1} k_{n-1}) ( \epsilon_{q_2} k_n) (q_1 q_2) }{(k_{n-1} q_1 ) (k_n q_2)} \Bigg\}, \label{a} \end{eqnarray} where $s_\alpha=k_\alpha(q_1+q_2)+q_1q_2$ with $\alpha=n,n-1$. Eq.~(\ref{a}) is gauge invariant and behaves as $\frac{1}{q_{1} q_2}$ in the double-soft limit, i.e.\ when $q_1$ and $q_2$ simultaneously go to zero. The double-soft behaviour of the $(n+2)$-point colour-ordered amplitude with two soft particles separated by a hard one is evaluated along the same lines as in the contiguous case. The resulting expression is again a convolution between the $n$-point gluon amplitude and the momentum-dependent quantity: \begin{eqnarray} G_{2g} &=& g_{p+1}^2 \left[ \frac{ k_{n-2} \epsilon_{q_1}}{k_{n-2} q_1 } \left( \frac{k_{n-1} \epsilon_{q_2}}{k_{n-1} q_2} - \frac{k_{n} \epsilon_{q_2}}{k_{n} q_2} \right) + \frac{k_{n-1} \epsilon_{q_1}}{k_{n-1} q_1} \frac{k_{n} \epsilon_{q_2}}{k_{n} q_2} \right. \nonumber \\ &-& \left. \frac{ (\epsilon_{q_1} k_{n-1}) (\epsilon_{q_2} k_{n-1}) }{ k_{n-1}(q_1 + q_2) + q_1 q_2 } \left( \frac{1}{ k_{n-1} q_1} + \frac{1}{ k_{n-1} q_2} \right) \right] \, . \label{sidstesidste} \end{eqnarray} It is easy to check that this soft factor is gauge invariant up to terms of order $q_{1,2}^0$, as in the case of two contiguous soft gluons: replacing $\epsilon_{q_1}\to q_1$ in Eq.~(\ref{sidstesidste}), the first two terms combine into $\frac{k_{n-1} \epsilon_{q_2}}{k_{n-1} q_2}$, which is cancelled by the last term up to the factor $\frac{k_{n-1}(q_1+q_2)}{k_{n-1}(q_1+q_2)+q_1q_2}=1+O(q)$, so that the total variation is of order $q^0$. \section{Concluding Remarks} We have presented the results of Ref.~\cite{1502.05258}, showing that bosonic string theory is a useful framework for computing low-energy properties of scattering amplitudes in a gauge covariant way and in any dimension. The framework also allows one to extend the results straightforwardly to higher orders in the soft momenta and to apply them directly to superstrings; we plan to present these extensions in future work.
\section{Introduction, Results and Examples} Throughout the paper, we denote by $f$ a meromorphic function in the complex plane $\mathbb{C}$, and we assume that the reader is familiar with the basic notions of the Nevanlinna value distribution theory of meromorphic functions, such as $T(r,f)$, $N(r,f)$ and $m(r,f)$ (see \cite{Hayman_Oxford,Laine_Gruyter}). The notation $S(r,f)$ will be used to denote any quantity satisfying $S(r,f)=o(T(r,f))$ as $r\rightarrow \infty$, possibly outside a set $E$ of $r$ of finite logarithmic measure. In addition, we will use the symbols $\rho(f)$, $\lambda(f)$ and $\tau(f)$ to denote the order, the exponent of convergence of the zeros, and the type of $f$, respectively. The symbol $L(f)$ will be used to represent a linear differential polynomial in $f$ with polynomial coefficients. Also, throughout this paper, by $card(S)$ we mean the cardinality of a set $S$, i.e., the number of elements in $S$.\par Considering the non-linear differential equation $$L(f)-p(z)f^n(z)=h(z),$$ in 2001, Yang \cite{Yang-Aust} investigated the transcendental finite-order entire solutions $f$ of the equation, where $p(z)$ is a non-vanishing polynomial, $h(z)$ is entire and $n\geq 4$ is an integer.\par In 2010, Yang-Laine \cite{Yang-Laine-Jpn} showed that the equation $$f(z)^2 + q(z)f(z + 1) = p(z),$$ where $p(z)$, $q(z)$ are polynomials, admits no transcendental entire solutions of finite order.\par In the last two decades, researchers have mainly studied (see \cite{Biswas-Banerjee,Li-Lu-Xu,Liao-Yang-Zhang,Yang-Aust,Zhang-Liao} etc.) the following three distinct features of solutions of shift, delay-differential or differential equations:\\ i) existence and non-existence conditions,\\ ii) order of growth, and \\ iii) different types of forms of solutions. \par Next, let us consider the exponential polynomial $f(z)$, defined by the form \bea\label{e1.1} f(z) = P_1(z)e^{Q_1(z)} + \cdots + P_k(z)e^{Q_k(z)},\eea where the $P_j$'s and $Q_j$'s are polynomials in $z$. Steinmetz \cite{Steinmetz} showed that (\ref{e1.1}) can be written in the normalized form \bea\label{e1.2}f(z) = H_0(z) + H_1(z)e^{\omega_1z^t}+\cdots + H_m(z)e^{\omega_mz^t},\eea where the $H_j$ are either exponential polynomials of order $< t$ or ordinary polynomials in $z$, the leading coefficients $\omega_j$ are pairwise distinct, and $m\leq k$.\par Let $co(\mathcal{W})$ be the convex hull of a set $\mathcal{W} \subset\mathbb{C}$, that is, the intersection of all convex sets containing $\mathcal{W}$. If $\mathcal{W}$ contains finitely many elements, then $co(\mathcal{W})$ is obtained as an intersection of finitely many half-planes, and so $co(\mathcal{W})$ is either a compact polygon with a non-empty interior or a line segment. We denote by $C(co(\mathcal{W}))$ the circumference of $co(\mathcal{W})$. If $co(\mathcal{W})$ is a line segment, then $C(co(\mathcal{W}))$ equals twice the length of this segment. Throughout the paper, we denote $W =\{\bar{\omega}_1,\bar{\omega}_2,\ldots,\bar{\omega}_m\}$ and $W_0 = W \cup \{0\}$.\par Nowadays, determining the form of exponential polynomial solutions of certain non-linear differential-difference equations has become an interesting topic among researchers (see \cite{Chen-Gao-Zhang_CMFT1,Chen-Hu-Wang_CMFT2,Xu-Rong}). In this regard, the first attempt was probably made by Wen-Heittokangas-Laine \cite{Wen-Heittokangas-Laine}.
In 2012, they considered the equation \bea\label{e1.3}f(z)^n + q(z)e^{Q(z)}f(z + c) = P(z),\eea where $q(z)$, $Q(z)$, $P(z)$ are polynomials, $n\geq 2$ is an integer and $c\in\mathbb{C}\backslash \{0\}$. Wen-Heittokangas-Laine also pointed out that, for a non-constant polynomial $\alpha(z)$ and $d \in\mathbb{C}$, every solution $f$ of the form (\ref{e1.2}) reduces to a function which belongs to one of the following classes: \beas\Gamma_1 &=&\{e^{\alpha(z)} + d \},\\ \Gamma_0 &=& \{e^{\alpha(z)}\},\eeas and classified the finite-order entire (meromorphic) solutions of (\ref{e1.3}) as follows: \begin{theoA}\cite{Wen-Heittokangas-Laine} Let $n \geq 2$ be an integer, let $c \in\mathbb{C}\backslash \{0\}$, and let $q(z)$, $Q(z)$, $P(z)$ be polynomials such that $Q(z)$ is not a constant and $q(z) \not\equiv 0$. Then the finite-order entire solutions $f$ of equation (\ref{e1.3}) satisfy the following conclusions: \begin{itemize} \item [(a)] Every solution $f$ satisfies $\rho(f) = \deg Q$ and is of mean type. \item [(b)] Every solution $f$ satisfies $\lambda(f) = \rho(f)$ if and only if $P(z) \not\equiv 0$. \item [(c)] A solution $f$ belongs to $\Gamma_0$ if and only if $P(z) \equiv 0$. In particular, this is the case if $n \geq 3$. \item [(d)] If a solution $f$ belongs to $\Gamma_0$ and if $g$ is any other finite-order entire solution of (\ref{e1.3}), then $f = \eta g$, where $\eta^{ n-1} = 1$. \item [(e)] If $f$ is an exponential polynomial solution of the form (\ref{e1.2}), then $f \in \Gamma_1$. Moreover, if $f\in \Gamma_1\backslash \Gamma_0$, then $\rho(f) = 1$. \end{itemize} \end{theoA} Inspired by {\it Theorem A}, in 2016, Liu \cite{Liu_Mediterr_2016} replaced $f(z+c)$ by $f^{(k)}(z+c)$ in (\ref{e1.3}) and, for two polynomials $p_1(z)$, $p_2(z)$ and a non-constant polynomial $\alpha(z)$, introduced two new classes of solutions: \beas\Gamma_1' &=&\{p_1(z)e^{\alpha(z)} + p_2(z) \},\\ \Gamma_0' &=& \{p_1(z)e^{\alpha(z)}\},\eeas to obtain the following theorem. \begin{theoB}\cite{Liu_Mediterr_2016} Under the same situation as in {\em Theorem A} with $k\geq 1$, the finite-order transcendental entire solutions $f$ of \bea\label{e1.4}f(z)^n + q(z)e^{Q(z)}f^{(k)}(z + c) = P(z)\eea satisfy the conclusions {\em (a)}, {\em (b)}, {\em (d)} of {\em Theorem A} and \begin{itemize} \item [(1)] A solution $f$ belongs to $\Gamma_0'$ if and only if $P(z) \equiv 0$. In particular, this is the case if $n \geq 3$. \item [(2)] If $f$ is an exponential polynomial solution of (\ref{e1.4}) of the form (\ref{e1.2}), then $f \in \Gamma_1'$. \end{itemize} \end{theoB} Recently, Liu-Mao-Zheng \cite{Liu-Mao-Zheng_OpenMath} considered $\Delta_cf(z)$ instead of $f(z+c)$ in (\ref{e1.3}) and proved the following. \begin{theoC}\cite{Liu-Mao-Zheng_OpenMath} Under the same situation as in {\em Theorem A}, the finite-order entire solutions $f$ of the equation \bea\label{e1.5}f(z)^n + q(z)e^{Q(z)}\Delta_cf(z) = P(z)\eea satisfy the conclusions {\em (a)}, {\em (b)} and \begin{itemize} \item [(1)] $\lambda(f) = \rho(f)-1$ if and only if $P(z) \equiv 0$. In particular, this is the case if $n \geq 3$. \item [(2)] If $n\geq3$ or $P(z) \equiv 0$, $f$ is of the form $f(z)=A(z)e^{\omega z^s}$, where $s=\deg{Q}$, $\omega$ is a nonzero constant and $A(z) (\not\equiv 0)$ is an entire function satisfying $\lambda(A) = \rho(A)=\deg{Q}-1$. In particular, if $\deg{Q}= 1$, then $A(z)$ reduces to a polynomial.
\item [(3)] If $f$ is an exponential polynomial solution of (\ref{e1.5}) of the form (\ref{e1.2}), then $f$ is of the form $$f(z) = H_0(z) + H_1(z)e^{\omega_1z},$$ where $H_0(z)$, $H_1(z)$ are non-constant polynomials and $\omega_1$ is a non-zero constant satisfying $e^{\omega_1c}=1$. \end{itemize} \end{theoC} In 2017, Li-Yang \cite{Li-Yang_jmaa_2017} considered the following form of equation \bea\label{e1.6}f^n(z)+a_{n-1}f^{n-1}(z)+\cdots+a_1f(z)+q(z)e^{Q(z)}f(z+c)=P(z),\eea where $a_i\in\mathbb{C}$, and proved the following results. \begin{theoD}\cite{Li-Yang_jmaa_2017} Under the same situation as in {\em Theorem A}, the finite-order entire solutions $f$ of equation (\ref{e1.6}) satisfy the conclusions {\em (a)}, {\em (d)} and \begin{itemize} \item [(1)] If zero is a Borel exceptional value of $f(z)$, then we have $a_{n-1}=\cdots=a_1=P(z)\equiv 0$. \item [(2)] If $P(z)\equiv 0$, then we have $z^{n-1}+a_{n-1}z^{n-2}+\cdots+a_1=\left(z+\frac{a_{n-1}}{n-1}\right)^{n-1}$. Furthermore, if there exists $i_0\in\{1,\ldots, n-1\}$ such that $a_{i_0}=0$, then all of the $a_j (j=1,\ldots, n-1)$ must be zero as well and we have $\lambda(f) <\rho(f)$; otherwise we have $\lambda(f) =\rho(f)$. \item [(3)] A solution $f$ belongs to $\Gamma_0$ if and only if $P(z) \equiv 0$ and there exists an $i_0\in\{1, \ldots, n-1\}$ such that $a_{i_0}=0$. \item [(4)] When $n \geq 3$, if there exists an $i_0\in\{1, \ldots, n-1\}$ such that $a_{i_0}=0$ and $ card \{z: p(z) =p'(z) =p''(z) =0\} \geq 1$ or $ card \{z: p(z) =p'(z) =0\} \geq 2$, where $p(z) =z^n+a_{n-1}z^{n-1}+\cdots+a_1z$, then $f$ belongs to $\Gamma_0$ and $a_{n-1}=\cdots=a_1= 0\equiv P(z)$. \end{itemize} \end{theoD} In the same paper, Li-Yang \cite{Li-Yang_jmaa_2017} also proved the following result. \begin{theoE}\cite{Li-Yang_jmaa_2017} If $f$ is an exponential polynomial solution of the form (\ref{e1.2}) of the equation (\ref{e1.6}) for $n=2$ and $a_1\neq 0$, then the following conclusions hold. \begin{itemize} \item [(1)] when $m\geq 2$, there exist $i,j\in\{1,2,\ldots,m\}$ such that $\omega_i=2\omega_j$. \item [(2)] when $m =1$, then $f\in\Gamma_1$. Moreover, if $f\in\Gamma_1\backslash\Gamma_0$, then $\rho(f) =1$, $f(z)=Ke^{\frac{1}{c}\left(2k\pi i-\log\frac{2d+a_1}{d}\right)z}+d$, $Q(z)=\frac{1}{c}\left(2k\pi i-\log\frac{2d+a_1}{d}\right)z$, $q(z)=-\frac{2d+a_1}{d}$ and $d^2+a_1d =P(z)$, where $K, d \in\mathbb{C}\backslash\{0\}$ and $k\in\mathbb{Z}$. \end{itemize} \end{theoE} We now introduce the generalized linear delay-differential operator acting on $f(z)$, \bea\label{e1.7}L(z,f)=\sum_{i=0}^{k}b_if^{(r_i)}(z+c_i)\;(\not\equiv 0),\eea where $b_i,c_i\in\mathbb{C}$, the $r_i$ are non-negative integers, $c_0=0$ and $r_0=0$. In view of the above theorems, it is quite natural to characterize the nature of exponential polynomials as solutions of certain non-linear complex equations involving a generalized linear delay-differential operator. In this regard, we consider the following non-linear delay-differential equation \bea\label{e1.8} f^{n}(z)+\sum_{i=1}^{n-1}a_{i}f^{i}(z)+q(z)e^{Q(z)}L(z,f)=P(z),\eea where $a_i\in\mathbb{C}$, $n$ is a positive integer, and $q(z)$, $Q(z)$, $P(z)$ are polynomials such that $q(z)$ is non-zero and $Q(z)$ is non-constant.
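Note that the operator (\ref{e1.7}) contains all the operators appearing in {\em Theorems A-E} as special cases: for instance, taking $k=1$, $b_0=0$, $b_1=1$, $r_1=0$ and $c_1=c$ in (\ref{e1.7}) gives $L(z,f)=f(z+c)$ as in (\ref{e1.3}); taking instead $r_1\geq 1$ gives $L(z,f)=f^{(r_1)}(z+c)$ as in (\ref{e1.4}); and taking $k=1$, $b_0=-1$, $b_1=1$, $r_1=0$ and $c_1=c$ gives $L(z,f)=\Delta_cf(z)$ as in (\ref{e1.5}).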
We also introduce, for any polynomials $p_i(z)$ and non-constant polynomials $\alpha_i(z)$, a new class of solutions as follows: \beas \Gamma_2' &=&\{p_1(z)e^{\alpha_1(z)}+p_2(z)e^{\alpha_2(z)}+p_3(z) \}.\eeas We are now in a position to present our main result, which improves all the above-mentioned results, as follows: \begin{theo}\label{t1.1} Under the same situation as in {\em Theorem A}, the finite-order entire solutions $f$ of equation (\ref{e1.8}) satisfy the following: \begin{itemize} \item [(i)] Every solution $f$ satisfies $\rho(f) = \deg Q$ and is of mean type. \item [(ii)] If zero is a Borel exceptional value of $f(z)$, then we have $a_{n-1}=\cdots=a_1=P(z)\equiv 0$. Conversely, if $P(z) \equiv 0$ and there exists an $i_0\in\{1, \ldots, n-1\}$ such that $a_{i_0}=0$, then all of the $a_j$'s $(j=1,\ldots, n-1)$ must be zero and we have $\lambda(f) <\rho(f)$; otherwise we have $\lambda(f) =\rho(f)$. \item [(iii)] If a solution $f$ belongs to $\Gamma_0'$, then $a_{n-1}=\cdots=a_1=P(z)\equiv 0$. Conversely, if $P(z) \equiv 0$ and there exists an $i_0\in\{1, \ldots, n-1\}$ such that $a_{i_0}=0$, then either $\lambda(f)=\rho(f)-1$, when $c_i=c_j$ for all $1\leq i,j\leq k$, or $f$ belongs to $\Gamma_0'$. \item [(iv)] Let $n \geq 3$. If at least one $a_{i_0}=0$ $(i_0\in\{1,2, \ldots, n-1\})$ and $p(z) =z^n+a_{n-1}z^{n-1}+\cdots+a_1z$ is such that $ card \{z: p(z) =p'(z) =p''(z) =0\} \geq 1$ or $ card \{z: p(z) =p'(z) =0\} \geq 2$, then $P(z)\equiv 0$, $a_{n-1}=\cdots=a_1= 0$ and $f\in\Gamma_0'$. Moreover, $ card \{z: p(z) =p'(z) =0\} \geq 2$ is not possible. \item [(v)] Let $f$ be given by (\ref{e1.2}), which is a solution of (\ref{e1.8}) for $n=2$ and $a_1\neq 0$. Then the following conclusions hold: \begin{itemize} \item [(a)] when $m\geq 2$, there exist $i,j\in\{1,2,\ldots,m\}$ such that $\omega_i=2\omega_j$. In this case, $f\in\Gamma_2'$. \item [(b)] when $m=1$, then $f$ takes the form $f(z) = H_0(z) + H_1(z)e^{\omega_1z^t}$, i.e., $f\in\Gamma_1'$. In this case, \begin{itemize} \item [(I)] either $t=1$, $\rho(f)=1$, $H_0(z)$, $H_1(z)$ are polynomials and $Q(z)$ is a polynomial of degree $1$; \item [(II)] or $H_0(z)=-\frac{a_1}{2}$, $P(z)=-\frac{a_1^2}{4}$, $H_1^2(z)=\frac{b_0a_1}{2}q(z)e^{Q_{t-1}(z)}$ and $L(z,f)=b_0H_0(z)$; \item [(III)] or $H_0(z)=-\frac{a_1}{2}$, $P(z)=-\frac{a_1^2}{4}$, $H_1^2(z)=-q(z)e^{Q_{t-1}(z)}\mathcal{A}_1(z)$ and $L(z,f)=\mathcal{A}_1(z)e^{\omega_1z^t}$, where $\mathcal{A}_1(z)=\sum_{i=0}^{k}b_i\tilde{H}_1(z+c_i)e^{\omega_1(z+c_i)^t-\omega_1z^t}$ such that the $\tilde{H}_1(z+c_i)$ are the delay-differential polynomials of $H_1(z)$. \end{itemize} \end{itemize} \end{itemize} \end{theo} \begin{rem} Note that {\em Cases (i)-(iv)} and {\em (v)} of {\em Theorem \ref{t1.1}} improve {\em Theorems D} and {\em E}, respectively. Also, since $L(z,f)$ includes $f^{(k)}(z + c)$ and $\Delta_cf(z)$, {\em Theorem \ref{t1.1}} improves {\em Theorems B-C} as follows:\par {\em {(I)}} {\em Cases (i)}, {\em (ii)} and {\em (iii)-(iv)} of {\em Theorem \ref{t1.1}} improve {\em Case (a)}, {\em (b)} and {\em (1)} of each of {\em Theorems B-C}, respectively.\par {\em {(II)}} {\em Case (v)-(b)} of {\em Theorem \ref{t1.1}} improves, respectively, {\em Case (2)} and {\em Case (3)} of {\em Theorem B} and {\em Theorem C}. \end{rem} The following three examples clarify {\em Cases (ii)-(iii)}. \begin{ex} Take $L(z,f)=f''(z+c)$. Then the function $f=e^{2z}$ satisfies the equation $f^2-\frac{1}{4}e^{2z}L(z,f)=0$ such that $e^{2c}=1$. Clearly, $0=\lambda(f)=\rho(f)-1$. This example clarifies {\em Theorem B} as well.
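Indeed, since $e^{2c}=1$ we have $f''(z+c)=4e^{2(z+c)}=4e^{2z}$, and hence $f^2-\frac{1}{4}e^{2z}f''(z+c)=e^{4z}-e^{4z}=0$.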
\end{ex} \begin{ex} Let $L(z,f)=\Delta_cf(z)$. Then the function $f=e^{\alpha z}$ satisfies the equation $f^2-\frac{1}{2}e^{\alpha z}L(z,f)=0$ such that $e^{\alpha c}=3$. Clearly, $0=\lambda(f)=\rho(f)-1$. This example also satisfies {\em Theorem C}. \end{ex} \begin{ex} Let $L(z,f)=f(z+1)+f'(z+1)-f''(z+1)$. Then the function $f=(z+1)e^{z}$ satisfies the equation $f^2-(z+1)e^{z-1}L(z,f)=0$. Note that, here $c_1=c_2=c_3=1$ and $f\in\Gamma_0'$. \end{ex} The next example satisfies {\em Case (iv)}. \begin{ex} Let $L(z,f)=f(z+\log 2)+f''(z+\pi i)$. Then the function $f=e^{iz}$ satisfies the equation $f^3+qe^{2i z}L(z,f)=0$, where $q=\frac{1}{e^{-\pi}-2^i}$. Note that, here $p(z)=z^3$ and $ card \{z: p(z) =p'(z) =p''(z) =0\} = 1$. Also, $a_2=a_1=0\equiv P(z)$ and $f\in\Gamma_0'$. \end{ex} The following example shows that {\em Case (v)-(a)} indeed occurs. \begin{ex} Take $L(z,f)=f'(z+\log 4)-4f(z+\log 3)$ and $m=2$. Then the function $f=e^{2z}-e^{z}+1$ satisfies the equation $f^2-2f+\frac{1}{4}e^{2z}L(z,f)=-1$. Note that here $f\in\Gamma_2'$. \end{ex} The following two examples show that {\em Case (v)-(b)-(I)} actually holds. \begin{ex} We take $L(z,f)=f(z+c)$. Then the function $f=d+e^{\alpha z}$ satisfies the equation $f^2-df-e^{\alpha z}L(z,f)=0$ such that $e^{\alpha c}=1$. Here, $P(z)\equiv 0$.\\ Also, the same function satisfies $f^2-3df+e^{\alpha z}L(z,f)=-2d^2$ such that $e^{\alpha c}=-1$. Here $f\in\Gamma_1'$ and $P(z)\not\equiv 0$. This example is true for {\em Theorem E} as well. \end{ex} \begin{ex} Put $L(z,f)=f(z+\log 2)+f'(z+\pi i)+f''(z+2\pi i)$ and $m=1$. Then the function $f=2+3e^{z}$ satisfies the equation $f^2-3f-\frac{3}{2}e^{z}L(z,f)=-2$. Here, $P(z)\not\equiv 0$.\\ Also, let $L(z,f)=f(z+\log 3)-f'(z+\log 4)+f''(z+\log 2)$ and $m=1$. Then the function $f=3+e^{z}$ satisfies the equation $f^2-3f-e^{z}L(z,f)=0$. Here, $P(z)\equiv 0$. \end{ex} The next example shows that {\em Case (v)-(b)-(II)} actually occurs. \begin{ex} Let $L(z,f)=3f(z)+f'(z+\log 2)-3f''(z+2\pi i)$ and $m=1$. Then the function $f=-\frac{a_1}{2}+2e^{3z}$ satisfies the equation $f^2+a_1f+\frac{8}{3a_1}e^{6z}L(z,f)=-\frac{a_1^2}{4}$. Note that here $b_0=3$, $H_0=-\frac{a_1}{2}$ and so $L(z,f)=-\frac{3a_1}{2}=b_0H_0$. \end{ex} The next example shows that {\em Case (v)-(b)-(III)} actually occurs. \begin{ex} Let $L(z,f)=f(z)-f(z+\log 2)+\frac{1}{2}f'(z+\log 2)+\frac{2}{9}f'(z+\log 3)-\frac{1}{9}f''(z+\log 3)$ and $m=1$. Then the function $f=-\frac{a_1}{2}+ze^{2z}$ satisfies the equation $f^2+a_1f-ze^{2z} L(z,f)=-\frac{a_1^2}{4}$. Note that here, $q(z)=-z$, $Q(z)=2z$, $Q_{t-1}(z)=0$. Also, $\mathcal{A}_1(z)=\sum_{i=0}^{4}\left(b_i\tilde{H}_1(z+c_i)e^{2c_i}\right)=z$. So, $H_1^2(z)=z^2=-q(z)e^{Q_{t-1}(z)}\mathcal{A}_1(z)$ and $L(z,f)=\mathcal{A}_1(z)e^{2z}=\sum_{i=0}^{4}\left(b_i\tilde{H}_1(z+c_i)e^{2c_i}\right)e^{2z}=ze^{2z}$. \end{ex} \section{Lemmas} We give the following well-known results, which are important for proving our theorems. \begin{lem}\label{l1}\cite{Chiang-Feng} Let $f(z)$ be a non-constant meromorphic function of finite order $\rho$, and let $c_1$, $c_2$ be two distinct complex numbers. Then \beas m\left(r,\frac{f(z+c_1)}{f(z+c_2)}\right)=S(r,f).\eeas \end{lem} \begin{lem}\cite[Corollary 2.3.4]{Laine_Gruyter}\label{l2} Let $f$ be a transcendental meromorphic function and $k\geq 1$ be an integer. Then $m\left(r,\frac{f^{(k)}}{f}\right)= S(r,f)$.
\end{lem} Combining {\em Lemmas \ref{l1}-\ref{l2}}, we have the following lemma: \begin{lem}\label{l3} Let $f(z)$ be a meromorphic function of finite order, and let $c\in\mathbb{C}$, $k\geq 1$ be an integer. Then $m\left(r,\frac{f^{(k)}(z+c)}{f}\right)= S(r,f)$. \end{lem} \begin{proof} $$ m\left(r,\frac{f^{(k)}(z+c)}{f}\right)=m\left(r,\frac{f^{(k)}(z+c)}{f(z+c)}\cdot\frac{f(z+c)}{f(z)}\right)\leq m\left(r,\frac{f^{(k)}(z+c)}{f(z+c)}\right)+m\left(r,\frac{f(z+c)}{f(z)}\right)=S(r,f).$$ \end{proof} \begin{lem}\label{l4}\cite{Li-Yang_jmaa_2017} Let $f$ be a non-constant meromorphic function of hyper-order less than $1$ and $c\in\mathbb{C}$. Then \beas N\left(r,\frac{1}{f(z+c)}\right)= N\left(r,\frac{1}{f(z)}\right)+S(r,f).\eeas \end{lem} \begin{lem}\label{l5}\cite{Yang-HX-Kluwer} Suppose $f_j(z)$ $(j=1,2,\ldots,n+1)$ and $g_k(z)$ $(k=1,2,\ldots,n)$ $(n \geq 1)$ are entire functions satisfying the following conditions: \begin{itemize} \item[(i)] $ \sum_{j=1}^{n} f_j (z)e^{g_j(z)} \equiv f_{n+1} (z)$; \item[(ii)] the order of $f_j (z)$ is less than the order of $e^{g_k(z)}$ for $1 \leq j \leq n +1$, $1 \leq k \leq n$, and furthermore, the order of $f_j (z)$ is less than the order of $e^{g_h(z)-g_k(z)}$ for $n \geq 2$ and $1\leq j \leq n +1$, $1\leq h<k\leq n$. \end{itemize} Then $f_j (z)\equiv 0$ $(j =1,2,\ldots,n+1)$. \end{lem} \begin{lem}\label{l6}\cite{Chen_Science Press,Hayman_Oxford} Let $f$ be a meromorphic function and suppose that $$R(z)=a_n(z)f(z)^n+\cdots+a_0(z)$$ has small meromorphic coefficients $a_j(z)$, $a_n(z)\not\equiv 0$, in the sense of $T(r, a_j)=S(r, f)$. Moreover, assume that $$\ol N\left(r,\frac{1}{R}\right)+\ol N(r, f)=S(r, f).$$ Then $$R(z)=a_n\left(f+\frac{a_{n-1}}{na_n}\right)^n.$$ \end{lem} The following lemma gives the Nevanlinna characteristic and counting functions of an exponential polynomial. \begin{lem}\label{l7}\cite{Steinmetz} Let $f(z)$ be given by (\ref{e1.2}). Then $$T (r, f) = C(co(W_0))\frac{r^t}{2\pi}+ o(r^t).$$ If $H_0(z) \not\equiv 0$, then $$m(r,1/f)= o(r^t),$$ while if $H_0(z) \equiv 0$, then $$N(r,1/f)=C(co(W))\frac{r^t}{2\pi}+ o(r^t).$$ \end{lem} Next, we prove the following lemmas, which are the core parts of our paper. \begin{lem}\label{l8} Let $f$ be given by (\ref{e1.2}) which is a solution of (\ref{e1.8}) for $n=2$ and $\omega_i\neq 2\omega_j$. If the points $0,\omega_1,\omega_2,\ldots,\omega_m$ are collinear, then $m=1$. \end{lem} \begin{proof} [\bf\underline{Proof}] Assume, contrary to the assertion, that $m\geq 2$. For each $i \in\{1,2,\ldots, m\}$, we may write $\omega_i=\xi_i\omega$, where the constants $\xi_i\in\mathbb{R}\backslash\{0\}$ are distinct, $\xi_0=0$ and $\omega\in\mathbb{C}\backslash \{0\}$. Moreover, we may suppose that $\xi_i>\xi_j$ for $i >j$. Equation (\ref{e1.8}) can be written as \bea\label{e2.1} &&\sum_{i,j=0}^{m} H_i(z)H_j(z)e^{(\xi_i+\xi_j)\omega z^t}+a_1\sum_{l=0}^{m}H_l(z)e^{\xi_l\omega z^t}\nonumber\\&&\qquad+q(z)e^{Q_{t-1}(z)}\left[{\mathcal{A}_0(z)e^{v_tz^t}}+\sum_{h=1}^{m}\mathcal{A}_h(z)e^{(v_t+\xi_h\omega) z^t}\right]=P(z),\eea where $Q_{t-1}(z) =Q(z)-v_tz^t$ with $\deg Q_{t-1}(z)\leq t-1$, and $\mathcal{A}_0(z)=\sum_{i=0}^{k}b_iH^{(r_i)}_0(z+c_i)$, $\mathcal{A}_h(z)=\sum_{i=0}^{k}b_i\tilde{H}_h(z+c_i)e^{\omega_h(z+c_i)^t-\omega_hz^t}$, $h=1,2,\ldots,m$, such that the $\tilde{H}_h(z+c_i)$ are the delay-differential polynomials of $H_h(z)$.\\ Now we consider the following two cases to derive a contradiction.\\ {\bf{\underline{Case 1.}}} Let $\xi_m>0$. Note that $ \max\{\xi_i+\xi_j:i,j=0,1,\ldots,m\}=2\xi_m$.
Since $L(z,f)\not\equiv 0$, at least one of the $\mathcal{A}_h(z)$, $h=0,1,\ldots,m$, is non-vanishing.\\ {\bf{\underline{Case 1.1.}}} Let all $\mathcal{A}_h(z)\equiv 0$, $h=1,2,\ldots,m$. Then $\mathcal{A}_0(z)\not\equiv 0$, i.e., $H_0(z)\not\equiv 0$. If $2\xi_m\omega\neq v_t$, applying {\em Lemma \ref{l5}} on (\ref{e2.1}), we obtain $H_m^2(z)\equiv 0$, a contradiction. Next, let $2\xi_m\omega= v_t$. Since $\omega_i\neq 2\omega_j$, applying {\em Lemma \ref{l5}} on (\ref{e2.1}), we obtain $H_1^2(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Case 1.2.}}} Let at least one of the $\mathcal{A}_h(z)$, $h=1,2,\ldots,m$, be non-vanishing.\\ {\bf{\underline{Case 1.2.1.}}} Let $\mathcal{A}_0(z)\not\equiv 0$. Then by {\em Lemma \ref{l5}}, from (\ref{e2.1}), there exists one $h_0\in\{0,1,\ldots,m\}$ such that $2\xi_m\omega=v_t+\xi_{h_0}\omega$; otherwise, we have $H_m^2(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Case 1.2.1.1.}}} If $h_0=m$, then we have $v_t=\xi_m\omega$. Since $2\xi_i\neq \xi_j$ for $i,j=0,1, \ldots, m$, we have $2\xi_1\not\in\{\xi_i+\xi_j:0\leq i,j\leq m,\,(i,j)\neq(1,1)\}$ and $2\xi_1\not\in\{\xi_m+\xi_i:i=0,1,\ldots,m\}$. By {\em Lemma \ref{l5}}, we obtain $H_1^2(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Case 1.2.1.2.}}} If $h_0\in\{0,1,\ldots,m-1\}$, since $0=\xi_0<\xi_1<\xi_2<\cdots<\xi_{m-1}<\xi_m$ and $2\xi_i\neq \xi_j$, $i,j=0,1, \ldots, m$, then for $m >h_0$, \beas 2\xi_m-\xi_{h_0}+\xi_m>\max\{2\xi_m-\xi_{h_0}+\xi_i:i=0,1,\ldots,m-1\}.\eeas Also, $2\xi_m=\max\{\xi_i+\xi_j:i,j=0,1,\ldots,m\}$. In view of {\em Lemma \ref{l5}}, we obtain $q(z)e^{Q_{t-1}(z)}\mathcal{A}_m(z)\equiv 0$, i.e., $q(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Case 1.2.2.}}} Let $\mathcal{A}_0(z)\equiv 0$ and $H_0(z)\not\equiv 0$. Then (\ref{e2.1}) becomes \bea\label{e2.2} \sum_{i,j=0}^{m} H_i(z)H_j(z)e^{(\xi_i+\xi_j)\omega z^t}+a_1\sum_{l=0}^{m}H_l(z)e^{\xi_l\omega z^t}+q(z)e^{Q_{t-1}(z)}\sum_{h=1}^{m}\mathcal{A}_h(z)e^{(v_t+\xi_h\omega) z^t}=P(z).\eea As in {\em Case 1.2.1}, by {\em Lemma \ref{l5}}, from (\ref{e2.2}) there exists one $h_0\in\{1,2,\ldots,m\}$ such that $2\xi_m\omega=v_t+\xi_{h_0}\omega$, and proceeding as in {\em Case 1.2.1}, we get a contradiction.\\ {\bf{\underline{Case 1.2.3.}}} Let $\mathcal{A}_0(z)\equiv 0$ and $H_0(z)\equiv 0$. Then (\ref{e2.1}) becomes \bea\label{e2.3} \sum_{i,j=1}^{m} H_i(z)H_j(z)e^{(\xi_i+\xi_j)\omega z^t}+a_1\sum_{l=1}^{m}H_l(z)e^{\xi_l\omega z^t}+q(z)e^{Q_{t-1}(z)}\sum_{h=1}^{m}\mathcal{A}_h(z)e^{(v_t+\xi_h\omega) z^t}=P(z).\eea Again, as in {\em Case 1.2.1}, by {\em Lemma \ref{l5}}, from (\ref{e2.3}) there exists one $h_0\in\{1,2,\ldots,m\}$ such that $2\xi_m\omega=v_t+\xi_{h_0}\omega$, and adopting the same method as in {\em Case 1.2.1}, we get a contradiction.\\ {\bf{\underline{Case 2.}}} Let $\xi_m<0$. Note that $\min\{\xi_i+\xi_j:i,j=0,1,\ldots,m\}=2\xi_1.$ As in {\em Case 1}, we distinguish the following cases.\\ {\bf{\underline{Case 2.1.}}} Let all $\mathcal{A}_h(z)\equiv 0$, $h=1,2,\ldots,m$. Then $\mathcal{A}_0(z)\not\equiv 0$, i.e., $H_0(z)\not\equiv 0$. If $2\xi_1\omega\neq v_t$, applying {\em Lemma \ref{l5}} on (\ref{e2.1}), we obtain $H_1^2(z)\equiv 0$, a contradiction. Next, let $2\xi_1\omega= v_t$. Since $\omega_i\neq 2\omega_j$, applying {\em Lemma \ref{l5}} on (\ref{e2.1}), we obtain $H_m^2(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Case 2.2.}}} Let at least one of the $\mathcal{A}_h(z)$, $h=1,2,\ldots,m$, be non-vanishing.\\ {\bf{\underline{Case 2.2.1.}}} Let $\mathcal{A}_0(z)\not\equiv 0$.
Then by {\em Lemma \ref{l5}}, there exists one $h_0\in\{0,1,\ldots,m\}$ such that $2\xi_1\omega=v_t+\xi_{h_0}\omega$; otherwise, we have $H_1^2(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Case 2.2.1.1.}}} If $h_0=1$, then we have $v_t=\xi_1\omega$. Since $2\xi_i\neq\xi_j$, we have $2\xi_m\not\in\{\xi_i+\xi_j:0\leq i,j\leq m,\, (i,j)\neq (m,m)\}$ and $2\xi_m\not\in\{\xi_1+\xi_i:i=0,2,\ldots,m\}$. By {\em Lemma \ref{l5}}, we obtain $H_m^2(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Case 2.2.1.2.}}} If $h_0\in\{0,2,3,\ldots,m\}$, since $0=\xi_0<\xi_1<\xi_2<\cdots<\xi_{m-1}<\xi_m$ and $2\xi_i\neq \xi_j$, $i,j=0,1, \ldots, m$, then \beas 2\xi_1-\xi_{h_0}+\xi_1<\min\{2\xi_1-\xi_{h_0}+\xi_i:i=0,2,3,\ldots,m\}.\eeas Also, $\min\{\xi_i+\xi_j:i,j=0,1,\ldots,m\}=2\xi_1$. In view of {\em Lemma \ref{l5}}, we obtain $q(z)e^{Q_{t-1}(z)}\mathcal{A}_1(z)\equiv 0$, i.e., $q(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Case 2.2.2.}}} Let $\mathcal{A}_0(z)\equiv 0$ and $H_0(z)\not\equiv 0$. Then in this case we get equation (\ref{e2.2}). As in {\em Case 2.2.1}, by {\em Lemma \ref{l5}}, there exists one $h_0\in\{2,3,\ldots,m\}$ such that $2\xi_1\omega=v_t+\xi_{h_0}\omega$, and proceeding as in {\em Case 2.2.1}, we get a contradiction.\\ {\bf{\underline{Case 2.2.3.}}} Let $\mathcal{A}_0(z)\equiv 0$ and $H_0(z)\equiv 0$. Then we have equation (\ref{e2.3}). As in {\em Case 2.2.1}, by {\em Lemma \ref{l5}}, there exists one $h_0\in\{2,3,\ldots,m\}$ such that $2\xi_1\omega=v_t+\xi_{h_0}\omega$. Adopting the same method as in {\em Case 2.2.1}, we again get a contradiction. \end{proof} \begin{lem}\label{l9} If $m\geq 2$ and $\omega_i\neq 2\omega_j$ for any $i\neq j$, then $f$ of the form (\ref{e1.2}) is not a solution of (\ref{e1.8}) for $n=2$. \end{lem} \begin{proof} [\bf\underline{Proof}] Suppose, contrary to the assertion, that $f$ of the form (\ref{e1.2}) with $m\geq 2$ is a solution of (\ref{e1.8}) for $n=2$. Substituting $f$ of the form (\ref{e1.2}) into (\ref{e1.8}), we get \bea\label{e2.4}F(z)&=&f^2(z)+a_1f(z)-P(z)\nonumber\\&=& G(z)+\sum_{\substack{i,j=0\\ \omega_i+\omega_j\neq0}}^{m} H_i(z)H_j(z)e^{(\omega_i+\omega_j)z^t}+a_1\sum_{l=1}^{m}H_l(z)e^{\omega_lz^t},\eea where $G(z)=H_0(z)(H_0(z)+a_1)-P(z)$ is either an exponential polynomial of degree $<t$ or a polynomial in $z$. On the other hand, \bea\label{e2.5}F(z)=-q(z)e^{Q(z)}L(z,f)=-q(z)e^{Q(z)}\sum_{h=0}^{m}\mathcal{A}_h(z)e^{\omega_h z^t},\eea where $\mathcal{A}_h(z)$ is defined as in (\ref{e2.1}).\\ Now we set \beas X_1&=&\{\ol\omega_1,\ldots,\ol\omega_m,\ol\omega_i+\ol\omega_j:\ol\omega_i+\ol\omega_j\neq 0,i,j=1,\ldots,m\},\\ X_2&=& \{\ol\omega_1,\ldots,\ol\omega_m,2\ol\omega_1,\ldots,2\ol\omega_m\},\\ X_3&=&\{2\ol\omega_1,\ldots,2\ol\omega_m\}.\eeas Clearly, by the theory of convexity, we have $\ol\omega_i+\ol\omega_j=\frac{1}{2}\cdot2\ol\omega_i+\left(1-\frac{1}{2}\right)\cdot2\ol\omega_j$, i.e., $co(X_1)=co(X_2)$. Since $X_3\subset X_2$, we have $C(co(X_3))\leq C(co(X_2))$.\\ Next, we consider the following cases to derive a contradiction.\\ {\bf{\underline{Case 1.}}} If all $\mathcal{A}_h(z)\equiv 0$ for $h=1,\ldots,m$, then we have $\mathcal{A}_0(z)\not\equiv 0$, which implies $H_0(z)\not\equiv 0$. Then (\ref{e2.5}) becomes $F(z)=-q(z)e^{Q(z)}\mathcal{A}_0(z)$. Applying {\em Lemma \ref{l7}}, we get \bea\label{e2.6} N\left(r,\frac{1}{F(z)}\right)=N\left(r,\frac{1}{\mathcal{A}_0(z)}\right)=o(r^t).\eea {\bf{\underline{Sub-case 1.1.}}} Let $G(z)\equiv 0$.
Applying {\em Lemma \ref{l7}} on (\ref{e2.4}), we have \bea\label{e2.7} N\left(r,\frac{1}{F(z)}\right)=C(co(X_1))\frac{r^t}{2\pi}+o(r^t).\eea Therefore, (\ref{e2.6}) and (\ref{e2.7}) yield a contradiction.\\ {\bf{\underline{Sub-case 1.2.}}} Let $G(z)\not\equiv 0$. Applying {\em Lemma \ref{l7}} on (\ref{e2.4}), we have $m\left(r,\frac{1}{F(z)}\right)=o(r^t)$, and then \bea\label{e2.8} m\left(r,\frac{1}{F(z)}\right)+N\left(r,\frac{1}{F(z)}\right)&=&T(r,F(z))+O(1)=2T(r,f(z))+S(r,f)\nonumber\\&=&2\left(C(co(W_0))\frac{r^t}{2\pi}+o(r^t)\right)+S(r,f)\nonumber\\\implies N\left(r,\frac{1}{F(z)}\right)&=&2C(co(W_0))\frac{r^t}{2\pi}+o(r^t).\eea Therefore, using (\ref{e2.6}) and (\ref{e2.8}), we get a contradiction.\\ {\bf{\underline{Case 2.}}} Let there exist some $h_0\in\{1,2,\ldots,m\}$ such that $\mathcal{A}_{h_0}(z)\not\equiv 0$. Now, we define the sets \beas V=\{\ol\omega_{h_0}:h_0\in\{1,2,\ldots,m\}\text{ for which }\mathcal{A}_{h_0}(z)\not\equiv 0\}\text{ and }V_0=V\cup\{0\}.\eeas Since $V\subseteq W$ and $V_0\subseteq W_0$, we have $C(co(V))\leq C(co(W))$ and $C(co(V_0))\leq C(co(W_0))$, respectively.\\ {\bf{\underline{Case 2.1.}}} Let $\mathcal{A}_0(z)\equiv 0$ and $H_0(z)\equiv 0$. Then using {\em Lemma \ref{l7}} on (\ref{e2.5}), we have \bea\label{e2.9} N\left(r,\frac{1}{F(z)}\right)=N\left(r,\frac{1}{L(z,f)}\right)+O(\log r)=C(co(V))\frac{r^t}{2\pi}+o(r^t).\eea {\bf{\underline{Case 2.1.1.}}} If $G(z)\not\equiv 0$, then as in {\em Sub-case 1.2} we obtain equation (\ref{e2.8}). From (\ref{e2.8}) and (\ref{e2.9}), we obtain a contradiction from $C(co(W_0))\geq C(co(V_0))\geq C(co(V))=2C(co(W_0))$.\\ {\bf{\underline{Case 2.1.2.}}} If $G(z)\equiv 0$, using {\em Lemma \ref{l7}} on (\ref{e2.4}), we have equation (\ref{e2.7}). Using (\ref{e2.7}) and (\ref{e2.9}), from $C(co(W))\geq C(co(V))=C(co(X_1))=C(co(X_2))\geq C(co(X_3))=2C(co(W))$, we get a contradiction.\\ {\bf{\underline{Case 2.2.}}} Let $\mathcal{A}_0(z)\equiv 0$ and $H_0(z)\not\equiv 0$. Then proceeding as in {\em Case 2.1}, we get a contradiction.\\ {\bf{\underline{Case 2.3.}}} Let $\mathcal{A}_0(z)\not\equiv 0$, which implies $H_0(z)\not\equiv 0 $. Then using {\em Lemma \ref{l7}} on (\ref{e2.5}), we have $m\left(r,\frac{1}{L(z,f)}\right)=o(r^t)$, and then \bea\label{e2.10} N\left(r,\frac{1}{F(z)}\right)&=&N\left(r,\frac{1}{L(z,f)}\right)+O(\log r)\nonumber\\&=&T(r,L(z,f))+o(r^t)=C(co(V_0))\frac{r^t}{2\pi}+o(r^t).\eea {\bf{\underline{Case 2.3.1.}}} If $G(z)\not\equiv 0$, then from (\ref{e2.8}) and (\ref{e2.10}), we get $C(co(W_0))\geq C(co(V_0))=2C(co(W_0))$, a contradiction.\\ {\bf{\underline{Case 2.3.2.}}} If $G(z)\equiv 0$, using (\ref{e2.7}) and (\ref{e2.10}), we get \bea\label{e2.11}C(co(V_0))=C(co(X_1)).\eea Now, since $m\geq 2$, by {\em Lemma \ref{l8}}, $co(W_0)$ cannot be a line segment. Therefore, $co(W_0)$ must be a polygon with non-empty interior. If $0$ is not a boundary point of $co(W_0)$, then we have $co(W_0) = co(W)$, and hence $C(co(W))=C(co(W_0))\geq C(co(V_0))=C(co(X_1))=C(co(X_2))\geq C(co(X_3))=2C(co(W))$, a contradiction. So $0$ is a boundary point of $co(W_0)$. Let the non-zero corner points of $co(W_0)$, chosen among the points $\ol\omega_1,\ldots,\ol\omega_m$, be $u_1,\ldots,u_s$, $s\leq m$, ordered so that $0\leq \arg(u_i)\leq\arg(u_{i+1})\leq2\pi$ for $1\leq i\leq s-1$. Hence, \bea\label{e2.12} C(co(W_0))=|u_1|+|u_2-u_1|+\cdots+|u_s-u_{s-1}|+|u_s|. \eea Let $X_4=\{u_1,2u_1,2u_2,\ldots,2u_s,u_s\}$.
Therefore, the points $2u_1,2u_2,\ldots,2u_s$ are corner points of $co(X_4)$. However, since $s\leq m$, $co(X_4)$ may have more corner points. Then, using (\ref{e2.12}), we have \beas C(co(X_3))>C(co(X_4))&>&|2u_1-u_1|+|2u_2-2u_1|+\cdots+|2u_s-2u_{s-1}|+|u_s-2u_s|\nonumber\\&=& |u_1|+2|u_2-u_1|+\cdots+2|u_s-u_{s-1}|+|u_s|\nonumber\\&>&|u_1|+|u_2-u_1|+\cdots+|u_s-u_{s-1}|+|u_s|\nonumber\\&=&C(co(W_0)).\eeas Therefore, $$C(co(X_1))=C(co(X_2))\geq C(co(X_3))>C(co(W_0))\geq C(co(V_0)),$$ which contradicts (\ref{e2.11}). This completes the proof. \end{proof} \begin{lem}\label{l10} Let $f$ be given by (\ref{e1.2}), which is a solution of (\ref{e1.8}) for $n=2$. Then $f$ takes the form $f(z) = H_0(z) + H_1(z)e^{\omega_1z^t}$, i.e., $f\in\Gamma_1'$. In this case, \begin{itemize} \item [(I)] either $t=1$, $\rho(f)=1$, $H_0(z)$, $H_1(z)$ are polynomials and $Q(z)$ is a polynomial of degree $1$; \item [(II)] or $H_0(z)=-\frac{a_1}{2}$, $P(z)=-\frac{a_1^2}{4}$, $H_1^2(z)=\frac{b_0a_1}{2}q(z)e^{Q_{t-1}(z)}$ and $L(z,f)=b_0H_0(z)$; \item [(III)] or $H_0(z)=-\frac{a_1}{2}$, $P(z)=-\frac{a_1^2}{4}$, $H_1^2(z)=-q(z)e^{Q_{t-1}(z)}\mathcal{A}_1(z)$ and $L(z,f)=\mathcal{A}_1(z)e^{\omega_1z^t}$, where $\mathcal{A}_1(z)=\sum_{i=0}^{k}b_i\tilde{H}_1(z+c_i)e^{\omega_1(z+c_i)^t-\omega_1z^t}$ such that the $\tilde{H}_1(z+c_i)$ are the delay-differential polynomials of $H_1(z)$. \end{itemize} \end{lem} \begin{proof} [\bf\underline{Proof}] For $n=2$, (\ref{e1.8}) becomes \bea\label{e2.13} f^{2}(z)+a_{1}f(z)+q(z)e^{Q(z)}L(z,f)=P(z).\eea By {\em Lemma \ref{l9}}, we have $m=1$, i.e., (\ref{e1.2}) becomes \bea\label{e2.14}f(z) = H_0(z) + H_1(z)e^{\omega_1z^t},\eea where $H_0(z)$, $H_1(z)(\not\equiv 0)$ are either exponential polynomials of order $<t$ or ordinary polynomials in $z$. Substituting (\ref{e2.14}) into (\ref{e2.13}), we have \bea\label{e2.15} &&H_1(z)(2H_0(z)+a_1)e^{\omega_1z^t} + H_1^2(z)e^{2\omega_1z^t} +q(z)e^{Q_{t-1}(z)}{\mathcal{A}_0(z)e^{v_tz^t}}\nonumber\\&&\quad+q(z)e^{Q_{t-1}(z)}\mathcal{A}_1(z)e^{(v_t+\omega_1) z^t}=P(z)-H_0(z)(H_0(z)+a_1),\eea where $Q_{t-1}(z) =Q(z)-v_tz^t$ with $\deg Q_{t-1}(z)\leq t-1$, and $\mathcal{A}_0(z)=\sum_{i=0}^{k}b_iH^{(r_i)}_0(z+c_i)$, $\mathcal{A}_1(z)=\sum_{i=0}^{k}b_i\tilde{H}_1(z+c_i)e^{\omega_1(z+c_i)^t-\omega_1z^t}$ such that the $\tilde{H}_1(z+c_i)$ are the delay-differential polynomials of $H_1(z)$. Since $L(z,f)\not\equiv 0$, at least one of $\mathcal{A}_0(z)$ and $\mathcal{A}_1(z)$ is non-vanishing. Next, we distinguish the following cases.\\ {\bf{\underline{Case 1.}}} Let $\mathcal{A}_1(z)\equiv 0$. Then $\mathcal{A}_0(z)\not\equiv 0$, which implies $H_0(z)\not\equiv 0$.\\ If $v_t\neq \omega_1,2\omega_1$ or $v_t=\omega_1$, applying {\em Lemma \ref{l5}} on (\ref{e2.15}), we have $H_1^2(z)\equiv 0$, a contradiction.\\ If $v_t=2\omega_1$, then applying {\em Lemma \ref{l5}} on (\ref{e2.15}), we have $$H_1(z)(2H_0(z)+a_1)= 0,$$ $$ H_1^2(z)+q(z)e^{Q_{t-1}(z)}{\mathcal{A}_0(z)}=0,$$ $$P(z)-H_0(z)(H_0(z)+a_1)=0.$$ Since $H_1(z)\not\equiv 0$, solving these three equations we obtain $H_0(z)=-\frac{a_1}{2}$, $P(z)=-\frac{a_1^2}{4}$ and $H_1^2(z)=\frac{b_0a_1}{2}q(z)e^{Q_{t-1}(z)}$. Therefore, in this case $L(z,f)=b_0H_0(z)$.\\ {\bf{\underline{Case 2.}}} Let $\mathcal{A}_0(z)\equiv 0$. Then $\mathcal{A}_1(z)\not\equiv 0$.\\ {\bf{\underline{Sub-case 2.1.}}} Let $H_0(z)\equiv 0$.
If $v_t=\pm\omega_1$, using {\em Lemma \ref{l5}} on (\ref{e2.15}), we have $H_1(z)\equiv 0$, a contradiction.\\ {\bf{\underline{Sub-case 2.2.}}} Let $H_0(z)\not\equiv 0$. If $v_t=-\omega_1$, in view of {\em Lemma \ref{l5}}, from (\ref{e2.15}) we have $H_1(z)\equiv 0$, a contradiction. If $v_t=\omega_1$, proceeding as in {\em Case 1}, we have $H_0(z)=-\frac{a_1}{2}$, $P(z)=-\frac{a_1^2}{4}$ and $H_1^2(z)=-q(z)e^{Q_{t-1}(z)}\mathcal{A}_1(z)$. Also, in this case $L(z,f)=\mathcal{A}_1(z)e^{\omega_1z^t}$.\\ {\bf{\underline{Case 3.}}} Let $\mathcal{A}_0(z)\not\equiv 0$ and $\mathcal{A}_1(z)\not\equiv 0$, which implies $H_0(z)\not\equiv 0$.\\ If $v_t\neq\omega_1,2\omega_1$, using {\em Lemma \ref{l5}} on (\ref{e2.15}), we have $H_1(z)\equiv 0$, a contradiction. If $v_t=2\omega_1$, by {\em Lemma \ref{l5}}, from (\ref{e2.15}), we have $\mathcal{A}_1(z)\equiv 0$, a contradiction.\\ If $v_t=\omega_1$, applying {\em Lemma \ref{l5}} on (\ref{e2.15}), we have \bea\label{e2.16}H_1(z)(2H_0(z)+a_1)+q(z)e^{Q_{t-1}(z)}\sum_{i=0}^{k}b_iH^{(r_i)}_0(z+c_i)=0,\eea \bea\label{e2.17}H_1^2(z)+q(z)e^{Q_{t-1}(z)}\sum_{i=0}^{k}b_i\tilde{H}_1(z+c_i)e^{\omega_1(z+c_i)^t-\omega_1z^t}=0,\eea \bea\label{e2.18}P(z)-H_0(z)(H_0(z)+a_1)=0.\eea Now, we show that $H_0(z)$ is a polynomial. Suppose, if possible, that $H_0(z)$ is transcendental. Then from (\ref{e2.18}), we have $$2T(r,H_0(z))+S(r,H_0(z))=T(r,P)=O(\log r),$$ a contradiction.\\ Next, from (\ref{e2.16}), we have \bea\label{e2.19} H_1(z)=\beta(z)e^{Q_{t-1}(z)},\eea where $\beta(z)=-\frac{q(z)\sum_{i=0}^{k}b_iH^{(r_i)}_0(z+c_i)}{2H_0(z)+a_1}$. Since $f$ is entire, $\beta(z)$ must be a polynomial. Substituting (\ref{e2.19}) into (\ref{e2.17}), we have \bea\label{e2.20} \beta^2(z)+q(z)\sum_{i=0}^{k}b_i\tilde{\beta}(z+c_i)e^{Q_{t-1}(z+c_i)-Q_{t-1}(z)+\omega_1(z+c_i)^t-\omega_1z^t}=0, \eea where $\tilde{\beta}(z+c_i)$ is a delay-differential polynomial in $H_0(z)$ and $Q_{t-1}(z)$.\\ Note that $\deg(\omega_1(z+c_i)^t-\omega_1z^t)=t-1$ and $\deg(Q_{t-1}(z+c_i)-Q_{t-1}(z))\leq t-2$. If $t\geq 2$, applying {\em Lemma \ref{l5}} on (\ref{e2.20}), we have $q(z)\equiv 0$, a contradiction. Therefore, $t=1$, i.e., $H_1(z)$ is a polynomial. Hence, $f(z)$ reduces to the form $$f(z) = H_0(z) + H_1(z)e^{\omega_1z},$$ where $H_0(z)$ and $H_1(z)$ are polynomials. So, $f\in\Gamma_1'$. \end{proof} \section{Proofs of Theorems} \begin{proof} [\bf\underline{Proof of Theorem \ref{t1.1} (i)}] Suppose that $f$ is a non-vanishing finite-order entire solution of (\ref{e1.8}). Using {\em Lemma \ref{l5}}, $f$ must be transcendental; otherwise we would get $L(z,f)\equiv 0$, which yields a contradiction. In view of {\em Lemma \ref{l3}}, from (\ref{e1.8}) we get \bea\label{e3.1} n~T(r, f)+S(r, f)&=&m\left(r, f^{n}(z)+\sum_{i=1}^{n-1}a_{i}f^{i}(z)\right)\nonumber\\&=&m\left(r,P(z)-q(z)e^{Q(z)}L(z,f)\right)\nonumber\\&\leq&m(r,e^{Q(z)})+m(r,L(z,f))+S(r,f)\nonumber\\&\leq&m(r,e^{Q(z)})+m\left(r,\frac{L(z,f)}{f(z)}\right)+m(r,f(z))+S(r,f)\nonumber\\&=&T(r,e^{Q(z)})+T(r,f(z))+S(r,f)\nonumber\\\implies (n-1)~T(r, f)&\leq& T(r,e^{Q(z)})+S(r,f).\eea Therefore, for $n\geq 2$, $\rho(f)\leq \deg{Q(z)}$. If $\rho(f)< \deg{Q(z)}$, then comparing the growth of the two sides of (\ref{e1.8}) yields a contradiction. So $\rho(f)= \deg{Q(z)}$. Now, from the definition of the type, we have $$\tau(f)=\ol\lim_{r\rightarrow\infty}\frac{T(r,f)}{r^{\rho(f)}}=\ol\lim_{r\rightarrow\infty}\frac{T(r,f)}{r^{\deg{Q(z)}}}\in(0,\infty),$$ i.e., $f$ is of mean type.
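For instance, the solution $f(z)=e^{2z}$ of the equation in the first example of Section 1, where $Q(z)=2z$, satisfies $T(r,f)=\frac{2r}{\pi}$, so that $\rho(f)=1=\deg Q$ and $\tau(f)=\frac{2}{\pi}\in(0,\infty)$, as claimed.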
\end{proof} \begin{proof} [\bf\underline{Proof of Theorem \ref{t1.1} (ii)}] First, we prove that if zero is a Borel exceptional value of $f(z)$, then $a_{n-1}=\cdots=a_1=P(z)\equiv 0$. Adopting a process similar to that in the proof of Theorem 1.2(b) in \cite{Li-Yang_jmaa_2017}, using {\em Lemmas \ref{l3}, \ref{l4}, \ref{l6}} and replacing $f(z+c)$ by $L(z,f)$, we can prove our result. In this regard, only equation (10) of \cite{Li-Yang_jmaa_2017} has to be replaced by the following estimate: \beas N\left(r,\frac{1}{G(z)}\right)&=&N\left(r,\frac{1}{q(z)e^{Q(z)}L(z,f)}\right)\nonumber\\&\leq& N\left(r,\frac{1}{q(z)}\right)+N\left(r,\frac{1}{L(z,f)}\right)+S(r,f)\nonumber\\&\leq& (k+1)~N\left(r,\frac{1}{f(z)}\right)+S(r,f)=S(r,f).\eeas Next, we prove the converse part of {\em Theorem \ref{t1.1} (ii)}. For this, using {\em Lemmas \ref{l3}, \ref{l6}} and replacing $f(z+c)$ by $L(z,f)$, we proceed as in the proof of Theorem 1.2(c) in \cite{Li-Yang_jmaa_2017} up to equation (17) there. Here, equations (15), (16) and (17) of \cite{Li-Yang_jmaa_2017}, respectively, become \bea\label{e3.2}T\left(r,\frac{L(z,f)}{f(z)}\right)=S(r,f),\eea \bea\label{e3.3}\left(f(z)+\frac{a_{n-1}}{n-1}\right)^{n-1}=f^{n-1}(z)+\sum_{i=1}^{n-1}a_{i}f^{i-1}(z)=-q(z)e^{Q(z)}\frac{L(z,f)}{f(z)}\eea and \bea\label{e3.4} a_{i+1}=\frac{(n-1)!}{i!(n-1-i)!}\left(\frac{a_{n-1}}{n-1}\right)^{n-1-i},\;\;i=0,1,\ldots,n-2.\eea {\bf Case 1.} If there exists $i_0\in\{1,\ldots, n-1\}$ such that $a_{i_0}=0$, then from (\ref{e3.4}) all of the $a_{i}$ must be equal to zero for $i=1,2,\ldots, n-1$. Therefore, (\ref{e3.3}) becomes \beas f(z)^{n-1}=-q(z)e^{Q(z)}\frac{L(z,f)}{f(z)}.\eeas By using (\ref{e3.2}), we have \beas (n-1)N\left(r,\frac{1}{f(z)}\right)&=&N\left(r,\frac{1}{q(z)\frac{L(z,f)}{f(z)}}\right)\\&\leq& N\left(r,\frac{1}{q(z)}\right)+ N\left(r,\frac{1}{\frac{L(z,f)}{f(z)}}\right)+S(r,f)=S(r,f).\eeas Therefore, $\lambda(f)<\rho(f)$. {\bf Case 2.} If there exists no $i_0\in\{1,\ldots, n-1\}$ such that $a_{i_0}=0$, then from (\ref{e3.2}) and (\ref{e3.3}), we have \beas \ol N\left(r,\frac{1}{f(z)+\frac{a_{n-1}}{n-1}}\right)\leq \ol N\left(r, \frac{1}{q(z)} \right)+\ol N\left(r, \frac{1}{\frac{L(z,f)}{f(z)}} \right)+S(r,f)=S(r,f).\eeas Using the second main theorem, we have \beas T(r,f)&\leq& \ol N\left(r,\frac{1}{f(z)+\frac{a_{n-1}}{n-1}}\right)+\ol N\left(r,\frac{1}{f(z)}\right)+\ol N\left(r,f(z)\right)+S(r,f)\\&=&\ol N\left(r,\frac{1}{f(z)}\right)+S(r,f).\eeas Therefore, $\rho(f)\leq\lambda(f)$; since always $\lambda(f)\leq\rho(f)$, we conclude that $\lambda(f)=\rho(f)$. \end{proof} \begin{proof} [\bf\underline{Proof of Theorem \ref{t1.1} (iii)}] Suppose that $f$ is a non-vanishing finite-order entire solution of (\ref{e1.8}). As in the proof of {\em Theorem \ref{t1.1} (i)}, $f$ is transcendental.\par First suppose that $f$ belongs to $\Gamma_0'$, which means that $0$ is a Borel exceptional value of $f$. Thus, from {\em Theorem \ref{t1.1} (ii)}, we have $a_{n-1}=\cdots=a_1=0\equiv P(z)$.\par Next, suppose that $P(z) \equiv 0$ and there exists an $i_0\in\{1, \ldots, n-1\}$ such that $a_{i_0}=0$. Then, from the converse part of {\em Theorem \ref{t1.1} (ii)}, all of the $a_i$ $(i=1,\ldots, n-1)$ must be zero and $\lambda(f) <\rho(f)$.
From the Hadamard factorization theorem, we can write \bea\label{e3.5}f(z)=h(z)e^{\alpha(z)},\eea where $\alpha(z)$ is a polynomial with $\deg{\alpha(z)}=\rho(f)=\deg{Q(z)}=t$, and $h(z)$ is the canonical product formed by the zeros of $f$, with $\rho(h)=\lambda(h)=\lambda(f)<\rho(f)$.\\ Substituting (\ref{e3.5}) into (\ref{e1.8}) with all $a_i=0$, we have \bea\label{e3.6} h^n(z)e^{n \alpha(z)}+q(z)e^{Q(z)+\alpha(z)}\left(\sum_{i=0}^{k}L_i(z,h)e^{\Delta_{c_i}\alpha(z)}\right)=0,\eea where \beas L_i(z,h)&=&b_i\left[h(z+c_i)M_{r_i}(\alpha'(z+c_i), \alpha''(z+c_i),\ldots,\alpha^{(r_i)}(z+c_i))\right.\\&&\qquad\left.+h'(z+c_i)M_{r_i-1}(\alpha'(z + c_i), \alpha''(z+c_i),\ldots,\alpha^{(r_i-1)}(z+c_i))\right.\\&&\qquad\left.+ \cdots +h^{(r_i-1)}(z+c_i)M_1(\alpha'(z+c_i))+h^{(r_i)}(z+c_i)\right].\eeas Clearly, $\rho(L_i(z,h))< t$. Rewriting (\ref{e3.6}), we have \bea\label{e3.7} h^n(z)e^{n \alpha_{t-1}(z)}e^{n u_tz^t}+q(z)e^{\alpha_{t-1}(z)+Q_{t-1}(z)}\left(\sum_{i=0}^{k}L_i(z,h)e^{\Delta_{c_i}\alpha(z)}\right)e^{(u_t+v_t)z^t}=0,\eea such that $\alpha(z)=u_tz^t+\alpha_{t-1}(z)$ and $Q(z)=v_tz^t+Q_{t-1}(z)$, where $u_t$, $v_t$ are non-zero constants and $\alpha_{t-1}(z)$, $Q_{t-1}(z)$ are of degree $\leq t-1$.\\ In view of {\em Lemma \ref{l5}}, (\ref{e3.7}) is possible only when $(n-1)u_t=v_t$. Therefore, (\ref{e3.7}) becomes \bea\label{e3.8} h^n(z)+q(z)e^{(1-n)\alpha_{t-1}(z)+Q_{t-1}(z)}\left(\sum_{i=0}^{k}L_i(z,h)e^{\Delta_{c_i}\alpha(z)}\right)=0.\eea Here, the following cases arise.\\ {\bf\underline{Case 1:}} Let $\rho(h)<t-1$ and $t-1>0$. If $\deg\{(1-n)\alpha_{t-1}(z)+Q_{t-1}(z)\}= t-1$, applying {\em Lemma \ref{l5}}, we have $q(z)\equiv 0$, a contradiction. If $\deg\{(1-n)\alpha_{t-1}(z)+Q_{t-1}(z)\}< t-1$, from $\deg \{\Delta_{c_i}\alpha(z)\}=t-1$, by using {\em Lemma \ref{l5}}, again we have $q(z)\equiv 0$, a contradiction.\\ {\bf\underline{Case 2:}} Let $t-1\leq\rho(h)<t$ and $t-1>0$. By the logarithmic derivative lemma \cite[Corollary 2.5]{Chiang-Feng}, for each $\epsilon>0$, we have \bea\label{e3.9} m\left(r,\frac{L_i(z,h)}{h(z)}\right)=O(r^{\rho(h)-1+\epsilon})+O(\log r).\eea Since $h$ is entire, using (\ref{e3.8}) and (\ref{e3.9}), we have \bea\label{e3.10} T\left(r,\sum_{i=0}^{k}\frac{L_i(z,h)}{h}e^{\Delta_{c_i}\alpha(z)}\right)&=&m\left(r,\sum_{i=0}^{k}\frac{L_i(z,h)}{h}e^{\Delta_{c_i}\alpha(z)}\right)+N\left(r,\sum_{i=0}^{k}\frac{L_i(z,h)}{h}e^{\Delta_{c_i}\alpha(z)}\right)\nonumber\\&\leq& \sum\limits_{i=0}^{k}T\left(r,e^{\Delta_{c_i}\alpha(z)}\right)+O(r^{\rho(h)-1+\epsilon})+O(\log r).\eea Therefore, using (\ref{e3.8}) and (\ref{e3.10}), we obtain for each $\epsilon>0$ \bea\label{e3.11} (n-1)N\left(r,\frac{1}{h(z)}\right)&\leq& N\left(r,\frac{1}{\sum_{i=0}^{k}\frac{L_i(z,h)}{h}e^{\Delta_{c_i}\alpha(z)}}\right)+O(\log r)\nonumber\\&\leq& T\left(r,\sum_{i=0}^{k}\frac{L_i(z,h)}{h}e^{\Delta_{c_i}\alpha(z)}\right)+O(\log r)\nonumber\\&\leq& \sum\limits_{i=0}^{k}T\left(r,e^{\Delta_{c_i}\alpha(z)}\right)+O(r^{\rho(h)-1+\epsilon})+O(\log r).\eea Now, if $c_i=c_j$ for all $1\leq i,j\leq k$, say $c_i=c$, then (\ref{e3.11}) becomes \beas (n-1)N\left(r,\frac{1}{h(z)}\right)\leq T\left(r,e^{\Delta_{c}\alpha(z)}\right)+O(r^{\rho(h)-1+\epsilon})+O(\log r).\eeas Thus, from the above inequality, we have $\lambda(h)\leq t-1$. But in this case $\lambda(h)=\rho(h)\geq t-1$. Therefore, $\lambda(f)=\lambda(h)=t-1=\rho(f)-1$.\par If $t-1=0$, we get $\rho(h)=\lambda(h)<\rho(f)=t=1$. If $h(z)$ is transcendental, then $h(z)$ has infinitely many zeros.
Now, noting that $\Delta_{c_i}\alpha(z)$ is of degree $t-1$, from (\ref{e3.11}) we get $N\left(r,\frac{1}{h(z)}\right)=O(\log r)$, a contradiction. Therefore, $h(z)$ must be a polynomial, and so $f$ belongs to $\Gamma_0'$.\\ {\bf\underline{Case 3:}} Let $\rho(h)\geq t$. Then from (\ref{e3.11}), we obtain $\lambda(h)<\rho(h)$, a contradiction. So $h(z)$ must be a polynomial, and therefore $f$ belongs to $\Gamma_0'$. \end{proof} \begin{proof} [\bf\underline{Proof of Theorem \ref{t1.1} (iv)}] Suppose that $f$ is a non-vanishing finite-order entire solution of (\ref{e1.8}). As before, $f$ is transcendental. Suppose, if possible, that $P(z)\not\equiv 0$. By the assumption $ card \{z: p(z) =p'(z) =p''(z) =0\} \geq 1$ or $ card \{z: p(z) =p'(z) =0\} \geq 2$, where $p(z) =z^n+a_{n-1}z^{n-1}+\cdots+a_1z$, we mean that $p(z)$ has at least one zero of multiplicity at least three, or at least two zeros of multiplicity at least two. In view of {\em Lemma \ref{l3}} and the second main theorem, we have \beas n~T(r,f)&=&T(r,f^n(z)+a_{n-1}f^{n-1}(z)+\cdots+a_1f(z))+S(r,f)\\&\leq&\ol N\left(r,\frac{1}{f^n+a_{n-1}f^{n-1}+\cdots+a_1f-P(z)}\right)\\&&+\ol N\left(r,\frac{1}{f^n+a_{n-1}f^{n-1}+\cdots+a_1f}\right)+\ol N(r,f^n+a_{n-1}f^{n-1}+\cdots+a_1f)+S(r,f)\\&\leq&\ol N\left(r,\frac{1}{q(z)L(z,f)}\right)+(n-2)~T(r,f)+S(r,f) \\&\leq& T\left(r,q(z)L(z,f)\right)+(n-2)~T(r,f)+S(r,f)\\&\leq& N\left(r,q(z)L(z,f)\right)+m\left(r,q(z)L(z,f)\right)+(n-2)~T(r,f)+S(r,f)\\&\leq& m\left(r,q(z)\frac{L(z,f)}{f(z)}\right)+m(r,f(z))+(n-2)~T(r,f)+S(r,f)\\&\leq& T(r,f)+(n-2)~T(r,f)+S(r,f)\\&=&(n-1)~T(r,f)+S(r,f), \eeas a contradiction. Therefore, $P(z)\equiv 0$.\par Since at least one $a_{i_0}=0$ $(i_0\in\{1,2, \ldots, n-1\})$, using {\em Theorem \ref{t1.1} (iii)}, we have $f\in\Gamma_0'$.\\ Note that, since $P(z)\equiv 0$ and at least one $a_{i_0}=0$ $(i_0\in\{1,2, \ldots, n-1\})$, from {\em Theorem \ref{t1.1} (ii)} all of the $a_j$ $(j=1,\ldots,n-1)$ must be zero. Therefore $p(z)=z^n$, which shows that $ card \{z: p(z) =p'(z) =0\} \geq 2$ is not possible. \end{proof} \begin{proof} [\bf\underline{Proof of Theorem \ref{t1.1} (v)}] From {\em Theorem \ref{t1.1} (i)}, we have $\rho(f)=\deg{Q(z)}$. Using {\em Lemmas \ref{l8}-\ref{l10}}, the result follows. \end{proof}
\section{Introduction}\label{sec:introduction} Musical instrument recognition is a machine learning task that aims to label audio recordings of musical instruments, typically at a fine temporal granularity (second by second)~\cite{fu2010survey,eronen2000musical,krishna2004music}. Musical instrument recognition can be viewed as a subtask of Sound Event Detection (SED), which consists of identifying and locating any type of sound event (\textit{e.g.}, car horn, dog bark) in an audio recording~\cite{mesaros2016metrics,mesaros2016tut,salamon2017deep}. Labelling audio tracks is extremely important for organizing the dozens of tracks in a typical Digital Audio Workstation (DAW) recording session~\cite{savage2011art,owsinski2013mixing}, but manual labelling is a tedious process. Automated musical instrument recognition could enable automated track labeling. Automated second-by-second labeling could go further, enabling navigation through recording projects by traversing musical instrument \textit{labels}, rather than waveform visualizations. This would be especially helpful for audio engineers with low or no vision, as existing interfaces leave accessibility as an afterthought \cite{saha-vision-2020} and navigating by visually examining waveforms is not a viable option for them \cite{tanaka2016haptic}. A barrier to incorporating instrument recognition into DAWs is that most existing deep learning techniques must be trained on instruments that have abundant labeled training data. The datasets that support these systems only focus on the limited set of instrument classes that have sufficient data~\cite{bosch2012comparison,humphrey-openmic-2018,hung-timbre-2018,gururani-attention-2019,hung-multitask-2019,taenzer2019investigating, kratimenos2021augmentation}. However, the vast diversity of musical instrument sounds necessitates supporting a broader set of instrument classes~\cite{lostanlen2018extended}. While expanding current datasets with more diverse coverage can ameliorate this issue, collecting human annotations for a large number of audio files is a tedious, time-consuming task \cite{kim-ised-2017, cartwright2019crowdsourcing}, and there will always be unanticipated sound categories that an end-user would like to automatically label. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{figs/hier_prototypes_v2.png} \caption{Overview of our method. Prototypes from a set of embedded support examples at a fine-grained level (bottom left) are aggregated to make a set of \textit{metaprototypes} at a coarser-grained level (top left). In this way, we learn a hierarchical set of prototypes that corresponds to a musical instrument hierarchy (right).} \label{fig:hier_prototypes} \vspace{-12pt} \end{figure} Therefore, musical instrument recognition systems should be able to dynamically expand their vocabularies after deployment, to conform to end-user needs. This requires an approach that lets a system learn a new sound category given only a few examples that can be provided by an end user, \textit{a la} few-shot learning.
Using a hierarchical system to organize and classify musical instruments has broad precedent in many human cultures~\cite{kartomi1990concepts}. We can take advantage of a musical instrument hierarchy, like the widely-used Hornbostel-Sachs hierarchy~\cite{hornbostel-classification-1961}, to improve few-shot learning. A system could learn a feature space meaningful for unseen classes that share hierarchical ancestry with the classes seen during training. For example, the Chinese zhongruan is a plucked string instrument that shares ancestry with other chordophones in the Hornbostel-Sachs hierarchy (like the guitar), which might be more common in datasets of Western instruments. A model could leverage the hierarchical relationship between an instrument it has never been trained on (\textit{e.g.} the zhongruan) and more common instruments seen during training (\textit{e.g.} the guitar) to produce a meaningful representation of the new instrument with only a few support examples. In this work, we propose a simple extension to prototypical networks \cite{snell-prototypical-2017} that imposes a hierarchical structure on the learned embedding space (Figure \ref{fig:hier_prototypes}). We first create prototypes from an initial set of embedded support examples at the most granular level. We then aggregate these initial prototypes into new prototypes corresponding to a coarser hierarchical level, in a manner reminiscent of agglomerative clustering~\cite{oded2006datamining}. Repeating this process lets our system represent classes at many granularities of a predefined instrument hierarchy. We also propose a weighted, hierarchical extension of cross-entropy loss to ensure the network learns the hierarchy. Compared to a non-hierarchical few-shot baseline~\cite{wang-fewshotsed-2020}, our method shows a significant increase in classification accuracy and a significant decrease in mistake severity on unseen instrument classes. \vspace{-12pt} \section{Related Work}\label{sec:related_work} Musical instrument recognition can be performed in single-source contexts \cite{benetos-nmf-2006, eronen-mfcc-2000, lostanlen-spiral-2016, essid-hierarchicalsolos-2006}, where only a single sound source may be active at any given time, as well as in multi-source contexts \cite{han-predominant-2017,hung-multitask-2019,hung-timbre-2018, gururani-attention-2019,gururani-iad-2018}, where multiple sound sources may be active at the same time. We consider the single-source case, as the vast majority of audio in a studio music production workflow is single-source. Hierarchical structures have been shown to be effective for many machine learning tasks, such as text classification \cite{stein-hierarchicaltext-2019} and image classification \cite{ankit-image_hierarchical-2020, sun-hierarchicalimage-2019}. In fact, Bertinetto \textit{et al.} \cite{bertinetto-mistakes-2020} propose a hierarchical image classification approach that uses an exponentially weighted hierarchical loss function similar to the one proposed here, although they do not focus on a few-shot setting, as we do, and they favor learning broader classes, whereas we are also interested in finer classes. Hierarchical structure was explored for musical instrument recognition using fixed signal-processing feature extraction techniques~\cite{essid-hierarchicalsolos-2006, essid-2005-instrument, kitahara-hierarchicalnonregistered-2004}.
Here, we use deep learning methods to flexibly learn a feature space that mirrors musical instrument hierarchies. Recent work has studied how hierarchical structures can be incorporated into neural network models for different tasks. In the automatic speech recognition (ASR) domain, CTC-based hierarchical ASR models \cite{fernandez-hierarchicalsequencernn-2007, sanabria-hierarchicalctc-2018, krishna-hierarchicalspeech-2018} employ hierarchical multitask learning techniques, particularly by using intermediate representations output by the model to perform intermediate predictions in a coarse-to-fine scheme. Manilow \textit{et al.} \cite{manilow-hierarchical-2020} have shown that hierarchical priors can have significant benefits for performing source separation of musical mixtures. None of these systems, however, were designed for few-shot learning. Previous deep learning systems have been proposed for multilevel audio classification~\cite{xu2016hierarchical,jati-hierarchical_loss-2019,cramer-taxonet-2020}. However, none of these systems work in a few-shot setting, and they require either specialized network architectures or complex data pipelines to learn a hierarchy. Our approach is a simple extension to incorporate hierarchy into an established few-shot learning paradigm. Recent work in audio tagging and sound event detection tasks has explored few-shot learning in the audio domain~\cite{kim-ised-2017,cheng-fewshotsed-2019,shi-fewshotacoustic-2020,wang-fewshotdrum-2020,wang-fewshotsed-2020}, though none of this work assumed any hierarchical structure. Here, we propose a method for hierarchical representation learning in a few-shot setting, leveraging the increased flexibility of both hierarchy and few-shot methods for musical instrument recognition. \vspace{-0.5cm} \section{Background} \subsection{Few-shot Learning} \label{sec:few_shot} In a few-shot classification setting, we consider a target class $k \in \mathcal{K}$ from a set of target classes, $\mathcal{K}$, of size $|\mathcal{K}|$. Let $x_s$ be a single support example drawn from a set of examples $\mathcal{S}$, called the support set. Assume $N$ labeled support examples (\textit{i.e.}, shots) per class $k$, totalling $N \times |\mathcal{K}|$ labeled examples. We define $\mathcal{S}_k$ as the subset of $\mathcal{S}$ containing the examples of class $k$. We are also provided a query set $\mathcal{Q}$ of $M$ unlabeled examples. The goal of the task is to label each query example $x_q \in \mathcal{Q}$ with a target class $k \in \mathcal{K}$. A neural network model $f_\theta$ projects both the support and query sets into a discriminative embedding space. The query is assigned to the class of the support set it is closest to, according to a distance metric $d$. \subsection{Prototypical Networks}\label{sec:protonets} Prototypical networks~\cite{snell-prototypical-2017} compute an embedding vector for each instance in $\mathcal{S}_k$. The prototype, $c_k$, for class $k$ is the mean vector of all the support embeddings belonging to class $k$: \begin{equation} \label{eq:proto} c_k = \frac{1}{|S_k|} \sum_{x_s \in \mathcal{S}_k} f_\theta(x_s). \end{equation} Using a distance function $d$, we can produce a probability distribution over the set of classes $\mathcal{K}$ for a given query $x_q$ by applying a softmax over the negated distances from the query to each class prototype: \begin{equation} \label{eq:proto-softmax} p(\hat{y}_{q}=k|x_q) = \ \frac{\exp{(-d(f_\theta(x_q), c_{k}))}} {\sum_{c'_{k}} \exp{(-d(f_\theta(x_q), c'_{k}))}}.
\end{equation} We use the Euclidean distance as $d$ in this work. \section{Method} Musicologists have long categorized musical instruments into hierarchical taxonomies, such as the Hornbostel-Sachs system \cite{hornbostel-classification-1961}, which classifies musical instruments into a hierarchy corresponding to their sound-producing mechanisms. We can improve upon existing few-shot models by leveraging the hierarchical structure intrinsic to musical instrument taxonomies. To do this, we extend prototypical networks by training on a multitask scenario composed of multiple classification tasks, one for each level of a class tree, where the prototype for a parent node in the class tree is defined as the mean of the prototypes of the parent node's children. We impose hierarchical structure on our few-shot task by constructing a tree, $T$, with height $H$, starting from a set of leaf nodes. We define the leaf nodes as the same set of classes, $\mathcal{K}$, that we defined for our standard few-shot setup in Sec.~\ref{sec:few_shot}. We then define the parents of the leaf nodes by aggregating classes, $k \in \mathcal{K}$. For musical instrument recognition, we aggregate classes according to a predefined instrument hierarchy (\textit{e.g.}, Hornbostel-Sachs). We iteratively aggregate child classes up to the max height of the tree $H$. We index the tree as $T_{i, h}$, where $i$ indexes over the set of sibling classes at level $h$, for $h=0, \dots, H$, with level $0$ containing the most specific classes and level $H$ containing the broadest. In our notation, $H=0$ describes a tree with no hierarchy and is equivalent to the non-hierarchical prototypical network defined in Sec.~\ref{sec:protonets}. $H=1$ has two levels, and so on. \subsection{Hierarchical Prototypical Networks} \label{sec:hproto} We define our proposed hierarchical prototypical network by extending typical prototypical networks~\cite{snell-prototypical-2017} to a hierarchical multitask learning scenario, where we wish to label each query example, $x_q \in \mathcal{Q}$, at multiple levels of our class tree, $T$. Here, labeling at each level is a separate task. Like a normal prototypical network, we use a network $f_\theta$ to produce embeddings for every example in the support set. The mean of these embedded support examples creates an initial set of prototypes (Eq.~\ref{eq:proto}). We deviate from the typical setup by considering this initial set of prototypes as the lowest level of our tree, $T$, and aggregating these initial prototypes \textit{again} to make another set of prototypes representing the next level. The prototypes at this higher level are, thus, prototypes of prototypes, or \textit{metaprototypes}, and define a hierarchy according to the structure of our tree, $T$. We continue to iteratively aggregate prototypes in this fashion for all levels of our tree. The prototype for each parent class at level $h+1$ is notated $c_{T_{i, h+1}}$ and is the mean of the members of its support set $\mathcal{S}_{T_{i, h}}$. For levels $h>0$, each example $\hat{x}_s$ is itself a prototype: \begin{equation} \label{eq:hproto} c_{T_{i, h+1}} = \frac{1}{|\mathcal{S}_{T_{i, h}}|} \sum_{\hat{x}_s \in \mathcal{S}_{T_{i, h}}} f_\theta (\hat{x}_s). \end{equation} This process is shown in Figure \ref{fig:hier_prototypes}. Given a query example $x_{q}$, we use the network to create an embedding $f_\theta(x_q)$ and measure its distance to each class prototype or metaprototype $c_{T_{i, h}}$ at a given level $h$.
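To make this aggregation concrete, the following is a minimal PyTorch sketch of Eqs.~\ref{eq:proto} and \ref{eq:hproto}; it is an illustration under stated assumptions rather than a definitive implementation, and the \texttt{parent\_index} mapping and all function names are hypothetical choices for representing one level of the class tree. \begin{verbatim}
import torch

def level0_prototypes(support_embeddings):
    # support_embeddings: (|K|, N, dim) embedded support examples.
    # The mean over the N shots gives one prototype per leaf class.
    return support_embeddings.mean(dim=1)

def metaprototypes(prototypes, parent_index):
    # prototypes:   (num_children, dim) prototypes at level h.
    # parent_index: parent_index[i] is the parent id of child i
    #               at level h + 1 (hypothetical representation).
    parents = sorted(set(parent_index))
    return torch.stack([
        prototypes[[i for i, p in enumerate(parent_index)
                    if p == parent]].mean(dim=0)
        for parent in parents
    ])  # (num_parents, dim) metaprototypes

def neg_sq_distances(query_embedding, prototypes):
    # Negated squared Euclidean distances from one query embedding
    # to every (meta)prototype, ready for a softmax.
    return -((query_embedding.unsqueeze(0) - prototypes) ** 2).sum(dim=1)
\end{verbatim} Applying the aggregation repeatedly walks up the tree, producing one set of (meta)prototypes, and hence one set of distances, per level.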
Given these distances, we output $H$ probability distributions, one for each level in our class tree: \begin{equation} \label{eq:hproto-softmax} p(T_{i, h}|x_q) = \ \frac{\exp{(-d(f_\theta(x_q), c_{T_{i, h}}))}} {\sum_{c'_{T_{i, h}}} \exp{(-d(f_\theta(x_q), c'_{T_{i, h}}))}}. \end{equation} We note that Eqs.~\ref{eq:proto} and \ref{eq:proto-softmax} are special cases of the proposed Eqs.~\ref{eq:hproto} and \ref{eq:hproto-softmax}, evaluated at $h = 0$. Our generalization allows multi-task few-shot classification at multiple levels of a hierarchical class tree. Our proposed method does not require any specific network architecture. Instead, it provides a hierarchical label structure for support examples $x_s$ to be aggregated together, forming fine-to-coarse representations (\textit{i.e.}, $c_{T_{i, h}}$) that we can leverage and optimize with. This exposes the potential for a model to be trained with multiple concurrent hierarchies, a direction for future work. \subsection{Multi-Task Hierarchical Loss} \label{sec:method-loss} We now set up a learning objective, where we minimize the cross-entropy loss between the predicted distribution and the ground truth class for each level in the class tree. The intuition behind our approach is that we can use a hierarchically structured objective to encourage our model to produce an embedding space with discriminative properties at both coarse and fine granularities, allowing some of these coarse features to generalize beyond the training set of fine-grained leaf classes to their unseen siblings in the class tree. We use an exponentially decaying sum of loss terms for each level in the hierarchy~\cite{bertinetto-mistakes-2020}: \begin{equation} \mathcal{L}_{hierarchical} = \sum_{h=0}^{H} e^{-\alpha \cdot h} \mathcal{L}_{CE}^{(h)}, \end{equation} where $\mathcal{L}_{CE}^{(h)}$ denotes the cross-entropy loss for the classification task at height $h$, and $\alpha$ is a hyperparameter that determines the decay of each loss term w.r.t. height. Setting $\alpha > 0$ places more weight on finer-grained tasks, $\alpha < 0$ places more weight on coarser-grained tasks, and $\alpha = 0$ weighs all tasks equally. We note that $H = 0$ reduces to the non-hierarchical (baseline) definition of the problem, where we only optimize for the fine-grained task. \section{Experimental Design} \label{sec:exp_design} We evaluated our proposed hierarchical prototypical approach using a non-hierarchical prototypical method~\cite{wang-fewshotsed-2020} as a baseline. We evaluated all models on a few-shot musical instrument recognition task, measuring standard classification metrics (F1) as well as mistake severity. We conducted ablations for class tree height, choice of class hierarchy, and the proposed loss function. \subsection{Datasets} For all experiments, we trained and evaluated using isolated tracks from the MedleyDB~\cite{bittner-medleydb-2014} and MedleyDB 2.0 \cite{bittner-medleydb2-2016} datasets. MedleyDB contains multi-track recordings of musical instruments and vocals. We excluded recordings that do not have fine-grained instrument labels (\textit{e.g.}, ``brass'' was excluded because the audio could be of trumpets, trombones, etc.). Additionally, we considered sections of a single instrument to be the same class as the instrument itself (\textit{e.g.} ``violin section'' and ``violin'' both belong to the class ``violin''). Altogether, the dataset consists of 63 different instruments, with 790 tracks in total.
For training and evaluation, we removed the silent regions of each audio track. We then split the remainder of the track into 1-second segments with a hop size of 0.5 seconds, where each 1-second segment is an input example to the model. All audio was downsampled to 16kHz. For each example, we compute a 128-bin log-Mel spectrogram with a 32ms window and an 8ms hop. After preprocessing, our training and evaluation datasets contained 539k and 56k 1-second examples, respectively. We performed silence removal using \code{pysox} \cite{bittner-pysox-2016}. \subsection{Network Architecture} The backbone network architecture used in all experiments was based on the prototypical network described in Wang \textit{et al.} \cite{wang-fewshotdrum-2020}. It uses a log-Mel spectrogram as input, and consists of four CNN blocks, where each convolutional filter has a kernel size of $3 \times 3$, followed by a batch normalization layer, a ReLU activation, and a $2 \times 2$ maxpooling layer. After the last convolutional block, we applied maxpooling over the time dimension to obtain a 1024-dimensional embedding. Finally, we added a linear projection layer that reduces the 1024-dimensional embedding to 128 dimensions. \subsection{Hornbostel-Sachs Class Tree}\label{sec:classtree} We used a musical instrument hierarchy inspired by the Hornbostel-Sachs~\cite{hornbostel-classification-1961} taxonomy\footnote{See: https://en.wikipedia.org/wiki/Hornbostel-Sachs} (maximum height of 4), which is organized by the sound production mechanisms of each instrument. Since similar sound production mechanisms can lead to similar sounds, we believe this is a natural organization that our model can leverage to learn discriminative features at different levels of a class hierarchy. \subsection{Episodic Training and Evaluation} \label{sec:episodic-eval} We have a musical instrument hierarchy tree, where individual instrument classes are leaf nodes (e.g. violin, guitar). Nodes at higher levels ($h>0$) are instrument families (e.g. bowed strings, plucked strings). Our goal is to observe classification performance on previously-unseen leaf classes (e.g. zhongruan, erhu). Therefore, we created a data split of 70\% train, 30\% evaluation, with no overlap between train and evaluation classes at the leaf instrument level ($h=0$). We further added the constraint that the classes in both training and evaluation sets be distributed evenly among the instrument families ($h>0$). This avoids a problem where, for example, the train set consists only of percussion and the evaluation set consists only of chordophones. All experiments shared the same train/evaluation split. For each experiment, we trained every model in a few-shot learning scenario using episodic training. Each model was presented with a unique $|\mathcal{K}|$--way, $N$--shot learning task (an episode) with $M$ queries per leaf class at each training step. We constructed an episode by sampling a set of $|\mathcal{K}|$ instrument classes from the training data. For each of these $|\mathcal{K}|$ classes, we sampled $N + M$ audio examples. Here, for each class $k$, $N=|S_k|$ is the number of ``shots'' in the support set and $M$ is the size of the query set. We trained all models using the same random initialization for a maximum of 60,000 steps with early stopping after the evaluation loss stopped improving for 4500 steps, using the Adam optimizer and a learning rate of 0.03. During training, we set $|\mathcal{K}| = 12$, $N = 4$, and $M = 12$.
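As an illustration of this episodic sampling, consider the following sketch; the \texttt{examples\_by\_class} structure and the function name are hypothetical, and the defaults mirror the training values used here ($|\mathcal{K}| = 12$, $N = 4$, $M = 12$). \begin{verbatim}
import random

def make_episode(examples_by_class, n_way=12, n_shot=4, n_query=12):
    # examples_by_class: dict mapping each leaf class to its list of
    # preprocessed 1-second examples (hypothetical representation).
    classes = random.sample(list(examples_by_class), n_way)
    support, query = {}, {}
    for k in classes:
        picks = random.sample(examples_by_class[k], n_shot + n_query)
        support[k] = picks[:n_shot]  # the N "shots" for class k
        query[k] = picks[n_shot:]    # the M query examples for class k
    return support, query
\end{verbatim}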
We evaluated each trained model on episodes constructed from the test data. For each evaluation, we made 100 episodes, with $|\mathcal{K}| = 12$, $M = 120$. All hyperparameters were fixed except those we ablated, as described below. \subsection{Evaluation Metrics} \label{sec:evaluation-metrics} We used the F1-score as our primary classification metric, reporting the distribution of F1 scores computed for each episode, evaluated for predictions made at the finest level of the hierarchy. Similar to Bertinetto \textit{et al.} \cite{bertinetto-mistakes-2020}, we used the hierarchical distance of a mistake as a metric indicative of a model's mistake severity. Given a class tree, the hierarchical distance of a mistake is defined as the height of the lowest common ancestor (LCA) between the prediction node and the ground truth node when the input is misclassified (that is, when the model makes a mistake). We report the average hierarchical distance of a mistake over all evaluation episodes. For all hierarchical models, we measured mistake severity with respect to each model's own hierarchy. For the non-hierarchical model, we evaluated with respect to our proposed 4-level version of the Hornbostel-Sachs hierarchy, as we believe that its organization is meaningful. \section{Experiments} We now describe specific experiments to measure the effects of different design choices. We trained and evaluated all models using the procedure described in Section~\ref{sec:exp_design}. Our experiment code is available online.\footnote{https://github.com/hugofloresgarcia/music-trees} \subsection{Tree Height} To observe the effect of tree height on classification, we constructed shorter trees from the Hornbostel-Sachs class tree by removing every leaf node's parent until the desired max height of the tree was met. We trained and evaluated five models using our proposed class tree, shortened to different heights $H \in \{0, 1, 2, 3, 4\}$, where $H = 0$ is the baseline, non-hierarchical case inspired by Wang \textit{et al}. \cite{wang-fewshotsed-2020}. Each model was trained with $\alpha = 1$ and evaluated with $N = 8$ support examples per class at inference. \begin{figure} \centerline{ \includegraphics[width=1\columnwidth]{figs/height-vs-f1.png} \vspace{-12pt}} \caption{F1 scores for models trained with class trees of varying height $H$, evaluated over 100 episodes. Means are shown as green triangles. Note that $H = 0$ is our baseline model (Wang \textit{et al}.~\cite{wang-fewshotsed-2020}), as it is trained without a class tree.} \vspace{-12pt} \label{fig:height} \end{figure} Results are shown in Figure \ref{fig:height}. All variations of the proposed model achieved better classification performance than the baseline. The best F1 score was seen at $H = 1$, with a mean value of 0.8111 over all evaluation episodes. Compared to the baseline mean score of 0.7792, this is a 4\% improvement. A Wilcoxon signed-rank test showed that all of our proposed models achieve a statistically significant improvement when compared to the baseline, with $p < 10^{-7}$ for all hierarchies. These results show that incorporating our method into a prototypical network can lead to statistically significant improvements in classification performance under few-shot learning conditions. Surprisingly, a shallow tree with only the coarsest categories and the leaf nodes ($H = 1$) achieved the highest increase in performance. We believe this is due to the small number of classes encountered in a training episode (in our case, 12).
At a given level of the tree, at least 2 of the classes in the support set need to have a parent node in common for our method to be able to compute a meaningful metaprototype that can be leveraged by our loss. As a class tree gets deeper, the number of nodes at a given level can grow exponentially, meaning that our support set of 12 classes has a lower chance of finding meaningful groupings at deeper levels. This indicates that loss terms for levels closer to the leaf nodes are more likely to be identical to the non-hierarchical loss. Though the loss term for the coarsest level is still present in these deeper trees, it has a smaller impact on the gradient of the primary loss function, as loss terms are weighted to decay exponentially as the height increases. We believe training with a higher $|\mathcal{K}|$ can help leverage deeper hierarchies better. However, we leave this for future work. \subsection{Number of Support Examples} We evaluated our best proposed model ($H = 1$, $\alpha = 1$) as well as our baseline model by varying the number of support examples $N$ provided to the model, where $N \in \{1, 4, 8, 16\}$. Results are shown in Figure \ref{fig:nshot} (left). We notice that increases in performance are greater when more support examples are provided, with the smallest increase (+2.17\% in the mean relative to the baseline) occurring when $N = 1$. Our model achieved a statistically significant improvement on all test cases ($p < 10^{-4}$ for all $N$). \begin{figure} \centerline{ \includegraphics[width=1\columnwidth]{figs/n_shot-vs-f1-hlca.png}} \caption{Model comparison between the baseline model and our best proposed model ($H = 1$), evaluated under conditions with a different number of shots (support examples) provided during inference.} \vspace{-12pt} \label{fig:nshot} \end{figure} As shown in Figure \ref{fig:nshot} (right), our model achieved a lower hierarchical distance of a mistake, on average. A Wilcoxon signed-rank test indicates that all improvements are statistically significant ($p < .0005$). This means that, when making incorrect predictions, our method was more likely to make predictions that are closer to the ground truth in terms of the class hierarchy (\textit{i.e.}, lower mistake severity). We believe it is fair to assume that mistake severity from a sound production perspective (as in our class hierarchy) is related to mistake severity in predictions made by humans. That is, a human is more likely to confuse a viola for a violin than to confuse a viola for a drum. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figs/improvements.png} \vspace{-14pt} \caption{Difference in F1-score between our best proposed model ($H=1$, $\alpha=1$) and the baseline (Wang \textit{et al.} \cite{wang-fewshotsed-2020}) on all instruments in the test set. Both models were evaluated with $N = 8$.} \vspace{-12pt} \label{fig:all_insts} \end{figure*} \vspace{-6pt} \subsection{Arbitrary Class Trees} To understand how the choice of hierarchy affects the results of our model, we evaluated the same prototypical network architecture trained using the Hornbostel-Sachs hierarchy and also 10 randomly generated class trees. We generated each tree by performing random pairwise swaps between leaf nodes in our original class tree, doing so 1000 times for each node. For this experiment, all trees were trained with $H = 3$, $\alpha=1$, and evaluated with $N = 16$.
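One plausible way to implement this shuffling is sketched below; representing the leaves as a flat list is an assumption on our part, and swapping two entries exchanges instruments between tree positions while keeping the tree topology fixed. \begin{verbatim}
import random

def randomize_leaves(leaves, swaps_per_node=1000):
    # leaves: flat list of the leaf labels of the class tree,
    # in tree order (hypothetical representation).
    leaves = list(leaves)
    for _ in range(swaps_per_node * len(leaves)):
        i, j = random.sample(range(len(leaves)), 2)
        leaves[i], leaves[j] = leaves[j], leaves[i]  # pairwise swap
    return leaves
\end{verbatim}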
\begin{figure} \centerline{ \includegraphics[width=1\columnwidth]{figs/random.png}} \vspace{-12pt} \caption{Comparison between the best and worst performing models trained on random hierarchies. The hierarchical distance of a mistake is calculated using the hierarchy the model was trained on. For the baseline (Wang \textit{et al.}~\cite{wang-fewshotsed-2020}), we calculated the hierarchical distance of a mistake using the Hornbostel-Sachs hierarchy.} \vspace{-12pt} \label{fig:random} \end{figure} Results for our evaluation of random class hierarchies are shown in Figure \ref{fig:random}. Our best performing random hierarchy in terms of classification performance (``random-best'') achieves an F1 score comparable to our proposed hierarchy ($p > 0.05$), though with a larger spread. Additionally, ``random-best'' obtains much worse mistake severity relative to the hierarchy it was trained on. This indicates that the model was not able to generalize the hierarchical structure it was trained on to out-of-distribution classes. On the other hand, our worst performing random hierarchy, ``random-worst'', caused a statistically significant deterioration in both classification performance and mistake severity compared to the baseline ($p < 0.005$). Even though the random-best model fares comparably to the Hornbostel-Sachs model, it is impossible to know \textit{a priori} whether any random tree will produce good results; therefore, for practical uses (\textit{i.e.}, within a DAW), we find Hornbostel-Sachs to be a suitable choice. \subsection{Hierarchical Loss Functions} \label{sec:loss-ablation} To measure the impact of our proposed multi-task hierarchical loss, we compared it to a reasonable baseline ``flat'' loss. For our baseline approach, we treated hierarchical classification as a single-task, multilabel classification problem, where the ground truth is a multi-hot vector, with $1$s for the leaf ground truth node and all of its ancestors in the tree, and $0$s otherwise. We then minimized the binary cross entropy between each individual predicted node and ground truth node. Note that this required us to use a sigmoid function instead of Eq.~\ref{eq:proto-softmax}, which uses a softmax function. Additionally, we performed a hyperparameter search to find the best value of the $\alpha$ parameter for our proposed loss function (Section \ref{sec:loss-ablation}) using the search space $\alpha \in \{-1, -0.5, 0, 0.5, 1\}$. For this experiment, all trees were trained with $H = 4$ and evaluated with $N = 16$. \begin{figure} \centerline{ \includegraphics[width=1\columnwidth]{figs/alpha.png}} \vspace{-12pt} \caption{Evaluating the loss function. We vary $\alpha$ in our proposed hierarchical loss from negative (emphasize loss on broader categories) to positive (emphasize loss on finer categories) and additionally compare to a ``flat'' binary cross entropy (BCE) baseline.} \vspace{-12pt} \label{fig:alpha} \end{figure} Results are shown in Figure \ref{fig:alpha}. We observe that only the models with $\alpha > 0$ cause an improvement over Wang \textit{et al}. \cite{wang-fewshotsed-2020}. Moreover, the flat loss causes a severe degradation in classification performance. This may be because training prototypical networks using a binary, one-vs-all formulation could yield a much less discriminative embedding space. Wang \textit{et al.} \cite{wang-fewshotsed-2020} found a similar result: training prototypical networks with a binary formulation did not yield performance improvements.
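For reference, the weighted objective of Sec.~\ref{sec:method-loss} that these ablations vary reduces to a few lines; the sketch below assumes the per-level logits have already been computed from the (meta)prototype distances, and the argument names are ours. \begin{verbatim}
import math
import torch.nn.functional as F

def hierarchical_loss(logits_per_level, targets_per_level, alpha=1.0):
    # logits_per_level[h]:  (batch, num classes at level h); h = 0
    #                       is the finest level of the tree.
    # targets_per_level[h]: (batch,) ground-truth indices at level h.
    return sum(
        math.exp(-alpha * h) * F.cross_entropy(logits, targets)
        for h, (logits, targets)
        in enumerate(zip(logits_per_level, targets_per_level))
    )
\end{verbatim}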
\vspace{-0.25cm} \subsection{Examining All Instrument Classes} In Figure~\ref{fig:all_insts}, we examine the classification performance of every instrument in our test set. We compare our best model ($H=1$, $\alpha=1$) to the baseline model from Wang \textit{et al.}~\cite{wang-fewshotsed-2020}, evaluated with $N = 8$. For clarity, we report the difference in F1-score between the models. Our model beats the baseline on 18 of the 24 classes in the test set. In particular, our model shows a substantial improvement ($+16.56\%$) in F1-score when classifying the \textit{zhongruan}, which may be rarely seen in a dataset composed of Western music. Figure~\ref{fig:all_insts} demonstrates that, overall, our hierarchical few-shot model is better at identifying a wider range of instrument classes than the baseline. This is important if we desire to make systems that are more robust to biases in the training data and, thus, can classify a more diverse set of instrument types. \vspace{-0.25cm} \section{Conclusion} We presented an approach for incorporating hierarchical structures in a few-shot learning model for the purpose of improving classification performance on classes outside of the training distribution. Our method builds on top of prototypical networks by computing prototypical representations at fine and coarse granularities, as defined by a class hierarchy. We showed that our proposed method yields statistically significant increases in classification performance and significant decreases in mistake severity when evaluated on a classification task composed of unseen musical instruments. Moreover, we found that the choice of hierarchical structure is not arbitrary, and using a hierarchy based on the sound production mechanisms of musical instruments yielded the best results. We hope our work gives users from diverse cultural backgrounds the ability to classify diverse collections of musical instruments. Future directions include examining new types of hierarchies, learning multiple hierarchies simultaneously, and the unsupervised discovery of hierarchies from unlabeled data. \section{Acknowledgements} This work was funded, in part, by USA National Science Foundation Award 1901456. \newpage
\section{Attentive Temporal Pooling} For the purpose of video face recognition, we propose a simple yet effective technique which we refer to as \textit{attentive temporal pooling}, inspired by \cite{yang2017neural}. The intuition behind this model is to exploit the hidden pose information in a trainable fashion and extract useful cues from the noisy sequences of video frames. The proposed approach consists of three main components: i) an attention layer, ii) a pooling layer, and iii) a fully connected layer. The attention module learns to promote the informative parts of given image sequences. Through the pooling layer, the overall sequence information is aggregated and fed into a fully connected layer. This simple framework operates over CNN features. More formally, the input $X$ is a $D \times F$ matrix of $D$-dimensional CNN feature vectors coming from $F$ frames. An attention weight matrix $A$ of size $K \times D$ is initialized using the Xavier normal initialization method~\cite{glorot2010understanding}. $K$ is a hyperparameter that needs to be tuned (we set $K=8$ in our experiments). The attention weight matrix $S$ is calculated by \begin{equation} S = A \times X \end{equation} \noindent which results in a $K \times F$ matrix. This matrix is then fed to a softmax function that operates over the temporal dimension. The $k$-th row of the resulting $K \times F$ matrix can be considered as a weight distribution over the frames, for the pose captured by the $k$-th row of the matrix $A$. We use the estimated attention weights to temporally pool the per-frame feature vectors. More specifically, we extract the video feature vector by computing a weighted sum as follows: \begin{equation} O = X \times S^T \label{eq:Odef} \end{equation} where the resulting matrix $O$ is of size $D \times K$. The output is then aggregated with max-pooling and fed into the fully connected layer, which is used for classification with a cross-entropy loss. The model is implemented in PyTorch \cite{paszke2017pytorch}. The network parameters are optimized using SGD with a learning rate of 0.0001 and a momentum of 0.9. The batch size is set to 1. We note that this approach can be considered as a generalization of the aggregation scheme proposed in \cite{yang2017neural}, which is equivalent to Eq.~\ref{eq:Odef} for $K=1$.
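For concreteness, a minimal PyTorch sketch of the described module is given below; the class name and the unbatched input convention are our own choices, and it is meant as an illustration of the equations above rather than a definitive implementation. \begin{verbatim}
import torch
import torch.nn as nn

class AttentiveTemporalPooling(nn.Module):
    def __init__(self, feat_dim, num_classes, k=8):
        super().__init__()
        # Attention matrix A of size K x D, Xavier-normal initialized.
        self.A = nn.Parameter(torch.empty(k, feat_dim))
        nn.init.xavier_normal_(self.A)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        # x: (D, F) matrix of D-dim CNN features for F frames.
        s = torch.softmax(self.A @ x, dim=1)  # (K, F): weights over frames
        o = x @ s.t()                         # (D, K): the matrix O = X S^T
        pooled = o.max(dim=1).values          # max-pool over the K rows
        return self.fc(pooled)                # logits for cross-entropy loss
\end{verbatim} In practice, the module is applied on top of per-frame CNN features and trained with SGD and a cross-entropy loss, as described above.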
\vspace{-4mm} \section{Wildest Faces Dataset} \label{dataset_info} \begin{figure}% \centering \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/wildest/AlpachinoWildest1_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/wildest/AlpachinoWildest2_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/wildest/AlpachinoWildest3_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/wildest/AlpachinoWildest4_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/wildest/AlpachinoWildest5_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/wildest/AlpachinoWildest6_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/wildest/AlpachinoWildest7_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/wildest/AlpachinoWildest8_k8.png}} \\ \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/facescrub/AlpachinoFAceScrub1_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/facescrub/AlpachinoFAceScrub2_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/facescrub/AlpachinoFAceScrub3_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/facescrub/AlpachinoFAceScrub4_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/facescrub/AlpachinoFAceScrub5_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/facescrub/AlpachinoFAceScrub6_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/facescrub/AlpachinoFAceScrub7_k8.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/facescrub/AlpachinoFAceScrub8_k8.png}} \\ \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/YTF/AlpachinoYTF1_k5.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/YTF/AlpachinoYTF2_k5.png}}% \subfigure{% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/kmeans/AlPachino/YTF/AlpachinoYTF5_k5.png}}% \caption{K-Means cluster centers for Al Pacino images in \textit{Wildest Faces}, FaceScrub \cite{ng2014data} and YouTube Faces \cite{wolf2011face} are shown in the first, second and third rows, respectively. We use $k=8$ for \textit{Wildest Faces} and FaceScrub, and $k=3$ for YouTube Faces, as higher $k$ values produce repetitive images. The average faces from \textit{Wildest Faces} are the least recognizable, indicating a large degree of variance in adverse effects. Images are histogram equalized for convenience.} \label{k_means_images} \vspace{-4mm} \end{figure} Human faces are in their wildest form during violence or fights, with uncontrolled expressions. Moreover, the fast movements during violent scenes naturally result in challenges regarding pose, occlusion, and blur. Based on these observations, we constructed the \textit{Wildest Faces} dataset from YouTube videos by focusing on violent scenes of celebrities in movies. \vspace{-2mm} \subsection{Data Collection and Annotation} We first identified the celebrities who are known to act in movies with violence.
We then picked their videos from YouTube in a variety of scene settings: car chases, indoor fist fights, gun fights, heated arguments and science fiction/fantasy battles. This abundance of scene settings provides an inherent variety of possible occluding objects, poses, background clutter and blur (see Fig.\ref{fig:exampleImages}). The majority of the frames of each video contain the celebrity's face, though in some frames the celebrity may not be present. Videos, with an average of 25 FPS, are then divided into shots with a maximum duration of 10 seconds. In total, we chose 64 celebrities and collected 2,186 shots from 410 videos, which results in 67,889 frames with 109,771 manually annotated bounding boxes. In order to test the generalization ability thoroughly, we split the dataset based on videos and do not include any shots from a training video in the other splits. The splits for training, validation and test sets yield the ratios 56\%-23\%-21\% video-wise and 61\%-20\%-19\% frame-wise. Video-based splitting also captures age differences; e.g. the training set includes Sean Connery in his early acting days, whereas the test set solely includes him in the late stages of his career. Ground truth locations of faces have been annotated by 12 annotators using VOTT\footnote{\url{https://github.com/Microsoft/VoTT}}. We also label our celebrities with the \textit{target} tag for recognition and label the rest of the faces as \textit{non-target}. We do not omit any adverse effect; we label extremely tiny, occluded, frontal/profile and blurred faces. When creating the recognition set, we simply crop the target label from each frame in the dataset and expand the area by a factor of 0.15 to make sure we do not miss any facial parts. An example illustration can be seen in Fig.\ref{fig:exampleImages}. As we do not have celebrity faces in every collected frame, our recognition set consists of 64,242 frames in total. \vspace{-2mm} \subsection{Statistics} \begin{figure}% \centering \subfigure[Detection scales.]{% \label{scale_det}% \includegraphics[width=0.24\linewidth]{images/detscale2-eps-converted-to.pdf}}% \subfigure[Detection blur.]{% \label{blur_det}% \includegraphics[width=0.24\linewidth]{images/det_blur2-eps-converted-to.pdf}}% \subfigure[Recognition blur.]{% \label{blur_rec}% \includegraphics[width=0.24\linewidth]{images/rec_blur2-eps-converted-to.pdf}}% \subfigure[Recognition age.]{% \label{age_variance}% \includegraphics[width=0.23\linewidth]{images/age_var_sorted-eps-converted-to.pdf}}% \caption{\textit{Wildest Faces} statistics. In (a), blue and red correspond to width and height, respectively. The detection set offers severely blurred data, whereas the recognition set has a more equal distribution. For detection scales, we see an equal emphasis on small and large faces.} \vspace{-6mm} \end{figure} The Wildest Faces dataset has a diverse distribution of faces. In Figure~\ref{k_means_images}, k-means cluster centers of Al Pacino's images (dataset-wide) are shown for FaceScrub \cite{ng2014data}, YouTube Faces \cite{wolf2011face} and Wildest Faces. It is clear that our dataset has a wide spectrum of adverse effects, as its cluster centers are far from being recognizable as Al Pacino. Wildest Faces offers a good scale variance for detection, as well as a high amount of blur. The recognition set offers a good distribution over several blur levels as well as a noticeable average age variance. Occluded shots make up roughly half of the available data, which offers a challenge as well.
Moreover, pose variance is sufficiently large in each shot, which would promote pose-invariance in video face recognition. In the following, we present the analysis of these effects. \noindent\textbf{Scale.} We classify our faces into categories of \textit{small}, \textit{medium} and \textit{large} with respect to the heights of faces: below 100 pixels as \textit{small}, between 100 and 300 pixels as \textit{medium}, and larger than 300 pixels as \textit{large}. Scale statistics for the detection set are shown in Figure \ref{scale_det}. For the recognition set, the balance shifts slightly from \textit{small} to \textit{medium}. \noindent\textbf{Blur.} We follow a multi-stage procedure to quantify the blur that is present in images. Inspired by \cite{pech2000diatom}, we perform contrast normalization and then convert our images to grayscale. Grayscale images are then convolved with a 3x3 Laplacian kernel, and the variance of the result is used to produce a blurriness value, which is then used to empirically find a threshold to divide the images into blur categories (a sketch of this measure is given at the end of this subsection). We then manually edit any wrong blur labels. Blur statistics are shown in Figure~\ref{blur_det} and Figure~\ref{blur_rec}. \noindent\textbf{Age.} For each individual, we also measure the age variance, which is the difference between the dates of their earliest and latest movies in our dataset. We see drastic age variations in certain individuals, up to 40 years (see Figure~\ref{age_variance}). On average, an age variation of 13 years per individual is observed. \noindent\textbf{Occlusion.} We provide occlusion information at the shot level for the recognition set; we label shots as \textit{no occlusion}, \textit{mixed} or \textit{significant}. Shots labelled \textit{mixed} have occlusion in several frames of the shot, but not more than half of the face is occluded. \textit{Significant} labels indicate that there are several frames with heavy occlusion, where at least half of the face is occluded. We randomly select 250 shots in our dataset and analyze them; this leads to a ratio of 20\%, 28\% and 52\% for the \textit{significant}, \textit{mixed} and \textit{no occlusion} tags, respectively. \noindent\textbf{Pose.} For selected individuals, we present four average faces (each taken from a shot). We make sure that there is no occlusion or high blur in these shots, so that only pose variation is the concern. It can be clearly seen from Figure~\ref{pose_info} that high pose variance leads to unidentifiable average faces, supporting the complexity of the Wildest Faces dataset.
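A sketch of the blur measure described above is given below; the use of OpenCV and of histogram equalization as the contrast-normalization step are assumptions on our part, as the exact normalization is not specified. \begin{verbatim}
import cv2

def blurriness(image_bgr):
    # Lower variance of the Laplacian response means fewer sharp
    # edges, i.e., a blurrier face crop.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)          # contrast normalization (assumed)
    lap = cv2.Laplacian(gray, cv2.CV_64F)  # default 3x3 aperture
    return lap.var()
\end{verbatim}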
\vspace{-4mm} \begin{figure}% \centering \subfigure[Al Pacino]{% \label{al_lblp}% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/pose/AllPachinoLowposeLowBlur.png}% \label{al_lbhp}% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/pose/AlpachinohighposeLowBlur.png}}% \hspace{2mm}% \subfigure[Dwayne Johnson]{% \label{dj_lblp}% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/pose/DwayneJohnsonLowBlurLowPose.png}% \label{dj_lbhp}% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/pose/DwayneJohnsonHighPoseHigBlur.png}}% \hspace{2mm}% \subfigure[Bruce Willis]{% \label{bw_lblp}% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/pose/BruceWillisLowPose.png}% \label{bw_lbhp}% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/pose/BruceWillisHighPose.png}}% \hspace{2mm}% \subfigure[Chuck Norris]{% \label{cn_lblp}% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/pose/ChuckNorrisLowpose.png}% \label{cn_lbhp}% \includegraphics[width=0.1\linewidth]{images/hist_eq_Fig2_3/pose/ChuckNorrisHighPose.png}}% \caption{Pairs of average faces taken from sample shots (with low blur and minimal occlusion) of example subjects in Wildest Faces. In each pair, the first image represents a shot average with minimal pose variation and the second image is a shot average with severe pose variation. The comparison between these images indicates a large pose diversity in our dataset. Images are histogram equalized for convenience.} \vspace{-4mm} \label{pose_info} \end{figure} \section{Conclusion} \label{conclude} Inspired by the lack of a publicly available face detection and recognition dataset that concentrates primarily on violent scenes, we introduce the \textit{Wildest Faces} dataset, which encompasses a large spectrum of adverse effects, such as severe blur, low resolution and a significant diversity in pose and occlusion. The dataset includes annotations for face detection as well as recognition with various tags, such as blur severity, scale and occlusion. To the best of our knowledge, this is the first face dataset that focuses on violent scenes, which inherently contain extreme facial expressions along with other challenging aspects. We also provide benchmarks using prominent detection and recognition techniques and introduce an attention-based temporal pooling technique to aggregate video frames in a simple and effective way. We observe that existing approaches fall short of tackling the challenges of Wildest Faces. We hope Wildest Faces will boost face recognition and detection research towards edge cases. We will provide continuous improvements and additions to the Wildest Faces dataset in the future.\footnote{The dataset with annotations will be made available upon publication.} \section{Experimental Results} \label{results} \subsection{Face Detection} We first evaluate the performance of face detection over the Wildest Faces dataset. For this purpose, we pick three of the most recent techniques: Single-Shot Scale-Invariant Face Detector \cite{zhang2017s}, Tiny Faces \cite{hu2017finding} and Single Stage Headless Detector \cite{najibi2017ssh}.\footnote{We use the codes released by the papers' authors.} We also evaluate a light-weight, SSD \cite{liu2016ssd}-based face detector available in OpenCV\footnote{\url{https://github.com/opencv/opencv/tree/master/samples/dnn/face_detector}}.
We use all these techniques in an ``as-is'' configuration; we apply the available pre-trained models (trained on WIDER Face \cite{yang2016wider}) to all our data (train, test and validation splits combined). Since our main focus in this work is on video face recognition, we do not perform any training on \textit{Wildest Faces}; hence, we compute the performance of the detectors over the entire dataset of 67,889 images. \vspace{2mm} \noindent\textbf{Overall.} Detection results are shown in Table \ref{det_ap} and Figure \ref{overall_pr}. It can be said that our dataset offers a new challenge for all the detectors. Performance-wise, we see Tiny Faces \cite{hu2017finding} and SSH \cite{najibi2017ssh} performing on par with each other. SFD \cite{zhang2017s} is the third best, whereas the light-weight SSD \cite{liu2016ssd} performs the worst. \vspace{2mm} \noindent\textbf{Blur.} Our blur analysis results are shown in Figures \ref{severe_blur} to \ref{low_blur}. We observe that blur severely degrades each detector; the higher the blur, the worse the detection performance. SSH \cite{najibi2017ssh} seems to be the most robust detector against blur, whereas for low-blur cases Tiny Faces \cite{hu2017finding} performs better by a slight margin. \vspace{2mm} \noindent\textbf{Scale.} We test the performance of the detectors at different scales. Results are shown in Figures \ref{large_height} to \ref{small_height}. The same trend as in the overall performance is visible here as well; Tiny Faces \cite{hu2017finding} takes the lead on images with large faces, with SSH \cite{najibi2017ssh} closely trailing behind, whereas the others fall visibly behind. As faces become smaller, SSH \cite{najibi2017ssh} catches up and takes the lead from Tiny Faces \cite{hu2017finding}. All the detectors have degraded performance when faces become smaller. We perform the same assessment for width and observe a similar trend. These findings indicate that there is still considerable room for improvement for face detection in challenging cases like extreme blur or small size. \vspace{-4mm} \begin{table}[] \normalsize \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|l|l|l|l|l|l|l|l} \hline \textbf{Method} & \textbf{Large} & \textbf{Medium} & \textbf{Small} & \textbf{Severe Blur} & \textbf{High Blur} & \textbf{Medium Blur} & \textbf{Low Blur} & \textbf{Overall} \\ \hline \hline \textbf{SSD \cite{liu2016ssd}-based detector} & 73.2\% & 47.1\% & 19.9\% & 36\% & 56.7\% & 68\% & 70.2\% & 51.6\% \\ \hline \textbf{SFD \cite{zhang2017s}} & 84.6\% & 75.9\% & 69.5\% & 74.3\% & 78.4\% & 84\% & 87\% & 77.3\% \\ \hline \textbf{Tiny Faces \cite{hu2017finding}} & \textbf{95.6\%} & 89.3\% & 80.7\% & 85.2\% & 89.6\% & 92.5\% & \textbf{94.6\%} & 90.5\% \\ \hline \textbf{SSH \cite{najibi2017ssh}} & 94.1\% & \textbf{90.7\%} & \textbf{82.4\%} & \textbf{88.4\%} & \textbf{92\%} & \textbf{93.7\%} & 94\% & \textbf{90.7\%} \\ \hline \end{tabular}% } \vspace{-4mm} \caption{Detection AP values.
\textit{Small}, \textit{Medium} and \textit{Large} refer to height scale categories.} \label{det_ap} \vspace{-6mm} \end{table} \begin{figure}% \centering \subfigure[Severe blur.]{% \label{severe_blur}% \includegraphics[width=0.24\linewidth]{images/severe_blur_pr-eps-converted-to.pdf}}% \subfigure[High blur.]{% \label{high_blur}% \includegraphics[width=0.24\linewidth]{images/high_blur_pr-eps-converted-to.pdf}}% \subfigure[Medium blur.]{% \label{medium_blur}% \includegraphics[width=0.24\linewidth]{images/medium_blur_pr-eps-converted-to.pdf}}% \subfigure[Low blur.]{% \label{low_blur}% \includegraphics[width=0.24\linewidth]{images/low_blur_pr-eps-converted-to.pdf}} \\ \vspace{-4mm} % \subfigure[Large height.]{% \label{large_height}% \includegraphics[width=0.24\linewidth]{images/large_height_pr-eps-converted-to.pdf}}% \subfigure[Medium height.]{% \label{medium_height}% \includegraphics[width=0.24\linewidth]{images/medium_height_pr-eps-converted-to.pdf}}% \subfigure[Small height.]{% \label{small_height}% \includegraphics[width=0.24\linewidth]{images/small_height_pr-eps-converted-to.pdf}}% \subfigure[Overall.]{% \label{overall_pr}% \includegraphics[width=0.24\linewidth]{images/overall_pr-eps-converted-to.pdf}}% \caption{Smaller scales and higher blur levels severely degrade the results of all face detectors.} \label{detection_results} \vspace{-6mm} \end{figure} \subsection{Face Recognition} \subsubsection{Image-based Face Recognition} For image-based face recognition, we use the train, validation and test splits of the Wildest Faces dataset, which consist of 39,459, 12,088 and 12,695 face images, respectively. We use two prominent face recognition approaches: VGG Face \cite{parkhi2015deep} and Center Loss \cite{wen2016discriminative} (trained on LFW \cite{huang2007labeled}). We first train these models from scratch on \textit{Wildest Faces}, but we observe that they achieve significantly better results when initialized from pretrained models (trained on considerably larger datasets). We resize face regions to $96\times96$ and perform the relevant preprocessing steps in line with each technique's implementation using Caffe \cite{jia2014caffe}. We make minimal changes to the original hyperparameters during training to improve convergence. The image-based recognition results are shown in Table \ref{rec_results1}. Besides the comparison of face recognition techniques, we also test the effect of using alignment. For this purpose, we utilize the MTCNN alignment technique \cite{zhang2016joint}. We bypass the detector of MTCNN and use the ground-truth locations of faces during training. We add fully connected layers to the end of both networks of \cite{parkhi2015deep} and \cite{wen2016discriminative} to cast them as classifiers, since the original models were designed for identification. The experimental results show that when no alignment is used, the Center Loss \cite{wen2016discriminative} method yields superior results. In contrast, the VGGFace \cite{parkhi2015deep} method benefits significantly from alignment and performs on par once alignment is used. \vspace{-4mm} \subsubsection{Video Face Recognition} Our dataset consists of video clips of celebrities, so it is well-suited as a benchmark for video face recognition. The train, validation and test splits consist of 1,347, 387 and 452 shots, respectively. The simplest baseline is majority voting using the techniques presented for standard face recognition. Results are shown in Table \ref{rec_results2}. We measure the recognition performance both at the frame-level and at the shot-level.
Frame-level performance is evaluated as the accuracy over the 12,695 test images, and shot-level performance as the accuracy over the 452 test shots. For video face recognition, we also train several LSTM \cite{hochreiter1997long} architectures. Using the finetuned VGG features that are aligned with MTCNN, we implement a single-layer LSTM, a 2-layer LSTM (LSTM2) and a bi-directional LSTM (BiLSTM), and compare their performances with the attentive temporal pooling method described above. The RMSprop optimizer with a learning rate of 0.0001 is used in all LSTM configurations for a fair comparison. Hidden sizes are fixed to 4096. Results are shown in Table \ref{rec_results2}. As expected, majority voting of standard image-based techniques fails to yield competitive results at the shot-level, whereas the frame-level accuracy of VGGFace \cite{parkhi2015deep} is competitive with video-based recognition techniques. Among the LSTM variants, the single-layer LSTM performs best at the shot-level, whereas the two-layer LSTM performs better at the frame-level. Overall, we observe that the proposed attentive temporal pooling model performs the best on average. Note that the accuracies all hover around the 50\% mark, indicating that violent face recognition research can benefit from more tailored models. \begin{table}[] \centering \resizebox{0.45\textwidth}{!}{% \begin{tabular}{l|l|l} \hline \textbf{Method} & \textbf{Alignment} & \textbf{Accuracy} \\ \hline \hline \textbf{VGGFace \cite{parkhi2015deep}} & none & 37.8\% \\ \textbf{Center Loss \cite{wen2016discriminative}} & none & 39.8\% \\ \hline \textbf{VGGFace \cite{parkhi2015deep}} & \cite{zhang2016joint} & 39.7\% \\ \textbf{Center Loss \cite{wen2016discriminative}} & \cite{zhang2016joint} & \textbf{39.9\%} \\ \hline \end{tabular}% } \caption{Image-based face recognition results.} \label{rec_results1} \vspace{-5mm} \end{table} \begin{table}[h] \vspace{-2mm} \centering \resizebox{0.5\textwidth}{!}{% \begin{tabular}{l|l|l} \hline \textbf{Method} & \textbf{Frame-Level} & \textbf{Shot-Level} \\ \hline\hline \textbf{VGGFace \cite{parkhi2015deep}} & 51.98\% & 49.5\% \\ \textbf{CenterLoss \cite{wen2016discriminative}} & 49.6\% & 46.6\% \\ \textbf{LSTM} & 52.1\% & 51.9\% \\ \textbf{LSTM2} & \textbf{52.3\%} & 49.3\% \\ \textbf{BiLSTM} & 49.6\% & 50.6\% \\ \textbf{AttTempPool} & 52.2\% & \textbf{52.6\%} \\ \hline \end{tabular} } \caption{Accuracy values for video face recognition. In \textit{shot-level} evaluation, the accuracy is calculated over shots, whereas in \textit{frame-level} evaluation, accuracy is calculated over frames by assigning all the frames in a shot the label of the sequence.} \label{rec_results2} \vspace{-5mm} \end{table} \vspace{-5mm} \section{Introduction} \label{sec:intro} Detection and recognition of faces have a wide range of application areas, such as surveillance, consumer products and security systems. With the emergence of deep learning, impressive accuracies have been reported in face detection~\cite{li2015convolutional,farfade2015multi,najibi2017ssh, hu2017finding} compared to earlier results obtained by hand-crafted feature pipelines such as~\cite{viola2004robust, yang2014aggregate,li2013learning, mathias2014face}.
Example approaches include cascade systems for multi-scale detection~\cite{li2015convolutional,yang2016wider,zhang2016joint}, facial-part scoring~\cite{yang2015facial,samangouei2018face}, proposal-stage anchor design~\cite{zhang2017s,zhu2018seeing}, ensemble systems~\cite{hu2017finding,yang2017face}, optimized single-stage detectors~\cite{najibi2017ssh,tang2018pyramidbox} and integrated attention mechanisms~\cite{wang2017face} (see the survey \cite{zafeiriou2015survey}). Likewise, there has been a plethora of studies on face recognition. Compared to the pioneering works of \cite{turk1991face, ahonen2006face, xie2010fusing,edwards1998face, wright2009robust, wiskott1997face}, face recognition models that benefit from deep learning-based techniques and concentrate on better formulations of distance metric optimization have raised the bar~\cite{schroff2015facenet,taigman2014deepface,parkhi2015deep, wen2016discriminative,sun2013hybrid,sun2014deep,sun2015deepid3}. In addition to face recognition in still images, video-based face recognition studies have also emerged (see \cite{ding2016comprehensive} for a recent survey). Ranging from local feature-based methods~\cite{li2013probabilistic,parkhi2014compact,li2014eigen} to manifolds \cite{huang2015log} and metric learning~\cite{cheng2018duplex,huang2017cross,goswami2017face}, recent studies have focused on finding informative frames in image sets \cite{goswami2014mdlface} and on efficient and fast ways of feature aggregation~\cite{chowdhury2016one,yang2017neural,rao2017learning, rao2017attention}. Nevertheless, real-life conditions still challenge state-of-the-art algorithms due to variations in scale, background, pose, expression, lighting, occlusion, age, blur and image resolution. As shown in \cite{yang2016wider}, several leading algorithms produce severely degraded results in rather unconstrained conditions. Recently, there have been many attempts at building large-scale datasets with a variety of real-life conditions. FDDB \cite{jain2010fddb}, AFW \cite{zhu2012face}, PASCAL Faces \cite{yan2014face}, Labeled Faces in the Wild (LFW) \cite{huang2007labeled}, Celeb Faces \cite{sun2013hybrid}, Youtube Faces (YTF) \cite{wolf2011face}, IJB-A \cite{klare2015pushing}, MS-Celeb-1M \cite{guo2016ms}, VGG-Face \cite{parkhi2015deep}, VGG2-Face \cite{cao2017vggface2}, MegaFace \cite{kemelmacher2016megaface} and WIDER Face \cite{yang2016wider} datasets have been made publicly available for research purposes. Datasets with extreme scales, such as \cite{schroff2015facenet} and \cite{taigman2014deepface}, have also been used but have not been disclosed to the public. However, these datasets can still be considered ``controlled'' in several regards, such as resolution, the presence of motion blur and overall image quality. Moreover, these datasets mostly omit noisy samples and are not representative of extreme expressions, such as the anger and fear seen in violent scenes.
\begin{figure}% \centering \subfigure{% \includegraphics[width=0.7\linewidth]{images/anno_procedure-eps-converted-to.pdf}}% \vspace{-4mm} \\ \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/TomHardy_scene_0002_shot_0002_frame_0006.png}}% \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/UmaThurman_scene_0001_shot_0002_frame_0028.png}}% \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/WesleySnipesscene_0004_shot_0001_frame_0011.png}}% \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/WillSmith_scene_0005_shot_0005_frame_0016.png}}% \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/scene_0004_shot_0001_frame_0001JeanClaudeVanDamme.jpg}}% \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/scene_0004_shot_0001_frame_0008JackieChan.jpg}} \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/scene_0004_shot_0001_frame_0009AnnaHathaway.jpg}}% \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/scene_0005_shot_0001_frame_0006JimCarrey.jpg}}% \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/scene_0005_shot_0001_frame_0009JohnGoodman.jpg}}% \subfigure{% \includegraphics[width=0.09\linewidth]{images/variance_images/scene_0006_shot_0002_frame_0009DonWilson.jpg}}% \label{fig:exampleImages}% \vspace{-3mm} \caption{Our dataset creation pipeline is shown in the first row. Faces with green bounding boxes indicate the celebrities that are used for recognition. The second row shows sample recognition images from the \textit{Wildest Faces} dataset, which include a variety of real-life conditions. Note the amount of pose variation, blur and low image quality. Moreover, \textit{Wildest Faces} offers a considerable age variance, extreme facial expressions as well as severe occlusion.} \vspace{-6mm} \end{figure} In this paper, we present a new benchmark dataset, namely \textit{Wildest Faces}, where we put the emphasis on violent scenes with virtually unconstrained scenarios. In addition to previously studied adverse conditions, the \textit{Wildest Faces} dataset contains images from a large spectrum of image quality, resolution and motion blur (see Fig. \ref{fig:exampleImages}). The dataset consists of videos of celebrities in which they are practically fighting. There are $\sim 68$K images (i.e. frames) and 2,186 shots of 64 celebrities, and all of the video frames are manually annotated to foster research both for detection and recognition of \textit{``faces in the wildest''}. It is especially important from the surveillance perspective to identify the people who are involved in crime scenes, and we believe that the availability of such a dataset of violent faces will spur further research in this direction as well. We provide a detailed discussion of the statistics and the evaluation of state-of-the-art methods on the proposed dataset. We exploit the dataset in the contexts of face detection, image-based face recognition and video-based face recognition. For video face recognition, we also introduce an attention-based temporal pooling technique to aggregate videos in a simple and effective way. Our experimental results demonstrate that such a technique can be preferable amongst others, whilst there is still large room for improvement in this challenging dataset that is likely to facilitate further research.
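Since the attention-based temporal pooling is only described verbally in this section, we sketch the general idea below. This is a generic formulation (a learned scoring vector followed by a softmax over time), intended purely as an illustration; the exact scoring function in our model may differ.
\begin{verbatim}
import numpy as np

def attentive_temporal_pooling(frame_feats, w):
    # frame_feats: (T, D) per-frame descriptors; w: (D,) learned scorer.
    scores = frame_feats @ w     # one relevance score per frame
    scores -= scores.max()       # subtract max for numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum()         # attention weights over time, sum to 1
    return alpha @ frame_feats   # weighted average of frames: (D,)

# Example: pool 30 frames of 4096-d features (e.g. VGG fc7 activations).
rng = np.random.default_rng(0)
feats = rng.normal(size=(30, 4096))
w = rng.normal(size=4096)
clip_descriptor = attentive_temporal_pooling(feats, w)
\end{verbatim}
In contrast to recurrent aggregation, the pooled descriptor is a convex combination of the frame features, so uninformative (e.g. heavily blurred) frames can simply be down-weighted.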
\vspace{-4mm} \section{Discussion on available datasets} \label{related_work} \textbf{Face Detection Datasets:} AFW \cite{zhu2012face} contains background clutter with different face variations, and the associated annotations include bounding box, facial landmarks and pose angle labels. FDDB~\cite{jain2010fddb} is built using Yahoo!, where faces without both eyes in clear sight are discarded, which leads to a rather constrained distribution in terms of pose and occlusion. IJB-A \cite{klare2015pushing} is one of the few datasets that contains annotations for both recognition and detection tasks. MALF \cite{yang2015fine} incorporates rich annotations in the sense that they contain pose, gender and occlusion information as well as expression information with a certain level of granularity. PASCAL Faces \cite{yan2014face} contains images selected from PASCAL VOC \cite{everingham2010pascal}. In AFLW \cite{koestinger11a}, annotations come with rich facial landmark information. WIDER Face \cite{yang2016wider} is one of the largest datasets released for face detection. Collected using categories chosen from LSCOM \cite{naphade2006large}, each annotation is categorized according to its scale, occlusion, pose, overall difficulty and event, which facilitates in-depth analysis. Detailed information on these datasets can be found in Table \ref{detection_datasets}. \begin{table}[] \tiny \resizebox{\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|l|} \hline \textbf{Dataset} & \textbf{\# Images} & \textbf{\# Faces} & \textbf{Source} & \textbf{Type} & \textbf{Public} \\ \hline\hline AFW \cite{zhu2012face} & 205 & 473 & Flickr & Images & Yes \\ \hline FDDB \cite{jain2010fddb} & 2,845 & 5,171 & Yahoo! News & Images & Yes \\ \hline IJB - A \cite{klare2015pushing} & 24,327 & 49,579 & Internet & Images / Videos & Yes \\ \hline MALF \cite{yang2015fine} & 5,250 & 11,931 & Flickr, Baidu Inc. & Images & Yes \\ \hline AFLW \cite{koestinger11a} & 21,997 & 25,993 & Flickr & Images & Yes \\ \hline PASCAL Faces \cite{yan2014face} & 851 & 1,335 & PASCAL VOC & Images & Yes \\ \hline WIDER Face \cite{yang2016wider} & 32,203 & 393,703 & Google, Bing & Images & Yes \\ \hline \textbf{Wildest Faces} & 67,889 & 109,771 & YouTube & Videos & Yes \\ \hline \end{tabular}% } \vspace{-4mm} \caption{Face detection datasets.} \label{detection_datasets} \vspace{-4mm} \end{table} \vspace{2mm} \noindent \textbf{Face Recognition Datasets:} Labeled Faces in the Wild (LFW) \cite{huang2007labeled} is one of the most widely used datasets in the recognition literature. The Viola-Jones detector \cite{viola2001rapid} is used to detect faces during the dataset collection phase, and manual correction of the annotations is then performed. PubFig \cite{kumar2009attribute} is created as a complement to LFW. The faces in this set are the images of public celebrities and are collected using Google and Flickr. Celebrity Faces \cite{sun2013hybrid} is constructed using public figures. In one of the turning points of face recognition, the large-scale VGG Face dataset \cite{parkhi2015deep} is released with the help of automated face detection and a stunning number of 200 human annotators. During its collection phase, care is taken to avoid overlap with the individuals in the LFW and YTF datasets. Recently, this dataset is further expanded in \cite{cao2017vggface2} as VGG Face-2, which is considerably larger than its predecessor. FaceScrub \cite{ng2014data} is another dataset comprised of individuals who are primarily celebrities.
CASIA-WebFace \cite{yi2014learning} is another popular dataset, though the authors note that they cannot be sure that all images are annotated correctly. MS-Celeb-1M \cite{guo2016ms} contains approximately 10 million images of 100,000 individuals, where 1,500 of them are celebrities. In one of the latest benchmarks released publicly, MegaFace \cite{kemelmacher2016megaface} contains a large set of pictures from Flickr with a minimum face size of 50 pixels in both dimensions, where faces are detected using Headhunter \cite{mathias2014face}. The authors of \cite{kemelmacher2016megaface} also presented an improved version of MegaFace, dubbed MF2 \cite{nech2017level}, that builds on its predecessor. Additionally, tech giants have utilized their proprietary datasets in Facebook's DeepFace \cite{taigman2014deepface}, Google's FaceNet \cite{schroff2015facenet} and NTechLab's system\footnote{\url{https://ntechlab.com}}. For video face recognition, YouTube Faces \cite{wolf2011face} uses \cite{viola2001rapid} to automatically detect faces. Each face in the data is centered, expanded with a 2.2 magnification factor, and the annotation size is fixed at 100 pixels in both dimensions. Two other prominent video face recognition datasets are COX \cite{huang2015benchmark} and PasC \cite{beveridge2013challenge}. Despite their relatively large size, PasC \cite{beveridge2013challenge} suffers from video-location constraints and COX \cite{huang2015benchmark} suffers from demographic as well as video-location constraints. Detailed information on these datasets can be found in Table~\ref{recognition_sets}. \vspace{2mm} \noindent\textbf{Limitations of the available datasets:} Except for WIDER Face, the available datasets generally focus on high-resolution and high-quality images. Moreover, several of these datasets filter out low-quality, occluded and blurred images, and thus do not represent what is out there in the real world. Although there are video recognition datasets which inherently consist of motion-blurred or comparably low-quality images (e.g. \cite{wolf2011face}), the majority of the datasets are likely to suffer from the bias of automatically performed face detection. In addition, to the best of our knowledge, none of these datasets primarily focuses on violent scenes, where unconstrained scenarios might actually introduce unconstrained effects. \vspace{-4mm} \begin{table}[t] \tiny \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Dataset} & \textbf{\# Images (or videos)} & \textbf{\# Individuals} & \textbf{Source} & \textbf{Type} \\ \hline\hline \textbf{Wildest Faces} & 2,186 (64,242 frames) & 64 & YouTube & Videos \\ \hline COX \cite{huang2015benchmark} & 3,000 & 1,000 & Custom & Videos \\ \hline PasC \cite{beveridge2013challenge} & 2,802 + 9,376 frames & 293 & Custom & Videos \\ \hline YTF \cite{wolf2011face} & 3,425 & 1,595 & Youtube & Videos \\ \hline \hline LFW \cite{huang2007labeled} & 13,233 & 5,749 & Yahoo! News & Images \\ \hline PubFig \cite{kumar2009attribute} & 60,000 & 200 & Google, Flickr & Images \\ \hline CelebA \cite{yang2015facial} & 202,599 & 10,177 & Google, Bing & Images \\ \hline CelebFaces \cite{sun2013hybrid} & 87,628 & 5,436 & Flickr, Baidu Inc.
& Images \\ \hline VGG Face \cite{parkhi2015deep} & 2.6M & 2,622 & Google, Bing & Images \\ \hline FaceScrub \cite{ng2014data} & 106,863 & 530 & Internet & Images \\ \hline CASIA-WebFace \cite{yi2014learning} & 494,414 & 10,000 & IMDB & Images \\ \hline MegaFace \cite{kemelmacher2016megaface} & 1M & 690,572 & Flickr & Images \\ \hline VGG-2 \cite{cao2017vggface2} & 3.2M & 9,131 & Google & Images \\ \hline MF2 \cite{nech2017level} & 4.7M & 672,000 & Flickr & Images \\ \hline MS-Celeb-1M \cite{guo2016ms} & 10M & 100,000 & Internet, Bing & Images \\ \hline DeepFace \cite{taigman2014deepface}\dag & 4M & 4,000 & Internal & Images \\ \hline FaceNet \cite{schroff2015facenet}\dag & 500M & 8M & Internal & Images \\ \hline NTechLab\dag{} & 18.4M & 200,000 & Internal & Images \\ \hline \end{tabular} } \end{center} \vspace{-2mm} \caption{Face recognition datasets. \dag{} indicates a private dataset. Among the available video face recognition datasets, \textit{Wildest Faces} has the highest video count per individual.} \label{recognition_sets} \vspace{-6mm} \end{table}
\section{Introduction} In a controversial essay, \citet{marcus2017} draws the distinction between two types of generalisation: \emph{interpolation} and \emph{extrapolation}, with the former being predictions made \emph{between} the training data points, and the latter being generalisation \emph{outside} this space. He goes on to claim that deep learning is only effective at interpolation, but that human-like learning and behaviour require extrapolation. On Twitter, Thomas Dietterich rebutted this claim with the response that no methods extrapolate; that \emph{what appears to be extrapolation from X to Y is interpolation in a representation that makes X and Y look the same.}~\footnote{\href{https://twitter.com/tdietterich/status/948811920001282049}{https://twitter.com/tdietterich/\\status/948811920001282049}} It is certainly true that extrapolation is hard, but there appear to be clear real-world examples. For example, in 1705, using Newton's then-new inverse-square law of gravity, Halley predicted the return of a comet 75 years in the future. This prediction was not only possible for a new celestial object for which only a limited amount of data was available, but was also effective on an orbital period twice as long as any of those known to Newton. Pre-Newtonian models required a set of parameters (deferents, epicycles, equants, \etc) for each body and so would struggle to generalise from known objects to new ones. Newton's theory of gravity, in contrast, not only described celestial orbits but also predicted the motion of bodies thrown or dropped on Earth. In fact, most scientists would regard this sort of extrapolation to new phenomena as a vital test of any theory's legitimacy. Thus, the question of what is required for extrapolation is reasonably important for the development of NLP and deep learning. \begin{figure}[t] \centering \resizebox{0.9\columnwidth}{!}{ \begin{tikzpicture} \draw [dotted] plot [smooth, tension=1] coordinates {(0,-0.5) (1,-0.5) (1.5,-0.8) (1,-1.2) (0,-1) (0,-1.5) (0.75,-1.4) (1.05,-1.4) (1.5,-1.6)}; \draw [lightgray,-latex] (1.5,-0.7) -- (1.8,-0.7); \draw [-latex] (1.5,-0.7) -- (1.5,-1.0); \draw [lightgray,-latex] (0.75,-1.4) -- (0.75,-1.7); \draw [-latex] (0.75,-1.4) -- (1.05,-1.4); \end{tikzpicture} }\caption{Generalising to unseen data: dotted line = training manifold; black arrows = interpolation; grey arrows = extrapolation. Both directions are represented globally in the training data, but local interpolation is only effective in one of them at each point. }\label{extrafig} \end{figure} \citet{marcus2017} proposes an experiment, consisting of learning the identity function for binary numbers, where the training set contains only the even integers but at test time the model is required to generalise to odd numbers. A standard multilayer perceptron (MLP) applied to this data fails to learn anything about the least significant bit in the input and output, as it is constant throughout the training set, and therefore fails to generalise to the test set. Many readers of the article ridiculed the task and questioned its relevance. Here, we will argue that it is surprisingly easy to solve Marcus' even-odd task and that the problem it illustrates is actually endemic throughout machine learning. \citet{marcus2017} links his experiment to the systematic ways in which the meaning and use of a word in one context is related to its meaning and use in another~\citep{fodorpylyshyn1988,lakebaroni2017}.
These regularities allow us to extrapolate from sometimes even a single use of a word to understand all of its other uses. In fact, we can often use a symbol effectively with no prior data. For example, a language user that has never encountered the symbol \emph{Socrates} before may nonetheless be able to leverage their syntactic, semantic and inferential skills to conclude that \emph{Socrates is mortal} contradicts \emph{Socrates is not mortal}. Marcus' experiment essentially requires extrapolating what has been learned about one set of symbols to a new symbol in a systematic way. However, this transfer is not facilitated by the techniques usually associated with improving generalisation, such as L2-regularisation \cite{l2reg1963}, drop-out \cite{dropout2014} or preferring flatter optima \cite{flatopt1995}. In the next section, we present four ways to solve this problem and discuss the role of global symmetry in effective extrapolation to the unseen digit. Following that, we present practical examples of global structure in the representation of sentences and words. Global, in these examples, means a model form that introduces dependencies between distant regions of the input space. \section{Four Ways to Learn the Identity Function} The problem is described concretely by \citet{marcus1998}, with inputs and outputs both consisting of five units representing the binary digits of the integers zero to thirty one. The training data consists of the binary digits of the even numbers $(0, 2, 4, 6, \ldots, 30)$ and the test set consists of the odd numbers $(1, 3, 5, 7, \ldots, 31)$. The task is to learn the identity function from the training data in a way that generalises to the test set. The first model (\textsc{slp}) we consider is a simple linear single-layer perceptron from input to output. In the second model (\textsc{flip}), we employ a change of representation. Although the inputs and outputs are given and fixed in terms of the binary digits \textbf{1} and \textbf{0}, we will treat these as symbols and exploit the freedom to encode these into numeric values in the most effective way for the task. Specifically, we will represent the digit \textbf{1} with the number \texttt{0} and the digit \textbf{0} with the number \texttt{1}. Again, the network will be a linear single-layer perceptron without biases. Returning to the original common-sense representation, \textbf{1} $\rightarrow$ \texttt{1} and \textbf{0} $\rightarrow$ \texttt{0}, the third model (\textsc{ortho}) attempts to improve generalisation by imposing a global condition on the weight matrix of the linear layer. In particular, we require that the matrix be orthogonal, and apply the absolute value function at the output to ensure the outputs are not negative. For the fourth model (\textsc{conv}), we use a linear Convolutional Neural Network (ConvNet, \citealp{Lecun98gradient-basedlearning}) with a filter of width five. In other words, the network weights define a single linear function that is shifted across the inputs for each output position. Finally, in our fifth model (\textsc{proj}) we employ another change of representation, this time a dimensionality reduction technique. Specifically, we project the 5-dimensional binary digits $\mathbf{d}$ onto an $n$-dimensional vector $\mathbf{r}$ and carry out the learning using an $n$-to-$n$ layer in this smaller space.
\begin{equation} \label{dimred} \mathbf{r} = \mathbf{A} \mathbf{d} \end{equation} \noindent where the entries of the matrix $\mathbf{A}$ are $A_{ij} = e^{\beta (j - i)}$. In each case, our loss and test evaluation are based on the squared error between target and predicted outputs. \begin{table}[t] \centering {% \begin{tabular}{lcc} \toprule {\bf Model} & {\bf Train} & {\bf Test} \\ \midrule \textsc{slp} & 8.12e-06 & 0.99 \\ \textsc{flip} & 6.79e-05 & 1.04e-05 \\ \textsc{ortho} & 1.27e-04 & 4.09e-05 \\ \textsc{conv} & 1.71e-04 & 3.20e-05 \\ \textsc{proj} & 5.15e-06 & 8.07e-06 \\ \bottomrule \end{tabular} }% \caption{Mean Squared Error on the Train (even numbers) and Test (odd numbers) Sets.}\label{msetab} \end{table} \paragraph{Training.} Each model is implemented in TensorFlow~\citep{tensorflow2015-whitepaper} and optimised for 1,000 epochs. In \cref{dimred}, we find that the values $\beta=\ln(2)$ and $n=1$ work well in practice. \paragraph{Results.} As can be seen in \cref{msetab}, {\textsc{slp}} fails to learn a function that generalises to the test set. In contrast, all the other models (\textsc{flip}, \textsc{ortho}, \textsc{conv}, \textsc{proj}) generalise almost perfectly to the test set. Thus, we are left with four potential approaches to learning the identity function. Is the lowest test set error the most appropriate means of choosing between them? \paragraph{Discussion.} This decision probably isn't as momentous as the choice discussed by Galileo in his \emph{Dialogue Concerning the Two Chief World Systems}, where he presented the arguments for and against the heliocentric and geocentric models of planetary motion. These pre-Newtonian models could, in principle, attain as much predictive accuracy as desired, given enough data, by simply incorporating more epicycles for each planet. On the other hand, they could not extrapolate beyond the bodies in that training data. Here, we will try to extract something useful from our results by considering how each model might generalise to other data and problems. Although {\textsc{flip}} has the second lowest test set error, it is at best a cheap hack\footnote{Nonetheless, such tricks are hardly unknown in machine learning research.} which works only in the limited circumstances of this particular problem. If there were more than a single fixed digit in the training data, this trick would not work. {\textsc{ortho}} suffers from the same problem, though it does embody the principle that everything in the input should end up in the output, which seems to be part of this task. {\textsc{conv}}, on the other hand, will generalise to any size of input and output, and will even generalise to multiplication by powers of 2, rather than just learning the identity function. {\textsc{proj}}, with the values $\beta=\ln(2)$ and $n=1$, boils down to converting the binary digits into the equivalent single real value and learning the identity function via linear regression. This approach will extrapolate to values of any magnitude\footnote{Generalisation to values outside the training set would not be so successful had we used an MLP rather than a uniform linear function. Fitting to the training set using sigmoids will not yield a function that continues to approximate the identity very far beyond its range in the training set.} and generalise to learning any linear function, rather than just the identity. As such, it is probably the only practically sensible solution, although it cheats by avoiding the central difficulty in the original problem.
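The contrast between {\textsc{slp}} and {\textsc{conv}} can also be reproduced in closed form, without gradient-based training. The following sketch fits both models by ordinary least squares (an assumption made for brevity; the experiments above use TensorFlow and 1,000 epochs of optimisation) and recovers the same qualitative outcome.
\begin{verbatim}
import numpy as np

def bits(n, width=5):  # binary digits, most significant first
    return [(n >> k) & 1 for k in range(width - 1, -1, -1)]

X_even = np.array([bits(n) for n in range(0, 32, 2)], dtype=float)
X_odd = np.array([bits(n) for n in range(1, 32, 2)], dtype=float)

# SLP: an unconstrained 5x5 linear map fit on the even numbers.
W, *_ = np.linalg.lstsq(X_even, X_even, rcond=None)
print("slp  test MSE:", np.mean((X_odd @ W - X_odd) ** 2))  # ~0.2

# CONV: a single width-5 filter shared across positions (zero padding);
# one least-squares equation per training example and output digit.
rows, targets = [], []
for x in X_even:
    padded = np.pad(x, 2)
    for j in range(5):
        rows.append(padded[j:j + 5])
        targets.append(x[j])
f, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)

pred = np.array([[np.pad(x, 2)[j:j + 5] @ f for j in range(5)]
                 for x in X_odd])
print("conv test MSE:", np.mean((pred - X_odd) ** 2))  # ~0
\end{verbatim}
The unconstrained map never sees the least significant bit vary and so drops it, whereas the shared filter is pinned down by the other digits and extrapolates to the unseen one.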
At its most general, this central difficulty is the problem of extrapolating in a direction that is perpendicular to the training manifold. The even-number inputs lie on a 4-dimensional subspace, while the odd numbers are displaced in a direction at right angles to that subspace. In this general form, the problem of how to respond to variation in the test set that is perpendicular to the training manifold lacks a well-defined unique solution, and this helps to explain why many people dismissed the task entirely. However, this problem is in fact pervasive throughout machine learning. Training instances will typically lie on a low-dimensional manifold, and effective generalisation to new data sources will commonly require handling variation that is orthogonal to that manifold in an appropriate manner, e.g. \cref{extrafig}. If prediction is based on local interpolation using a highly non-linear function, then no amount of smoothing of the fit will help. Convolution is able to extrapolate from even to odd numbers because it exploits the key structure of the ordering of digits that a human would use. A human, given this task, would recognise the correspondence between input and output positions and then apply the same copying operation at each digit, which is essentially what convolution learns to do. It implicitly assumes that there is a global translational symmetry\footnote{Coincidentally, the rejection of the Earth-centred model in favour of planetary motions orbiting the Sun played an important role in the recognition that the laws of physics also have a global translational symmetry, i.e. that no point in space is privileged or special.} across input positions, and this reduces the number of parameters and allows generalisation from one digit to another. Returning to the linguistic question that inspired the task, we can think of systematicity in terms of symmetries that preserve the meaning of a word or sentence \cite{kiddondomingos2015}. Ideally, our NLP models should embody or learn the symmetries that allow the same meaning to be expressed within multiple grammatical structures. Unfortunately, syntax is complex and prohibits a short and clear investigation here. On the other hand, relations between sentences (e.g.\@\xspace contradiction) sometimes have much simpler symmetries. In the next section, we examine how global symmetries can be exploited in an inference task. \section{Global Symmetries in Natural Language Inference} The Stanford Natural Language Inference (SNLI, \citealp{snli2015}) dataset attempts to provide training and evaluation data for the task of categorising the logical relationship between a pair of sentences. Systems must identify whether each hypothesis stands in a relation of \emph{entailment}, \emph{contradiction} or \emph{neutral} to its corresponding premise. A number of neural net architectures have been proposed that effectively learn to make test set predictions based purely on patterns learned from the training data, without additional knowledge of the real world or of the logical structure of the task. Here, we evaluate the Decomposable Attention Model (DAM, \citealp{dam2016}) in terms of its ability to extrapolate to novel instances, consisting of contradictions from the original test set which have been reversed. For a human who understands the task, such generalisation is obvious: knowing that A contradicts B is equivalent to knowing that B contradicts A.
However, it is not at all clear that a model will learn this symmetry from the SNLI data without it being imposed on the model in some way. Consequently, we also evaluate a modification, S-DAM, where this constraint is enforced by design. \paragraph{Models.} Both models build representations, $\mathbf{v}_p$ and $\mathbf{v}_h$, of the premise and hypothesis in attend and compare steps. The original DAM model then combines these representations by concatenating them, and then transforming and aggregating the result to produce a final representation $\mathbf{u}_{ph}$, forming the input to a 3-way softmax: \begin{equation} \begin{aligned} \mathbf{u}_{ph} & = t( \mathbf{v}_p ; \mathbf{v}_h ), \\ p(i) & = s(\mathbf{u}_{ph} \cdot \mathbf{W}_i), \quad \text{with } i \in \{ c, e, n\}. \\ \end{aligned} \end{equation} In S-DAM, we break the prediction into two decisions: contradiction vs. non-contradiction, followed by entailment vs. neutral. The first decision is symmetrised by concatenating the vectors in both orders and then summing the output of the same transformation applied to both concatenations: \begin{equation} \begin{aligned} \tilde{\mathbf{u}}_{ph} & = t( \mathbf{v}_p ; \mathbf{v}_h ) + t( \mathbf{v}_h ; \mathbf{v}_p ), \\ p(j) & = s(\tilde{\mathbf{u}}_{ph} \cdot \tilde{\mathbf{W}}_j), \quad \text{with } j \in \{ c, \neg c \}. \\ \end{aligned} \end{equation} Predictions for entailment and neutral are then made conditioned on $\neg c$: \begin{equation} \begin{aligned} \bar{\mathbf{u}}_{ph} & = t( \mathbf{v}_p ; \mathbf{v}_h ), \\ p(k|\neg c) & = s(\bar{\mathbf{u}}_{ph} \cdot \bar{\mathbf{W}}_k), \quad \text{with } k \in \{ e, n \}. \\ \end{aligned} \end{equation} \paragraph{Results.} \begin{table}[t] \centering \begin{tabular}{lcc} \toprule {\bf Instances} & {\bf DAM} & {\bf S-DAM} \\ \midrule Whole Test Set & 86.71\% & 85.95\% \\ Contradictions & 85.94\% & 85.69\% \\ Reversed Contradictions & 78.13\% & 85.20\% \\ \bottomrule \end{tabular} \caption{Accuracy on all instances, contradictions and reversed contradictions from the SNLI test set.}\label{snlitab} \end{table} \cref{snlitab} gives the accuracies for both models on the whole SNLI test set, the subset of contradictions, and the same set of contradictions reversed. In the last row, the DAM model suffers a significant fall in performance when the contradictions are reversed. In comparison, the S-DAM's performance is almost identical on both sets. Thus, the S-DAM model extrapolates more effectively because its architecture exploits a global symmetry of the relation between sentences in the task. In the following section, we investigate a global symmetry within the representation of words. \section{Global Structure in Word Embeddings} Word embeddings, such as GloVe~\citep{glove2014} and word2vec~\citep{word2vec2013}, have been enormously effective as input representations for downstream tasks such as question answering or natural language inference. One well-known application is the $king = queen - woman + man$ example, which represents an impressive extrapolation from word co-occurrence statistics to linguistic analogies~\citep{DBLP:conf/conll/LevyG14}. To some extent, we can see this prediction as exploiting a global structure in which the differences between analogical pairs, such as $man-woman$, $king-queen$ and $father-mother$, are approximately equal. Here, we consider how this global structure in the learned embeddings is related to a linearity in the training objective.
In particular, linear functions have the property that $f(a+b) = f(a) + f(b)$, imposing a systematic relation between the predictions we make for $a$, $b$ and $a+b$. In fact, we could think of this as a form of translational symmetry, where adding $a$ to the input has the same effect on the output throughout the space. We hypothesise that breaking this linearity, and allowing a more local fit to the training data, will undermine the global structure that the analogy predictions exploit. \paragraph{Models.} These embedding models typically rely on a simple dot product comparison of target and context vectors as the basis for predicting some measure of co-occurrence $s$: \begin{equation} s = f \left( \sum_i \text{target}_i \cdot \text{context}_i\right). \end{equation} We replace this simple linear function of the context vectors with a set of non-linear broken-stick functions $g_i({}\cdot{})$. \begin{equation*} \begin{aligned} s & = f \left( \sum_i g_i \left( \text{context}_i \right) \right), \\ g_i \left( x \right) & = \begin{cases} m_i x & \text{if } n_i x + c_i < 0, \\ \left(m_i + n_i \right) x + c_i & \text{otherwise.} \end{cases} \end{aligned} \end{equation*} We modify the CBOW algorithm in the publicly available word2vec code to incorporate this non-linearity and train on the commonly used \emph{text8} corpus of 17M words from Wikipedia. As this modification doubles the number of parameters used for each word, we test models of dimensions 100, 200 and 400. \paragraph{Results.} \begin{table}[t] \centering \begin{tabular}{rll} \toprule {\bf D} & {\bf Linear} & {\bf Non-Linear} \\ \midrule 100 & 50.38\% & 42.96\% \\ 200 & 53.18\% & 40.66\% \\ 400 & 50.77\% & 32.43\% \\ \bottomrule \end{tabular} \caption{Accuracy on the analogy task.}\label{msetab2} \end{table} \cref{msetab2} reports the performance on the standard analogy task distributed with the word2vec code. The non-linear modification of CBOW is substantially less successful than the original linear version on this task. This is true for all the model sizes we evaluated, indicating that this decrease is not simply a result of over-parameterisation. Thus, destroying the global linearity in the embedding model undermines extrapolation to the analogy task. \section{Conclusions} Language is a very complex phenomenon, and many of its quirks and idioms need to be treated as local phenomena. However, we have also shown here examples in the representation of words and sentences where global structure supports extrapolation outside the training data. One tool for thinking about this dichotomy is the \emph{equivalent kernel} \cite{silverman1984}, which measures the extent to which a given prediction is influenced by nearby training examples. Typically, models with highly local equivalent kernels (e.g. splines, sigmoids and random forests) are preferred over non-local models (e.g. polynomials) in the context of general curve fitting \cite{hastieetal2001}. However, these latter functions are also typically those used to express fundamental scientific laws (e.g. $E=mc^2$, $F=G\frac{m_1 m_2}{r^2}$), which frequently support extrapolation outside the original data from which they were derived. Local models, by their very nature, are less suited to making predictions outside the training manifold, as the influence of those training instances attenuates quickly. We suggest that NLP will benefit from incorporating more global structure into its models.
Existing background knowledge is one possible source for such additional structure \cite{marcus2018,minervinietal2017}. But it will also be necessary to uncover novel global relations, following the example of the other natural sciences. Throughout our discussion, we have used the development of the scientific understanding of planetary motion as a recurring example of the possibility of uncovering global structures that support extrapolation. Kepler and Newton found laws that went beyond simply maximising the fit to the known set of planetary bodies to describe regularities that held for every body, terrestrial and heavenly. In our SNLI example, we showed that simply maximising the fit on the development and test sets does not yield a model that extrapolates to reversed contradictions. In the case of word2vec, we showed that performance on the analogy task was related to the linearity in the objective function. More generally, we want to draw attention to the need for models in NLP that make meaningful predictions outside the space of the training data, and to argue that such extrapolation requires distinct modelling techniques from interpolation within the training space. Specifically, whereas the latter can often effectively rely on local smoothing between training instances, the former may require models that exploit global structures of the language phenomena. \section*{Acknowledgments} The authors are immensely grateful to Ivan Sanchez Carmona for many fruitful disagreements. This work has been supported by the European Union H2020 project SUMMA (grant No. 688139), and by an Allen Distinguished Investigator Award. \bibliographystyle{acl_natbib}
\section{Notation} We use the notation $\rho=\beta+i \gamma$ for the non-trivial zeros of the zeta function. Following Riemann, we define $\alpha=-i (\rho-\frac12)$. Observe that $\rho=1/2+i \alpha$ with ${\rm Re}(\alpha)=\gamma$ and ${\rm Im}(\alpha)=1/2-\beta$, and that the Riemann Hypothesis is the statement $\alpha={\rm Re}(\alpha)$. It is known that $0<\beta<1$ (critical band), and therefore $-1/2<{\rm Im}(\alpha)<1/2$. If we let $\mu=\frac12-\beta$, then $\alpha=\gamma+i \mu$ and $-1/2 < \mu < 1/2$. This notation simplifies the appearance of our formulas. As usual in Number Theory, $\log$ denotes the natural (Napierian) logarithm. \section{Introduction} In $1911$ E. Landau proved that for any fixed $t>1$ \begin{equation}\label{landau1} \sum_{0<\gamma\leq T} t^{\rho} = \frac{-T}{2\pi} \Lambda(t) + \mathcal{O}(\log T), \end{equation} where $\rho$ runs over the non-trivial zeros of the Riemann zeta function $\zeta(s)$ and $\Lambda(t)$ is the Mangoldt function, which is equal to $\log p$ if $t$ is a power of a prime number $p$ and $0$ otherwise. Since the use of (\ref{landau1}) is limited by its lack of uniformity in $t$, Gonek was interested in a version of it uniform in both variables, and in \cite{Gonek1,Gonek2} he gives the remarkable formula \[ \sum_{0<\gamma\leq T} t^{\rho} = \frac{-T}{2\pi} \Lambda(t) + E(t,T), \] where the error term $E(t,T)$ has the estimate \[ E(t,T)=\mathcal{O} \left(t\log 2tT \log \log 3t \right)+\mathcal{O} \left(\log t \, {\rm min} ( T ; \frac{t}{\langle t \rangle} ) \right) + \mathcal{O} \left(\log 2T \, {\rm min} (T ; \frac{1}{\log t} ) \right), \] with $\langle t \rangle$ denoting the distance between $t$ and the nearest prime power other than $t$. Gonek's formula is also discussed in \cite{Kalape}. The aim of this paper is to approximate $\Lambda(t)$ accurately. Of course, we can do this with the Landau-Gonek formula: \begin{equation}\label{Landau-Gonek} \Lambda(t)=\frac{-2\pi}{T} \sqrt{t} \sum_{0< \gamma \leq T} \cos(\alpha \log t) +\frac{E(t,T)}{T}, \end{equation} where we have used Riemann's notation $\alpha=-i (\rho-1/2)$. Observe that either of the formulas (\ref{landau1}) and (\ref{Landau-Gonek}) implies \begin{equation}\label{Landau-limit} \Lambda(t)=-2\pi \sqrt{t} \lim_{T \to +\infty} \frac{1}{T} \sum_{0< \gamma \leq T} \cos(\alpha \log t), \end{equation} which has the surprising property that, neglecting a finite number of zeros of zeta, we still recover the Mangoldt function. Also surprising are the self-replicating property of the zeros of zeta, observed recently in the statistics of \cite{perez-marco} and later proved in \cite{ford-zaha}, and the property of the zeros discovered by Y. Matiyasevich \cite{Matiya}. In this paper we will prove the new formula \[ \Lambda(t) = -4 \pi \sqrt{t} \cot \frac{x}{2} \sum_{\gamma >0} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t)+2\pi \cot \frac{x}{2} \left( t-\frac{1}{t^2-1} \right)+\varepsilon(t,x), \] and find bounds for the error term $\varepsilon(t,x)$. In addition, letting $\cot(x/2)=(\log T)/T$, we will prove that for integers $t>2$ the following truncated version of it holds: \begin{align*} \Lambda(t) &= -4 \pi \sqrt{t} \left( \sum_{0< \gamma < T} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right) \frac{\log T}{T} + 2\pi \left( t-\frac{1}{t^2-1} \right) \frac{\log T}{T} \\ &+ \mathcal{O} \left( t^2 (\log t) \frac{\log^2 T}{T^2} \right), \end{align*} and we will also estimate the error for non-integer $t$.
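The truncated formula lends itself to a direct numerical check. The sketch below uses Python's mpmath library, whose \texttt{zetazero} routine computes the zeros with ordinate below $T$ (all zeros in the verified range lie on the critical line, so we take $\alpha=\gamma$ there). At such a modest height $T$ the error term is not negligible, so only rough agreement with $\Lambda(t)$ should be expected.
\begin{verbatim}
from mpmath import mp, zetazero, sinh, cos, log, pi, sqrt, atan

mp.dps = 25
T = 120                    # truncation height
x = 2 * atan(T / log(T))   # so that cot(x/2) = (log T)/T

gammas, n = [], 1          # ordinates of the zeros with 0 < gamma < T
while True:
    g = zetazero(n).imag
    if g >= T:
        break
    gammas.append(g)
    n += 1

def lambda_approx(t):
    s = sum(sinh(x * g) / sinh(pi * g) * cos(g * log(t)) for g in gammas)
    lt = log(T) / T
    return -4 * pi * sqrt(t) * s * lt + 2 * pi * (t - 1 / (t**2 - 1)) * lt

for t in (3, 4, 5, 6, 7):  # Lambda(t) = log 3, log 2, log 5, 0, log 7
    print(t, lambda_approx(t))
\end{verbatim}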
Finally, observing that \[ \Lambda(t)=-4 \pi \sqrt{t} \lim_{x \to \pi^{-}} \left( \cot \frac{x}{2} \sum_{\gamma>0} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right), \] we see that it shares with (\ref{Landau-limit}) the property of invariance when we neglect a finite number of zeros. In the last section we give the new function \[ \Phi_2(t)=-\sum_{m=1}^{T} T^{-m/T} \frac{\Lambda(m)}{\sqrt{m}} \cos(t \log m) + C \, \sqrt{t}, \] where $C \approx 0.12$ is a constant. This function has cusps at the non-trivial zeros of zeta. It appears to be an interesting function, and we will continue to investigate it. \section{Series involving the Mangoldt function} The formulas that we prove in this section involve the Mangoldt function and sums over the non-trivial zeros of the Riemann zeta function. \begin{theorem}\label{main-thm} Let $\Omega = \mathbb{C}-(-\infty,0]$ (the plane with a cut along the negative real axis). We shall denote by $\log z$ the principal branch of the logarithm defined on $\Omega$, taking $|\arg(z)|<\pi$. We also denote by $z^s=\exp(s\log(z))$ the usual branch of $z^s$, defined also on $\Omega$. For all $z \in \Omega$ we have \begin{equation}\label{for-main-thm} \sum_{n=1}^{\infty} \frac{\Lambda(n)z}{\pi \sqrt{n} (z+n)}-\sum_{n=1}^{\infty} \frac{\Lambda(n)}{\pi \sqrt{n} (1+nz)}=\sqrt{z}-\frac{\zeta'(\frac12)}{\pi \zeta (\frac12)}-2\sum_{\gamma>0} \frac{\sin (\alpha \log z) }{\sinh \pi \alpha} + h(z), \end{equation} where \begin{equation}\label{h-of-z} h(z)=\frac{1}{\sqrt{z}(z^2-1)}-\frac{1}{2z-2}+\frac{\log(8\pi)+C}{\pi} \frac{1}{z+1}-\frac{2}{\pi} \frac{\sqrt{z}}{z+1}\arctan \frac{1}{\sqrt{z}}. \end{equation} \end{theorem} \begin{proof} We consider the function \[ f(s)=\frac{\zeta'(s+\frac12)}{\zeta(s+\frac12)} \frac{\pi}{\sin \pi s} z^s, \] and let $I_0$, $I_{r}$, $I_{\ell}$, where $I_{r}=I_1+I_2+I_3$ and $I_{\ell}=I_4+I_5+I_6$, be the analytic continuations of the integral \[ I=\frac{1}{2\pi i} \int f(s) ds, \] along the indicated sides of the contour in the figure. It is a known result that all the zeros of $\zeta(s+1/2)$ lie in the band between the red and green lines.
\begin{center} \begin{tikzpicture} \draw[thick,color=blue] (1,0) rectangle (11,5); \filldraw[color=black,fill=black] (7.25,2.5) circle (0.05); \filldraw[color=black,fill=black] (1,0) circle (0.05); \filldraw[color=black,fill=black] (1,5) circle (0.05); \filldraw[color=black,fill=black] (11,0) circle (0.05); \filldraw[color=black,fill=black] (11,5) circle (0.05); \filldraw[color=black,fill=black] (6.5,0) circle (0.05); \filldraw[color=black,fill=black] (6.5,5) circle (0.05); \draw[red, thick] (6.5,0) -- (6.5,5); \draw[->,red, very thick] (6.5,2.5) -- (6.5,2.5); \draw[->,red, very thick] (3.74,0) -- (3.75,0); \draw[->,red, very thick] (3.75,5) -- (3.74,5); \draw[->,red, very thick] (1,2.6) -- (1,2.5); \draw[->,red, very thick] (8.75,5) -- (8.76,5); \draw[->,red, very thick] (8.75,0) -- (8.74,0); \draw[->,red, very thick] (11,2.6) -- (11,2.5); \draw[green,thick] (8,0) -- (8,5); \node (n1) at (1,-0.5) {$-\infty-iT$}; \node (n2) at (1,5.5) {$-\infty +iT$}; \node (n3) at (11,-0.5) {$+\infty-iT$}; \node (n4) at (11,5.5) {$+\infty +iT$}; \node (n5) at (6.5,-0.5) {$-\frac12-iT$}; \node (n6) at (6.5,5.5) {$-\frac12+iT$}; \node (n7) at (9,2.5) {$|z|<1$}; \node (n8) at (4,2.5) {$|z|>1$}; \node (ne1) at (6.35, 1.5) {{\tiny Integrals extended to all $z$ by analytic continuation}}; \node (n9) at (8.75,-0.5) {$I_3$}; \node (n10) at (8.75,5.5) {$I_1$}; \node (n11) at (3.75,-0.5) {$I_6$}; \node (n12) at (3.75,5.5) {$I_4$}; \node (n13) at (11.4,2.5) {$I_2$}; \node (n14) at (6.1,2.5) {$I_0$}; \node (n15) at (0.65,2.5) {$I_5$}; \end{tikzpicture} \end{center} We follow this scheme of proof: the integral along the line $\sigma=-1/2$ is calculated for $|z|>1$ by integrating to the left and for $|z|<1$ by integrating to the right. The two expressions are different, but both are valid for $z \in \Omega$ by analytic continuation. Finally, equating both expressions we arrive at (\ref{for-main-thm}). \par Indeed, if $|z|<1$, integrating to the right-hand side, we get by applying the residue theorem that \begin{align} & I_0+ I_{r}=-{\rm res}_{s=\frac12} \left( \frac{\zeta'(s+\frac12)}{\zeta(s+\frac12)} \frac{\pi}{\sin \pi s} z^s \right)-\sum_{n=0}^{\infty} {\rm res}_{s=n} \left( \frac{\zeta'(s+\frac12)}{\zeta(s+\frac12)} \frac{\pi}{\sin \pi s} z^s \right) - \nonumber \\ & \sum_{|\gamma|<T} {\rm res}_{s=\rho-\frac12} \left( \frac{\zeta'(s+\frac12)}{\zeta(s+\frac12)} \frac{\pi}{\sin \pi s} z^s \right) \!=\! \pi \sqrt{z}-\sum_{n=0}^{\infty} (-1)^n \frac{\zeta'(n+\frac12)}{\zeta(n+\frac12)}z^n-\pi\sum_{|\gamma| < T} \frac{z^{\rho-\frac12}}{\sin \pi (\rho-\frac12)}.
\nonumber \end{align} Hence, by analytic continuation, we have that for all $z \in \Omega$ \begin{equation}\label{int-right} I_0+I_{r} = \pi \sqrt{z}-\frac{\zeta'(\frac12)}{\zeta(\frac12)}-\sum_{n=1}^{\infty} \frac{\Lambda(n) z}{\sqrt{n}(z+n)} - \pi \sum_{|\gamma| < T} \frac{z^{\rho-\frac12}}{\sin \pi (\rho-\frac12)}. \end{equation} If $|z|>1$, then integrating to the left-hand side, we deduce that \begin{align} I_0+I_{\ell} &= \sum_{n=1}^{\infty} {\rm res}_{s=-2n-\frac12} \left( \frac{\zeta'(s+\frac12)}{\zeta(s+\frac12)} \frac{\pi}{\sin \pi s} z^s \right) + \sum_{n=1}^{\infty} {\rm res}_{s=-n} \left( \frac{\zeta'(s+\frac12)}{\zeta(s+\frac12)} \frac{\pi}{\sin \pi s} z^s \right) \nonumber \\ &= \sum_{n=1}^{\infty} \frac{\zeta'(-2n)}{\zeta(-2n)} \sin(2 \pi n) z^{-2n-\frac12}+\sum_{n=1}^{\infty} (-1)^n \frac{\zeta'(\frac12-n)}{\zeta(\frac12-n)}z^{-n}, \label{lastsums} \end{align} where we understand the expression inside the first sum of (\ref{lastsums}) as a limit, based on the identity \[ \lim_{s \to -2n} (s+2n) \frac{\zeta'(s)}{\zeta(s)} = \lim_{s \to -2n} \frac{\zeta'(s)}{\zeta(s)} \frac{\pi (s+2n)}{\sin \pi s} \frac{\sin \pi s}{\pi} = \lim_{s \to -2n} \frac{\sin{\pi s}}{\pi} \frac{\zeta'(s)}{\zeta(s)}. \] We use the functional equation (which follows easily from the functional equation of $\zeta(s)$) \begin{equation}\label{zpz} \frac{\zeta'(1-s)}{\zeta(1-s)}=\log 2\pi-\psi(s)+\frac{\pi}{2}\tan \frac{\pi s}{2}-\frac{\zeta'(s)}{\zeta(s)} \end{equation} to simplify the sums in (\ref{lastsums}). For the first sum in (\ref{lastsums}), we obtain \[ \sum_{n=1}^{\infty} \frac{\zeta'(-2n)}{\zeta(-2n)} \sin(2\pi n) z^{-2n-\frac12}=\pi \sum_{n=1}^{\infty} z^{-2n-\frac12}, \] and for the last sum in (\ref{lastsums}), we have \[ \log 2 \pi \sum_{n=1}^{\infty} (-1)^n z^{-n}-\sum_{n=1}^{\infty} (-1)^n \psi \left(\frac12+n \right)z^{-n}+\frac{\pi}{2}\sum_{n=1}^{\infty} z^{-n}-\sum_{n=1}^{\infty} (-1)^n \frac{\zeta'(n+\frac12)}{\zeta(n+\frac12)}z^{-n}, \] where $\psi$ is the digamma function, which satisfies the property \[ \psi \left(\frac12+n \right) = 2 h_n - C - 2 \log 2, \qquad h_n = \sum_{j=1}^n \frac{1}{2j-1}. \] Using the identity due to Hongwei Chen \cite[p.299, exercise 34]{bailey-et-al}, \[ 2 \sum_{n=1}^{\infty} (-1)^n h_n z^{-n} = i \frac{\sqrt{z}}{z+1} \log \frac{\sqrt{z}+i}{\sqrt{z}-i}=-2 \frac{\sqrt{z}}{z+1} \arctan \frac{1}{\sqrt{z}}, \] we get that for $|z|>1$ \[ I_0+I_{\ell} = - \frac{\pi}{\sqrt{z}(z^2-1)}-\frac{\log 2 \pi}{z+1}-\frac{C+\log 4}{z+1}+\frac{\pi}{2z-2} +2 \frac{\sqrt{z}}{z+1} \arctan \frac{1}{\sqrt{z}}-\sum_{n=1}^{\infty} (-1)^n \frac{\zeta'(n+\frac12)}{\zeta(n+\frac12)}z^{-n}. \] Then, by analytic continuation, we obtain that for all $z \in \Omega$: \begin{equation}\label{int-left} I_0+I_{\ell} =-\frac{\pi}{\sqrt{z}(z^2-1)}-\frac{C+\log 8\pi}{z+1}+\frac{\pi}{2z-2} + 2 \frac{\sqrt{z}}{z+1} \arctan \frac{1}{\sqrt{z}}-\sum_{n=1}^{\infty} \frac{\Lambda(n)}{\sqrt{n}(1+zn)}. \end{equation} It is easy to deduce that $I_2=I_5=0$, and we will prove in Section \ref{sec-bounds} of this paper that $I_{r}$ and $I_{\ell}$ tend to $0$ as $T \to \infty$. Hence, by equating (\ref{int-right}) with (\ref{int-left}), and observing that the pole at $z=1$ is removable, we complete the proof.
\end{proof} \begin{theorem} The following identity \begin{equation}\label{main2-thm} \sum_{\gamma>0} \frac{\sinh z \alpha}{\sinh \pi \alpha} - \sum_{n=1}^{\infty} \frac{\Lambda(n)}{2\pi \sqrt{n}} \left(\frac{i e^{iz}}{e^{iz}+n}-\frac{i e^{-iz}}{e^{-iz}+n} \right) = f(z), \end{equation} where \[ f(z)=\sin \frac{z}{2} - \frac{1}{8}\tan \frac{z}{4} -\frac{C+\log 8\pi}{4\pi}\tan \frac{z}{2} - \frac{1}{4\pi \cos \frac{z}{2}} \log \frac{1-\tan \frac{z}{4}}{1+\tan \frac{z}{4}}, \] holds for $|{\rm Re}(z)|<\pi$. \end{theorem} \begin{proof} Let \[ H(z)=\sqrt{z}-\frac{\zeta'(\frac12)}{\pi \zeta(\frac12)}+h(z). \] That is, \[ H(z)=\sqrt{z}-\frac{\zeta'(\frac12)}{\pi \zeta(\frac12)}+\frac{1}{\sqrt{z}(z^2-1)}-\frac{1}{2z-2}+\frac{\log(8\pi)+C}{\pi} \frac{1}{z+1}+\frac{i}{\pi} \frac{\sqrt{z}}{z+1}\log \frac{\sqrt{z}+i}{\sqrt{z}-i}. \] From (\ref{for-main-thm}), we see that the function $H(z)$ has the property $H(z)=-H(z^{-1})$. Hence \begin{align} H(z)=\frac{H(z)-H(z^{-1})}{2} =& \frac12 \left( \sqrt{z}-\frac{1}{\sqrt{z}} \right) + \frac12 \frac{1}{z^2-1} \left( \frac{1}{\sqrt{z}}+z^2 \sqrt{z} \right) - \frac14 \frac{z+1}{z-1} \nonumber \\ &- \frac{\log 8 \pi +C}{2\pi} \frac{z-1}{z+1} + \frac{i}{\pi} \frac{\sqrt{z}}{z+1} \log \frac{i\sqrt{z}-1}{i\sqrt{z}+1}+\frac12 \frac{\sqrt{z}}{z+1}. \label{H-of-z} \end{align} When $|{\rm Re}(z)|<\pi$ we have $e^{iz} \in \Omega$, so we may substitute $e^{iz}$ for $z$ in Theorem \ref{main-thm}. If, in addition, we multiply by $-i/2$, we get \[ \sum_{\gamma>0} \frac{\sinh z \alpha}{\sinh \pi \alpha} - \sum_{n=1}^{\infty} \frac{\Lambda(n)}{2 \pi \sqrt{n}} \left(\frac{i e^{iz}}{e^{iz}+n}-\frac{i e^{-iz}}{e^{-iz}+n} \right)=-\frac{i}{2} H(e^{iz}). \] From (\ref{H-of-z}), we have \begin{align} -\frac{i}{2}H(e^{iz}) =& -\frac{i}{4} \left( e^{iz/2}-e^{-iz/2} \right) -\frac{i}{4} \left( \frac{e^{-iz/2}}{e^{2iz}-1}-\frac{e^{iz/2}}{e^{-2iz}-1} \right) +\frac{i}{8} \frac{e^{iz}+1}{e^{iz}-1} \nonumber \\ &+\frac{i}{4} \frac{\log 8\pi + C}{\pi} \frac{e^{iz}-1}{e^{iz}+1}+ \frac{1}{2\pi} \frac{e^{iz/2}}{e^{iz}+1} \log \frac{e^{i\frac{z+\pi}{2}}-1}{e^{i \frac{z+\pi}{2}}+1} -\frac{i}{4} \frac{e^{iz/2}}{e^{iz}+1}, \nonumber \end{align} which we can write as \begin{align} -\frac{i}{2}H(e^{i z}) =& -\frac{i}{4} \left( e^{i z/2}-e^{-i z/2} \right) -\frac{i}{4} \frac{e^{3i z/2}+e^{-3i z/2}}{e^{i z}-e^{-i z}} \nonumber \\ &+\frac{i}{8} \frac{e^{i z/2}+e^{-i z/2}}{e^{i z/2}-e^{-i z/2}} +\frac{i}{4} \frac{\log 8\pi + C}{\pi} \frac{e^{i z/2}-e^{-i z/2}}{e^{i z/2}+e^{-i z/2}} \nonumber \\ &+\frac{1}{2\pi} \frac{1}{e^{i z/2}+e^{-iz/2}} \log \frac{e^{i\frac{z+\pi}{4}}-e^{-i\frac{z+\pi}{4}}}{e^{i \frac{z+\pi}{4}}+e^{-i\frac{z+\pi}{4}}}-\frac{i}{4} \frac{1}{e^{iz/2}+e^{-iz/2}}, \nonumber \end{align} which simplifies to \begin{align} -\frac{i}{2}H(e^{iz}) =& \frac12 \sin \frac{z}{2}-\frac14 \frac{\cos \left(z+\frac{z}{2}\right)}{\sin z}+\frac18 \cot \frac{z}{2}-\frac{\log 8\pi + C}{4\pi} \tan \frac{z}{2} \nonumber \\ &+ \frac{1}{4\pi} \frac{1}{\cos \frac{z}{2}} \log \left(i \tan \frac{z+\pi}{4} \right)-\frac{i}{8} \frac{1}{\cos \frac{z}{2}}. \nonumber \end{align} Since \[ \frac{1}{4\pi} \frac{1}{\cos \frac{z}{2}} \log \left(i \tan \frac{z+\pi}{4} \right)-\frac{i}{8} \frac{1}{\cos \frac{z}{2}} = \frac{1}{4\pi} \frac{1}{\cos \frac{z}{2}} \log \tan \frac{z+\pi}{4}, \] using elementary trigonometric formulas we arrive at (\ref{main2-thm}).
\end{proof} \section{New formulas for the Mangoldt function} In this section we relate the Mangoldt function to a sum over all the non-trivial zeros of the Riemann zeta function and find bounds for the error term. \begin{theorem}\label{theor-mang} If $x \in [0, \pi)$ and $t>1$, then \begin{align*} & - 4\pi \sqrt{t} \cot\frac{x}{2} \sum_{\gamma>0} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) + 4\pi \sqrt{t} \, g(x,t) \cot \frac{x}{2} \\ & = 4 t \sqrt{t} \sum_{n=1}^{\infty} \frac{\Lambda(n) \sqrt{n} \, \cos^2 \frac{x}{2}}{(t-n)^2 + 4nt \cos^2 \frac{x}{2}} - 4 t \sqrt{t} \sum_{n=1}^{\infty} \frac{\Lambda(n) \sqrt{n} \, \cos^2 \frac{x}{2}}{(nt-1)^2 + 4nt \cos^2 \frac{x}{2}}, \end{align*} where $g(x,t)$ is the function \begin{multline}\label{fun-g} g(x,t)=\frac{(1+t)\sin\frac{x}{2}}{2\sqrt{t}}-\frac{\sqrt{t}\sin \frac{x}{2}}{8\sqrt{t}\cos \frac{x}{2}+4(1+t)}-\frac{t\sin x (C+\log 8\pi)}{2\pi(1+t^2+2t \cos x)} - \\ \frac{(1+t)\sqrt{t} \cos \frac{x}{2}}{4\pi(1+t^2+2t \cos x)} \, \log \frac{1+t-2\sqrt{t}\sin \frac{x}{2}}{1+t+2\sqrt{t}\sin \frac{x}{2}} - \frac{(t-1)\sqrt{t} \sin \frac{x}{2}}{2\pi(1+t^2+2t \cos x)} \arctan \frac{t-1}{2\sqrt{t}\cos\frac{x}{2}}. \end{multline} \end{theorem} \begin{proof} Replace $z$ with $x-i\log t$ and take real parts. The function $g(x,t)$ is the real part of $f(x-i\log t)$. \end{proof} It is interesting to expand $g(x,t)$ in powers of $\pi-x$; we get \begin{align}\label{simply-g} g(x,t) &=\frac12 \left(\frac{t+1}{t} - \frac{t}{t^2-1} \right) \sqrt{t} \nonumber \\ &+ \left( \frac{-t}{4(1+t)^2} + \frac{t(C+\log 8\pi)}{2\pi(t-1)^2} - \frac{t}{2\pi(t-1)^2}+ \frac{(1+t)\sqrt{t}}{4\pi(t-1)^2} \log \frac{\sqrt{t}-1}{\sqrt{t}+1} \right)(\pi-x) \nonumber \\ & \qquad +\mathcal{O}(\pi-x)^2=\frac12 \left(\frac{t+1}{t} - \frac{t}{t^2-1} \right) \sqrt{t}+ \mathcal{O}\left( \frac{\pi-x}{t} \right), \end{align} which shows that $g(x,t)$ tends to a simple function as $x \to \pi^{-}$. \begin{theorem}\label{mang-teor} If $x \in [0, \pi)$, then \begin{align}\label{mang-bound} 0 & < \left( -4\pi \sqrt{t} \cot\frac{x}{2} \sum_{\gamma>0} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) + 4\pi \sqrt{t} \, g(x,t) \cot \frac{x}{2} \right) - \Lambda(t) \nonumber \\ & < F(t) + 4 \cos^2 \frac{x}{2} \left(3 t^2\log t + \frac{\pi^2}{2} t + \frac14 \log t + 0.6 \right), \end{align} where $g(x,t)$ is the function (\ref{fun-g}), and \[ F(t) = E(t) \cdot 4 t \sqrt{t} \frac{\Lambda(\lfloor t \rfloor) \sqrt{\lfloor t \rfloor} \cos^2 \frac{x}{2} }{ (\{t\})^2 + 4 t \lfloor t \rfloor \cos^2 \frac{x}{2}} + 4 t \sqrt{t} \frac{\Lambda(\lfloor t \rfloor + 1) \sqrt{\lfloor t \rfloor + 1} \cos^2 \frac{x}{2} }{ (1-\{t\})^2 + 4 t (\lfloor t \rfloor + 1) \cos^2 \frac{x}{2}}, \] where $E(t)=0$ if $t$ is an integer and $1$ otherwise. \end{theorem} \begin{proof} Let \[ S=4 t \sqrt{t} \sum_{n=1}^{\infty} \frac{\Lambda(n) \sqrt{n} \, \cos^2 \frac{x}{2}}{(t-n)^2 + 4nt \cos^2 \frac{x}{2}} - 4 t \sqrt{t} \sum_{n=1}^{\infty} \frac{\Lambda(n) \sqrt{n} \, \cos^2 \frac{x}{2}}{(nt-1)^2 + 4nt \cos^2 \frac{x}{2}}. \] First, we see that \[ S < 4 t \sqrt{t} \sum_{n=1}^{\infty} \frac{\Lambda(n) \sqrt{n} \, \cos^2 \frac{x}{2}}{(t-n)^2 + 4nt \cos^2 \frac{x}{2}}. \] The contribution of the values $n=\lfloor t \rfloor$ and $n=\lfloor t \rfloor +1$ to the above summation is equal to $\Lambda(t)+F(t)$, and the contribution of $n=\lfloor t \rfloor - 1$ is bounded by $4 t^2 \log t \, \cos^2 \frac{x}{2}$.
Hence \[ S < \Lambda(t) + F(t) + 4 \left(t^2 \log t + t \sqrt{t} \sum_{n=2}^{\lfloor t \rfloor -2} \frac{\Lambda(n) \sqrt{n}}{(t-n)^2} + t \sqrt{t} \sum_{n=\lfloor t \rfloor +2}^{\infty} \frac{\Lambda(n) \sqrt{n}}{(t-n)^2} \right) \cos^2 \frac{x}{2}. \] Then, as the Mangoldt function is bounded by the logarithm, we obtain \[ S < \Lambda(t) + F(t) + 4 \left(t^2 \log t + t \sqrt{t} \sum_{n=1}^{\lfloor t \rfloor -2} \frac{\log(n) \sqrt{n}}{(t-n)^2} + t \sqrt{t} \sum_{n=\lfloor t \rfloor +2}^{\infty} \frac{\log(n) \sqrt{n}}{(t-n)^2} \right) \cos^2 \frac{x}{2}. \] Then we can deduce that \[ S < \Lambda(t) + F(t) + 4 \left(t^2 \log t + t \sqrt{t} \int_{1}^{t-1} \frac{\log(u)\sqrt{u}}{(t-u)^2} \, du + t \sqrt{t} \int_{t+1}^{\infty} \frac{\log(u)\sqrt{u} }{(t-u)^2}\, du \right) \cos^2 \frac{x}{2}, \] by observing that the integrands are increasing and decreasing functions of $u$, respectively. With the help of Maple, we get \begin{align} \sqrt{t} & \int_{1}^{t-1} \frac{\log(u)\sqrt{u}}{(t-u)^2} \, du = \sqrt{t} \, \sqrt{t-1} \log(t-1) + \frac12 \log(t-1) \log \frac{\sqrt{t}-\sqrt{t-1}}{\sqrt{t}+\sqrt{t-1}} \nonumber \\ & \, + \log \left( 1+\frac{1}{\sqrt{t}} \right)- \log \left( 1-\frac{1}{\sqrt{t}} \right) + \log \left( 1-\sqrt{1-\frac{1}{t}} \right) - \log \left( 1+\sqrt{1-\frac{1}{t}} \right) \label{integral-1} \\ & \, + {\rm dilog} \left( 1+\frac{1}{\sqrt{t}} \right) - {\rm dilog} \left( 1- \frac{1}{\sqrt{t}}\right)+ {\rm dilog} \left( 1-\sqrt{1-\frac{1}{t}} \right) - {\rm dilog} \left( 1 + \sqrt{1-\frac{1}{t}} \right), \nonumber \end{align} and \begin{align} \sqrt{t} & \int_{t+1}^{\infty} \frac{\log(u)\sqrt{u} }{(t-u)^2}\, du = \sqrt{t} \, \sqrt{t+1} \log(t+1) + \frac14 \log^2 t - \frac14 \log t \log(t+1) + \frac{\pi^2}{3} \nonumber \\ & \, + \frac12 \log(t+1) \log(\sqrt{t}+\sqrt{t+1}) - \frac12 (\log t) \log(\sqrt{t+1}-\sqrt{t}) + \log \frac{\sqrt{t+1}+\sqrt{t}}{\sqrt{t+1}-\sqrt{t}} \label{integral-2} \\ & \, + {\rm dilog} \left( 1+\sqrt{1+\frac{1}{t}} \right) + {\rm dilog} \left( \sqrt{1+\frac{1}{t}} \right), \nonumber \end{align} where $\rm dilog$ denotes the dilogarithm. Finally, by expanding asymptotically and bounding each of the terms of (\ref{integral-1}) and (\ref{integral-2}), we can derive that \[ t \sqrt{t} \int_{1}^{t-1} \frac{\log(u)\sqrt{u}}{(t-u)^2} \, du + t \sqrt{t} \int_{t+1}^{\infty} \frac{\log(u)\sqrt{u} }{(t-u)^2}\, du = 2t^2\log t +\frac{\pi^2}{2}t +\frac14 \log t + h(t), \] where $h(t)$ is a positive decreasing function. Therefore $h(t)<h(2)<0.6$ for $t > 2$. \end{proof} \begin{corollary}\label{mang-cotas} If $x \in [0, \pi)$ and $t \geq 2$ is an integer, then \begin{align}\label{mang-integer} 0 & < \left( -4\pi \sqrt{t} \cot\frac{x}{2} \sum_{\gamma>0} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) + 4\pi \sqrt{t} \, g(x,t) \cot \frac{x}{2} \right) - \Lambda(t) \nonumber \\ & < 4 \cos^2 \frac{x}{2} \left(4 t^2\log t + \frac{\pi^2}{2} t + \frac14 \log t + 0.6 \right), \end{align} where $g(x,t)$ is the function (\ref{fun-g}). \end{corollary} \begin{lemma}\label{lemma-T} If $x$ and $T$ are related by \[ \cot \frac{x}{2} = \frac{\log T}{T}, \] then for $T \geq 2$, we have \begin{equation}\label{cota} \left| \sum_{\gamma \geq T} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right| < 3 \sqrt{t} \frac{2+\log T}{T}. \end{equation} \end{lemma} \begin{proof} Let $\eta={\rm Im}(\alpha)$. It is well known that $-1/2<\eta<1/2$ (critical strip).
As $\alpha=\gamma+i \eta$, we see that $|\cos(\alpha \log t) | < \cosh(|\eta| \log t) +\sinh(|\eta| \log t)$, and we get \begin{align*} \left| \sum_{\gamma \geq T} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right| & \leq \sum_{\gamma \geq T} \frac{\sinh x \gamma}{\sinh \pi \gamma} | \cos (\alpha \log t) | \leq \sum_{\gamma \geq T} \frac{\sinh x \gamma}{\sinh \pi \gamma} t^{|\eta|} \\ & \leq \sqrt{t} \sum_{\gamma \geq T} \frac{\sinh x \gamma}{\sinh \pi \gamma} \leq \sqrt{t} \sum_{\gamma \geq T} e^{-(\pi-x)\gamma}. \end{align*} As $x$ and $T$ are related by \[ x=2 \, {\rm arccot} \frac{\log T}{T}, \] we see that \[ \pi-x=\frac{2 \log T}{T} - \frac23 \frac{\log^3 T}{T^3} + \mathcal{O} \left( \frac{1}{T^4} \right). \] Hence \[ \left| \sum_{\gamma \geq T} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right| < \sqrt{t} \sum_{\gamma \geq T} \exp{\frac{-2 \gamma \log T}{T}}. \] We subdivide $[T, \infty)$ into intervals of length $1$. Hence, the left-hand side is also less than or equal to \[ \sqrt{t} \left( \sum_{\gamma \in [T, T+1]} \exp{\frac{-2 \gamma \log T}{T}} + \sum_{\gamma \in [T+1, T+2]} \exp{\frac{-2 \gamma \log (T+1)}{T+1}} + \cdots \right). \] From \cite[Corollary 1]{trudgian} we get that for $T \geq 2$ the number of zeros in an interval $[T, T+1]$ is less than $3 \log T$. Hence \begin{align*} \left| \sum_{\gamma \geq T} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right| &< \sqrt{t} \sum_{n=T}^{\infty} 3 (\log n) \exp(-2 \log n) \\ &< 3 \sqrt{t} \int_{T-1}^{+\infty} \frac{\log u}{u^2} du < 3 \sqrt{t} \frac{2+\log T}{T}, \end{align*} which is the stated bound. \end{proof} \begin{corollary} If $T \geq 2$ and $t \geq 2$ is an integer, then \begin{align} & \left| -4\pi \sqrt{t} \left(\sum_{\gamma < T} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right) \frac{\log T}{T} + 2\pi \left(t-\frac{1}{t^2-1}\right) \frac{\log T}{T} -\Lambda(t) \right| \nonumber \\ & < 4 \left(4 t^2\log t + \frac{\pi^2}{2} t + 3\pi t + \frac14 \log t + 0.6 \right) \frac{\log^2 T}{T^2} + 24\pi t \frac{\log T}{T^2}. \label{coro-for3} \end{align} \end{corollary} \begin{proof} It is a consequence of Corollary \ref{mang-cotas} and Lemma \ref{lemma-T}. \end{proof} \section{Graphics} We have proved the following good approximation of the Mangoldt function: \begin{equation}\label{mang-approx} \Lambda(t) \approx -4 \pi \sqrt{t} \left( \sum_{0< \gamma < T} T^{-2\gamma/T} \cos(\gamma \log t) \frac{t^{\mu}+t^{-\mu}}{2} \right) \frac{\log T}{T} + 2 \pi \left(t-\frac{1}{t^2-1} \right) \frac{\log T}{T}, \end{equation} where $\mu=1/2-\beta$, so $-1/2<\mu<1/2$. We use Sagemath \cite{stein} to draw the graphics. In Figure \ref{mangoldt} we see the graphic obtained with the formula (\ref{mang-approx}) summing over the first $10000$ non-trivial zeros of zeta, that is, taking $T=9877.782654004$. The following estimates \[ \varepsilon(t,T)=\mathcal{O} \left( t^2 \log t \frac{\log^2 T}{T^2} \right), \qquad \varepsilon(t,T)=\mathcal{O} \left( \frac{t\log 2tT \log \log 3t}{T} \right), \] are, respectively, the errors in the Mangoldt function for integers $t>1$ if we use our formula or Landau's formula. \begin{figure}[H] \caption{Mangoldt} \includegraphics[scale=0.75]{mang-314-10000} \label{mangoldt} \end{figure} In this figure we have represented the function $\log(t)$ in red and the Mangoldt function $\Lambda(t)$ in blue.
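The approximation (\ref{mang-approx}) is easy to reproduce outside of Sagemath as well. A minimal Python sketch (mpmath supplies the ordinates $\gamma_k$; we assume the zeros used lie on the critical line, so $\mu=0$ and $(t^{\mu}+t^{-\mu})/2=1$; with only $100$ zeros the peaks are rougher than in Figure \ref{mangoldt}):
\begin{verbatim}
import numpy as np
from mpmath import zetazero

K = 100
gam = np.array([float(zetazero(k).imag) for k in range(1, K + 1)])
T = gam[-1]

def mangoldt_approx(t):
    s = np.sum(T ** (-2 * gam / T) * np.cos(gam * np.log(t)))
    return (-4 * np.pi * np.sqrt(t) * s * np.log(T) / T
            + 2 * np.pi * (t - 1 / (t**2 - 1)) * np.log(T) / T)

for t in range(2, 14):
    print(t, round(mangoldt_approx(t), 2))  # ~log p at prime powers, ~0 otherwise
\end{verbatim}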
\section{Another bound}\label{sec-bounds} In this section we get another bound for \[ \left| \sum_{\gamma \geq T} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right|. \] From (\ref{int-right}) and (\ref{int-left}), we get \begin{equation}\label{greaterT} \sum_{\rho} \frac{z^{\rho-\frac12}}{\sin \pi (\rho-\frac12)}-\sum_{|\gamma| < T} \frac{z^{\rho-\frac12}}{\sin \pi (\rho-\frac12)} = \sum_{|\gamma| \geq T} \frac{z^{\rho-\frac12}}{\sin \pi (\rho-\frac12)} = \frac{1}{\pi} \left( I_{r}-I_{\ell}\right), \end{equation} where $I_{r}$ and $I_{\ell}$ are the analytic continuations of the integral \[ I=\frac{1}{2 i} \int \frac{\zeta'(s+\frac12)}{\zeta(s+\frac12)} \frac{z^s}{\sin \pi s} ds \] along the corresponding contours. \begin{lemma}\label{lema-cota-1} \rm Let $T>1$. Then we have \begin{align*} &\left| \frac{z^s}{\sin \pi s} \right| \leq 4 \, e^{\sigma \log |z|} e^{-T(\pi+\arg(z))} \quad \text{if} \quad s=\sigma+iT, \\ &\left| \frac{z^s}{\sin \pi s} \right| \leq 4 \, e^{\sigma \log |z|} e^{-T(\pi-\arg(z))} \quad \text{if} \quad s=\sigma-iT, \end{align*} in the case $\sigma>0$ and $|z|<1$, or in the case $\sigma<0$ and $|z|>1$. \end{lemma} \begin{proof} \begin{align} \left| \frac{z^s}{\sin \pi s} \right| &= 2 \left| \frac{ e^{(\sigma+iT)(\log|z|+i\arg(z))} }{ e^{i\pi(\sigma+iT)}-e^{-i\pi(\sigma+iT)} } \right| \leq 2 \, \frac{ e^{\sigma \log |z|-T \arg(z)} }{ |e^{-i \pi \sigma} e^{\pi T}|-|e^{i \pi \sigma} e^{-\pi T}| } \nonumber \\ & \leq 2 \, \frac{e^{\sigma \log |z|}e^{-T \arg(z)} }{ e^{\pi T}-e^{-\pi T} } < 4 \, \frac{e^{\sigma \log |z|}e^{-T \arg(z)}}{e^{\pi T}} = 4 \, e^{\sigma \log |z|} e^{-T(\pi+\arg(z))}. \nonumber \end{align} The proof for $s=\sigma-iT$ is similar. \end{proof} In the following lemma we obtain bounds for the logarithmic derivative $\zeta'/\zeta$: \begin{lemma} \rm For $\sigma \geq 2$, we have \[ \left| \frac{\zeta'(\sigma+iT)}{\zeta(\sigma+iT)} \right| = \left| \sum_{n=1}^{\infty} \frac{\Lambda(n)}{n^{\sigma+iT}} \right| \leq \sum_{n=1}^{\infty} \left| \frac{\Lambda(n)}{n^{\sigma+iT}} \right| = \sum_{n=1}^{\infty} \frac{\Lambda(n)}{n^{\sigma}} \leq \sum_{n=1}^{\infty} \frac{\Lambda(n)}{n^2} < 0.57. \] For $\sigma < -1$ and $T>1$, using the above bound for $\sigma \geq 2$, the inequalities \[ \left|\psi(\sigma+iT)\right| < 3.2 + \frac12 \log (\sigma^2+T^2), \qquad \left|\tan \frac{\pi(\sigma+iT)}{2} \right| < 1.72, \] and the functional equation (\ref{zpz}), we get \[ \left| \frac{\zeta'(\sigma+iT)}{\zeta(\sigma+iT)} \right| < 7.33+\log \sqrt{\sigma^2+T^2} < 7.33 + \log |\sigma| + \log T. \] If $-1 < \sigma \leq 2$, then for every real number $T \geq 2$, there exists $T' \in [T, T+1]$ such that uniformly one has \[ \left| \frac{\zeta'(\sigma+iT')}{\zeta(\sigma+iT')} \right| < 9 \log^2 T + 2\log T < 11 \log^2 T. \] To prove this, we first deduce from \cite[Corollary 1]{trudgian} that the number of zeros $\rho$ such that $\gamma \in [T, T+1]$ is less than $ \lfloor 3 \log T \rfloor$. If we subdivide the interval into $1 + \lfloor 3 \log T \rfloor$ equal parts, then the length of each part is $(1+\lfloor 3\log T \rfloor)^{-1}$. As the number of parts exceeds the number of zeros, we deduce by the Dirichlet pigeonhole principle that there is a part that contains no zeros. Hence, for $T'$ lying in this part, we see that \[ |T' - \gamma | > \frac{1}{1+\lfloor 3\log T \rfloor}.
\] Hence, we infer that each summand in \cite[Proposition 3.89]{bordelles} is less than $1+\lfloor 3 \log T \rfloor$, and since the number of summands of this kind is less than $\lfloor 3 \log T \rfloor$, we finally get \[ \left| \frac{\zeta'(\sigma+i T')}{\zeta(\sigma+iT')} \right| < 3 (\log T) \, (1+ 3 \log T). \] \end{lemma} \noindent Remark: As \[ \left| \sum_{\gamma \in [T, T+1]} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right| \leq \sqrt{t} \sum_{\gamma \in [T, T+1]} \exp \frac{-2 \gamma \log T}{T} < 3 \sqrt{t} \, \frac{\log T}{T^2}, \] the error that we make in the sum on the left-hand side when we take $T$ instead of $T'$ is less than $3 \sqrt{t} \, T^{-2} \log T$. \begin{corollary}\label{lema-cota-integral} If $x$ and $T$ are related by \[ \cot \frac{x}{2} = \frac{\log T}{T}, \] then for $t \geq 2$ and $T \geq 2$, we have \[ |I_{r}-I_{\ell}| < 44 \, \frac{t^{3/2}+1}{\log t} \frac{\log^2 T}{T^2}. \] \end{corollary} \begin{proof} As $x$ and $T$ are related by \[ x=2 \, {\rm arccot} \frac{\log T}{T}, \] we see that \[ \pi-x=\mathcal{O}\left( \frac{2 \log T}{T}\right), \qquad e^{-T(\pi-x)} = \mathcal{O}\left(\frac{1}{T^2}\right). \] Let $x \in [0, \pi)$. Replacing $z$ with $e^{i(x-i\log t)}=t e^{ix}$, we see that $|z|=t$ and $\arg(z)=x$. Hence \[ |I_3| < 44 (\log^2 T) \, e^{-T(\pi-x)} \int_{-\frac12}^{\frac32} \, e^{\sigma \log t} d \sigma + 2.28 \, e^{-T(\pi-x)} \int_{\frac32}^{+\infty} \, e^{\sigma \log t} d \sigma. \] As we can generalize the integral for $|z|=t>1$ by analytic continuation, for $t \geq 2$ and $T \geq 2$, we get \[ |I_3| < e^{-T(\pi-x)} \left[ 44 \log^2 T \left( \frac{t^{3/2}}{\log t} - \frac{t^{-1/2}}{\log t} \right) - 2.28 \frac{t^{3/2}}{\log t} \right]. \] Hence \[ |I_3| < 44 \, \frac{t^{3/2}}{\log t} \, \frac{\log^2 T}{T^2}. \] For $|I_6|$, we have \begin{align*} |I_6| &< 4 e^{-T(\pi-x)} \int_{-\infty}^{-\frac32} \left(7.33 + \log|\sigma|\right) e^{\sigma \log t} d \sigma + 4 e^{-T(\pi-x)} \log T \int_{-\infty}^{-\frac32} e^{\sigma \log t} d \sigma \\ & \quad + 44 e^{-T(\pi-x)} \log^2 T \int_{-\frac32}^{-\frac12} e^{\sigma \log t} d \sigma, \end{align*} and since $\log |\sigma| < |\sigma|$, extending the integrals by analytic continuation, for $t \geq 2$ and $T \geq 2$ we get \[ |I_6| < e^{-T(\pi-x)} \left[\frac{29.4 t^{-3/2}}{\log t} + \frac{6t^{-3/2}}{\log t} -\frac{t^{-3/2}}{\log^2 t} + \frac{4 (\log T) t^{-3/2}}{\log t} + 44(\log^2 T) \left( \frac{t^{-1/2}}{\log t} -\frac{t^{-3/2}}{\log t} \right) \right]. \] Hence, for $t \geq 2$ and $T \geq 2$, we have \[ |I_6| < \frac{\log^2 T}{T^2} \, \frac{44}{\sqrt{t} \log t}. \] In a similar way we can evaluate the orders of $|I_1|$ and $|I_4|$, and we find that they are of much smaller order. \end{proof} \begin{corollary} For $T \geq 2$ and integers $t \geq 2$, we have \[ \left| \sum_{\gamma \geq T} \frac{\sinh x \alpha}{\sinh \pi \alpha} \cos(\alpha \log t) \right| < 45 \, \frac{t^{3/2}}{\log t} \frac{\log^2 T}{T^2}. \] \end{corollary} \noindent Compare this bound with that of (\ref{cota}). \section{On the spectrum of the primes} The Fourier transform of the Landau formula leads to the following function with peaks at the non-trivial zeros of zeta \cite{Mazur-Stein}: \[ \Phi_1(t)=-\sum_{m=1}^{T} \frac{\Lambda(m)}{\sqrt{m}} \cos(t \log m).
\] We have proved the following good approximation for the Mangoldt function: \begin{equation}\label{mang-appr} \frac{\Lambda(t)}{\sqrt{t}} \approx -4\pi \frac{\log T}{T} \sum_{\gamma>0} T^{-2\gamma/T}\cos(\gamma \log t)\, \frac{t^{\mu}+t^{-\mu}}{2} + 2\pi \frac{\log T}{T} \left( \sqrt{t} - \frac{1}{\sqrt{t}(t^2-1)} \right), \end{equation} for $T$ sufficiently large, where $\mu=1/2-\beta$, so $-1/2 < \mu < 1/2$. Inspired by this formula, we construct and study the graphic of the function \[ \Phi_2(t)=-\sum_{m=1}^{T} T^{-m/T} \frac{\Lambda(m)}{\sqrt{m}} \cos(t \log m) + C \, \sqrt{t}, \] where $C \approx 0.12$ is a constant. Below we show together the graphics of $\Phi_1(t)$ (in blue) and $\Phi_2(t)$ (in red). We have taken $T=300$. \begin{figure}[H] \caption{Peaks at the $\gamma$'s, range 3-30} \includegraphics[scale=0.75]{zeros-zeta-3-30} \label{peaks-zeros-zeta-1} \end{figure} \begin{figure}[H] \caption{Peaks at the $\gamma$'s, range 23-50} \includegraphics[scale=0.75]{zeros-zeta-23-50} \label{peaks-zeros-zeta-2} \end{figure} We observe that the peaks of our function $\Phi_2(t)$ match the ordinates $\gamma$ well. This function seems interesting, and we plan to continue investigating it. \section*{Final Remark} In this paper we have continued the research initiated in \cite{gui} concerning the Mangoldt function; however, this paper is self-contained. In \cite{gui} we also obtained some new formulas for the M\"obius function $\mu$ and Euler's totient function $\varphi$, but we only gave the error in the variable $T$, not its dependence on the variable $t$. In our opinion, finding $\varepsilon(T,t)$ for those functions could be interesting; this has been done in this paper, but only for the Mangoldt function. In addition, we have discovered a new function for the spectrum of the primes, which looks promising. \section*{Acknowledgements} Many thanks to Olivier Bordellès for informing me that an upper bound for the number of zeros of zeta such that $\gamma \in [T, T+1]$ can be obtained from \cite[Corollary 1]{trudgian}. Also, many thanks to Juan Arias de Reyna for very interesting comments.
\section{Motivation} \label{sec:1} Dissipative dynamical systems with relatively small linear growth rates are usually not expected to show chaotic behaviour. However, chaos investigations of these systems are justified in the light of new discoveries. The period doubling phenomenon was recently observed in RR Lyrae stars with the \textit{Kepler} space telescope (\cite{Szabo}) and has been explained by hydrodynamic calculations (\cite{Kollath}). The period doubling state is usually not ``far'' from chaos. We analyse two peculiar model solutions of the Florida-Budapest turbulent convective hydrodynamic code that suggest that the bifurcation cascade may evolve to chaos in these systems. \begin{figure}[] \begin{center} \includegraphics[scale=.5]{plachy_fig1.eps} \caption{Left panels: Radius variation of the two models. Right panels: Return maps for successive maxima.} \label{fig:1} \end{center} \end{figure} \section{Results} We used the global flow reconstruction technique, a nonlinear analyser tool suitable for detecting chaos and for extracting quantitative information about the system (\cite{Serre}). We have successfully reconstructed both models and determined the Lyapunov dimension to be $2.22\,\pm\,0.10$ for Model A and $2.17\,\pm\,0.08$ for Model B. These values are in agreement with the broad structure of the return maps in Fig.~\ref{fig:1}. The return maps display a more complex structure than the usual quasi-one-dimensional tent or parabolic shape of chaotic systems with Lyapunov dimension $2+\epsilon$. We iterated the nonlinear models for $10^5$ cycles to rule out any transients, but the return maps remained unaltered. The kinetic energy changes by less than a percent between pulsation cycles, in agreement with typical linear growth rates in RR Lyrae models. Radius variations of RR Lyrae hydrodynamic models were suitable for the global flow reconstruction method and thus for the detection of chaotic behaviour. Luminosity variation was also studied in this manner, but the reconstruction was not successful. We believe that this is probably due to the more complex nature of the light curves. The observations of \textit{Kepler} RR Lyrae stars suggest some irregularity in the period doubling, but a similar analysis can only be performed after a suitable transformation of the light variation. \begin{acknowledgement} The European Union and the European Social Fund have provided financial support to the project under the grant agreement no. T\'AMOP-4.2.1/B-09/1/KMR-2010-0003. This work has been supported by the Hungarian OTKA grant K83790. \end{acknowledgement}
\section{\large{#1}}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \begin{document} \input{promise} \setcounter{page}{0} \begin{flushright} March 2016 \end{flushright} \vfill \begin{center} {\Large{\bf Regular Black Holes \\ and \\ Noncommutative Geometry Inspired Fuzzy Sources } } \end{center} \vfill \renewcommand{\baselinestretch}{1.0} \begin{center} {\sc Shinpei Kobayashi} \footnote{e-mail: {\tt [email protected]}} ~\\ $^{1}${\sl Department of Physics, Tokyo Gakugei University, \\ 4-1-1 Nukuikitamachi, Koganei, Tokyo 184-8501, JAPAN } \\ \end{center} \vfill \begin{center} {\bf abstract} \end{center} \begin{quote} \small{% We investigated regular black holes with fuzzy sources in three and four dimensions. The density distributions of such fuzzy sources are inspired by noncommutative geometry and given by Gaussian or generalized Gaussian functions. We utilized mass functions to give a physical interpretation of the horizon formation condition for the black holes. In particular, we investigated three-dimensional BTZ-like black holes and four-dimensional Schwarzschild-like black holes in detail, and found that the number of horizons is related to the spacetime dimension, and that what is significant is the existence of a void in the vicinity of the center of the spacetime, rather than noncommutativity itself. As an application, we considered a three-dimensional black hole sourced by the fuzzy disc, a disc-shaped region known in the context of noncommutative geometry. We also analyzed a four-dimensional black hole with a source whose density distribution is an extension of the fuzzy disc, and investigated the horizon formation condition for it. } \end{quote} \vfill \renewcommand{\baselinestretch}{1.4} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \addtocounter{page}{1} \newpage \resection{Introduction} Quantum features of spacetime have been discussed for a long time, and there have been many attempts to describe their physics. Despite these enthusiastic studies, we still do not know the most natural way to quantize a spacetime. Consequently, our starting point is to investigate a phenomenon that should definitely appear when a spacetime is consistently quantized. Spacetime noncommutativity is one such feature, even though there might be diverse ways to impose noncommutativity\cite{Aschieri:2005zs, Aschieri:2005yw, Aschieri:2009qh, Asakawa:2009yb, Kobayashi:2009baa}. We can naively expect that noncommutativity changes the structures of spacetimes in various ways, in particular black hole spacetimes and the very early universe. In this context, the authors of \cite{Nicolini:2005vd} investigated a four-dimensional spacetime with a source inspired by noncommutative geometry. As a consequence of noncommutativity, they proposed a source that has a Gaussian distribution $e^{-r^2/(2\theta)}$ instead of a delta function $\delta^{(3)}(\vecvar{r})$, since one has to abandon the picture of a zero-size object such as a point particle and replace it with something smeared. Here $\theta$ is a noncommutative parameter that represents spacetime noncommutativity, e.g., $[x, y]=i\theta$ in a two-dimensional space. They found that there can exist a black hole with such a source at its center. It is a regular black hole in the sense that the curvature singularity at the center is resolved since the matter source is diffused by noncommutativity.
What we want to focus on here about the black hole in \cite{Nicolini:2005vd} is that it can have two horizons as long as an appropriate condition is satisfied, even though it is neither charged nor rotating. The existence of a black hole with two horizons means that there would be an extreme black hole where the two horizons coincide, and that a remnant would appear after Hawking radiation starting from a non-extreme black hole. This may change the story of black hole evaporation. Inspired by this fascinating scenario, much work on black holes with such Gaussian sources has been done so far \cite{Ansoldi:2006vg, Nicolini:2009gw, Nicolini:2011fy, Spallucci:2008ez, Smailagic:2010nv, Modesto:2010rv, Nicolini:2008aj, Mureika:2011py, Larranaga:2014uca}. One more thing we want to note is that there is always a solution for any density distribution, because the corresponding energy-momentum tensor of an anisotropic fluid adjusts itself so that a consistent solution exists. This is another reason that many authors have been able to consider these noncommutative geometry inspired black holes, and it has also been noted in the context of another type of regular black hole \cite{Dymnikova:1992ux, Dymnikova:2003vt, Dymnikova:2004qg}. It is thereby natural that this research has been extended to three-dimensional black holes. Though a three-dimensional spacetime is intrinsically different from a four-dimensional spacetime, as is well known, there exists the BTZ black hole in a three-dimensional spacetime with a negative cosmological constant. The authors of \cite{Rahaman:2013gw, Tejeiro:2010gu, Larranaga:2010tt, Rahaman:2014pha, Myung:2008kp} analyzed three-dimensional black holes with Gaussian sources which have structures similar to the BTZ black hole, in the sense that the spacetimes are asymptotically anti de Sitter, but at the same time there are de Sitter cores around the centers, contrary to the BTZ black hole. As we will see later, there is no black hole with two horizons in this case \cite{Myung:2008kp}. Motivated by this fact, the authors of \cite{Myung:2008kp} and \cite{Jun:2014jqa} introduced generalized Gaussian sources whose density distributions are proportional to $r e^{-r^2/(2\theta)}$ and $r^2 e^{-r^2/(2\theta)}$, respectively. This change of source allows a black hole to have two horizons as long as an appropriate condition is satisfied. The aim of this paper is to clarify what is physically essential for a spacetime with such a fuzzy source to have a horizon. In particular, we are interested in how noncommutativity changes the number of horizons. Here we want to move away from the specifics and consider general properties. To this end, we utilize a mass function that denotes the mass within a given radius. Such a mass function determines the condition for a spacetime to have a horizon, since the mass that must be included within the horizon radius is automatically determined once the radius of a black hole is given. In the rest of this paper, we will investigate the existence and number of horizons for a three-dimensional black hole with a source described by a generalized Gaussian $r^n e^{-r^2/(2\theta)}$, using a mass function and a characteristic function which encodes the horizon formation condition. In order to do so, we will solve the Einstein equation with an anisotropic fluid corresponding to the source and a negative cosmological constant.
Also, we will see that, for a three-dimensional black hole, the existence of a void around its center is crucial for it to have two horizons. We use a toy model whose density distribution is not related to noncommutativity to check our statement. Since the characteristic function we will propose here to judge horizon formation is intuitive and graphically versatile, we can apply it to various cases. In fact, we consider a three-dimensional black hole with a source whose density distribution is originally motivated by the fuzzy disc in noncommutative geometry. The fuzzy disc is a disc-shaped region in a two-dimensional Moyal plane, and its corresponding function is a sum of density distributions represented by the generalized Gaussian functions. We will also investigate an extension of the density distribution of the fuzzy disc type and a black hole around it in a four-dimensional spacetime. This paper is organized as follows. In Sec.2, we show how a mass function is used to determine the horizon formation condition, using the Reissner-Nordstr\"{o}m black hole as an example, and we apply the same method to the four-dimensional black hole discussed in \cite{Nicolini:2005vd}. In Sec.3, we will analyze three-dimensional black holes with fuzzy sources whose density distributions are given by the generalized Gaussian functions. We will investigate the characteristic function for the horizon formation condition in detail, and will see what is essential for a horizon to be formed. In Sec.4, noncommutative geometry inspired black holes with sources motivated by the fuzzy disc are considered. Sec.5 is devoted to conclusion and discussion. We also mention a black hole spacetime with multiple horizons, with the fuzzy annulus as its source. \resection{Mass function and horizon formation condition} The existence of a black hole, in other words, the existence of a horizon, depends on how much mass is condensed in a given region. Even if there is a large amount of mass, if it is too diffuse, a black hole horizon cannot be formed. Since the sources we will treat in this paper are smeared by replacing the delta function with Gaussian functions, how much mass exists within a given radius is essential for a spacetime to have a horizon. A mass function is an intuitively useful way to express such a necessary mass. \subsection{Reissner-Nordstr\"{o}m black hole and horizon formation condition} In order to judge when a horizon is formed for a noncommutative geometry inspired black hole, we can utilize a mass function. It is the profile of the mass distribution, calculated by the volume integration of a density. It can also be regarded as an effective mass obtained by analogy with the Schwarzschild mass. For example, let us consider the Reissner-Nordstr\"{o}m (RN) solution. In $G=c=1$ units, the line element of the four-dimensional RN black hole is given by \begin{equation} ds^2 = -\left(1-\frac{2M}{r}+\frac{Q^2}{r^2}\right)dt^2 +\left(1-\frac{2M}{r}+\frac{Q^2}{r^2}\right)^{-1}dr^2 +r^2 d\Omega_{(2)}^2, \end{equation} where $M$ is the total mass in the spacetime and $Q$ is the electric charge of the black hole. The existence of a horizon is determined by the divergent behavior of the $(rr)$-component of the metric. In other words, the number of roots of the equation \begin{equation} f(r) = 1-\frac{2M}{r}+\frac{Q^2}{r^2} = 0, \label{RNhorizon} \end{equation} corresponds to the number of horizons. If $M > |Q|$, the RN metric describes the black hole spacetime with two horizons.
They are located at $r_{\pm} = M \pm \sqrt{M^2-Q^2}$. If $M=|Q|$, there is a special type of black hole with one horizon. This is an extreme RN black hole, in which $r_+$ and $r_-$ coincide. If $M < |Q|$, there is no black hole, but a naked singularity that does not have a horizon. We can graphically clarify whether a horizon exists or not by introducing a mass function. The mass function $m(r)$ for the RN black hole is defined as \begin{equation} m(r) = M -\frac{Q^2}{2r}. \end{equation} Using $m(r)$, the line element of the RN black hole is rewritten as \begin{equation} ds^2 = -\left(1-\frac{2m(r)}{r}\right)dt^2 +\left(1-\frac{2m(r)}{r}\right)^{-1}dr^2 +r^2 d\Omega_{(2)}^2, \end{equation} which can be regarded as the line element of the Schwarzschild black hole with the effective mass $m(r)$. Eq.(\ref{RNhorizon}) is also rewritten as \begin{equation} f(r) = 1-\frac{2m(r)}{r} = 0, \end{equation} which gives the horizon formation condition. One of the advantages of this perspective is that it enables us to understand why an infalling observer can avoid hitting the singularity at the center of the RN black hole. Actually, deep inside the inner horizon, for $0 \leq r < Q^2/(2M)$ (note that $Q^2/(2M) < r_-$), the mass function is negative, which makes the gravitational force effectively repulsive there \cite{Poisson}. Another advantage of introducing the mass function, which will become more significant in the following analyses in this paper, is that it makes it possible to discuss the horizon formation condition by analogy with a well-known black hole. Though there is no unique geometrical definition of a mass function, we can choose a simple and useful one for the spacetime we want to consider. Clearly, for four-dimensional regular black holes, we can use the Schwarzschild black hole as such. The Schwarzschild horizon depends on its mass as \begin{equation} r_h = 2M, \end{equation} which means that if there is a Schwarzschild black hole with radius $r_h$, the total mass $M_h = r_h/2$ must be included within radius $r_h$. More precisely, if the mass included inside a sphere of radius $r_h$ is equal to or larger than $r_h/2$, a black hole is formed. Applying this idea to the RN case, we can interpret the horizon formation condition, using the mass function, as the existence of $r_h$ that satisfies \begin{equation} m(r_h) \geq M_h = \frac{r_h}{2}. \end{equation} This condition states that once a horizon radius is given, the total mass that must be included within that radius is determined automatically. Of course, this condition is equivalent to the existence of a root of $f(r)=0$, but our point of view is physically more transparent. For the RN case, the condition for a horizon to be formed can be rewritten as \begin{equation} M-\frac{Q^2}{2r_h} \geq \frac{r_h}{2} \quad \Leftrightarrow \quad \frac{1}{M} \leq \frac{2r_h}{r_h^2 + Q^2}. \label{HorizonCondiRN} \end{equation} The existence of a horizon is determined by the number of intersections between the following characteristic function \begin{equation} h(x)=\displaystyle \frac{2x}{x^2 + q^2}, \label{RNhorizonfunc} \end{equation} and the constant function that represents the value of $L/M$. Here $L$ is a typical length of this spacetime, which is introduced to define the dimensionless parameters $L/M$, $x \equiv r_h/L$ and $q \equiv Q/L$. Now the condition (\ref{HorizonCondiRN}) is translated to \begin{equation} \frac{L}{M} \leq h(x)=\displaystyle \frac{2x}{x^2 + q^2}. \end{equation} The profile of $h(x)$ is shown in Fig.\ref{RNhorizonFig}.
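As an illustration, the intersections can also be located numerically. A minimal Python sketch (the sample values of $L/M$ are arbitrary):
\begin{verbatim}
from scipy.optimize import brentq

q = 1.0
h = lambda x: 2 * x / (x**2 + q**2)   # the characteristic function above

for L_over_M in (0.8, 1.2):           # M > |Q| and M < |Q| (with q = 1)
    roots = []
    # h(x) increases on (0, q) and decreases on (q, oo): test each branch
    for a, b in ((1e-9, q), (q, 1e3)):
        if (h(a) - L_over_M) * (h(b) - L_over_M) < 0:
            roots.append(brentq(lambda x: h(x) - L_over_M, a, b))
    print(L_over_M, roots)
# L/M = 0.8 yields the two horizons x = 0.5 and x = 2.0 (i.e. r_- and r_+),
# L/M = 1.2 yields none; the extremal case L/M = 1/q touches h at x = q.
\end{verbatim}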
\begin{figure}[tb] \begin{center} \includegraphics[scale=0.6, bb=0 0 300 214]{RNhorizon.pdf} \caption{Plot of $h(x)$ with $q=1$ defined in (\ref{RNhorizonfunc}). The horizontal lines denote the values of $L/M$. The number of intersections corresponds to the number of horizons.} \label{RNhorizonFig} \end{center} \end{figure} Note that we write the condition in terms of the inverse mass $1/M$, not $M$. This is because $h(x)$ diverges neither around $x=0$ nor as $x \to \infty$, which makes it simpler to analyze the behavior of the horizon formation condition around $x=0$ and as $x\to \infty$. This usage of the mass function has not been seen in previous works. The characteristic function $h(x)$ takes its maximum value $1/q$ at $x= q$. $L/M$ must therefore be equal to or smaller than $1/q$ in order that at least one horizon exists. When $M=|Q|\ (\Leftrightarrow L/M=1/q)$, there is one horizon, which corresponds to the extreme black hole. For $M > |Q|$, there are two horizons. These facts about the RN black hole are well known. We will investigate the horizon formation conditions for various sources in the same manner in the rest of this paper. As mentioned before, there is no generally natural definition of a mass function for an arbitrary spacetime, and we can use a suitable form for the spacetime we want to consider. In fact, we will consider an analogue of the BTZ black hole to define a mass function in three dimensions, in contrast to the Schwarzschild black hole in four dimensions.\footnote{ Furthermore, we can choose different types of mass functions if a black hole is charged and/or rotating, similarly to the RN black hole. } \subsection{Horizon formation condition for a four-dimensional noncommutative geometry inspired Schwarzschild black hole} We want to apply the method of the previous subsection to investigate the horizon formation condition for the four-dimensional regular black hole inspired by noncommutative geometry considered in \cite{Nicolini:2005vd}. The density distribution of the source of the black hole has a Gaussian shape\footnote{ $\theta$ in this paper is twice as large as the one used in \cite{Nicolini:2005vd}. } $\rho(r) \propto e^{-r^2/(2\theta)}$. Here $\theta$ is a noncommutative parameter that defines the canonical commutation relation between space coordinates as \begin{equation} [x, y] = i\theta. \end{equation} When this relation is imposed on a space, we can naively expect that there is no `zero-size' object. For example, a source of the delta function type would be smeared and fuzzy. Then one of the simplest realizations is to replace the delta function with a Gaussian function \begin{equation} \delta^{(3)}(\vecvar{r}) \to \exp\left(-\frac{r^2}{2\theta}\right). \end{equation} The authors of \cite{Nicolini:2005vd} made use of the fact that for any density distribution, there exists a corresponding solution of the Einstein equation, because an appropriate component of the energy-momentum tensor of an anisotropic fluid compensates so that a consistent solution exists. In \cite{Nicolini:2005vd}, the tangential pressure $T_{\phi\phi}$ plays this role. This has been extended to various black holes, e.g., charged \cite{Ansoldi:2006vg}, rotating \cite{Smailagic:2010nv, Modesto:2010rv}, or lower-dimensional \cite{Mureika:2011py} and higher-dimensional ones \cite{Spallucci:2009zz}, and so on. The reference \cite{Nicolini:2008aj} is a review of noncommutative geometry inspired black holes written by one of the authors of \cite{Nicolini:2005vd}.
The solution shown in \cite{Nicolini:2005vd} is given by \begin{equation} ds^2 = -\left(1-\frac{2m_{4d}(r)}{r}\right)dt^2 +\left(1-\frac{2m_{4d}(r)}{r}\right)^{-1}dr^2 +r^2\left(d\theta^2 + \sin^2\theta d\phi^2\right), \end{equation} where \begin{eqnarray} m_{4d}(r) &=& 4\pi \int_0^r dr' r'^2 \rho_{4d}(r') = 4\pi\int_0^r dr' r'^2\frac{M}{(2\pi\theta)^{3/2}} \exp\left(-\frac{r'^2}{2\theta}\right) \nonumber\\ &=& \frac{2M}{\sqrt{\pi}} \gamma\left(\frac{3}{2}, \frac{r^2}{2\theta}\right), \end{eqnarray} is the mass function for this system.\footnote{ For a more general profile of density \cite{Nicolini:2011fy} \begin{equation} \rho_{4d}(r) = \frac{M}{4\pi\theta(2\theta)^{\frac{n+1}{2}} \Gamma\left(\frac{n+3}{2}\right)} r^n \exp\left(-\frac{r^2}{2\theta}\right), \end{equation} the corresponding mass function is given by \begin{equation} m_{4d}(r) = 4\pi \int_0^r dr' r'^2 \rho(r') = \frac{M}{\Gamma\left(\frac{n+3}{2}\right)} \gamma\left(\frac{n+3}{2}, \frac{r^2}{2\theta}\right). \end{equation} We can apply the method of this paper to this generalized distribution. } $\gamma(a,x)$ is the lower incomplete gamma function, related to the upper incomplete gamma function by \begin{equation} \gamma(a, x) = \Gamma(a) -\Gamma(a, x). \end{equation} The normalization is determined by $m_{4d}(r=\infty)=M$, which gives the total mass in the whole space. Repeating the same argument as for the RN black hole, we can interpret the horizon formation condition as the existence of $r_h$ that satisfies \begin{equation} m_{4d}(r_h) \geq M_h = \frac{r_h}{2} \quad \Leftrightarrow \quad \frac{2M}{\sqrt{\pi}} \gamma\left(\frac{3}{2}, \frac{r_h^2}{2\theta}\right) \geq \frac{r_h}{2}. \end{equation} \begin{figure}[tb] \begin{center} \includegraphics[scale=0.6, bb=0 0 350 214]{NicoliniCondi.pdf} \caption{Plot of $h_{4d}(x)$. The horizontal lines denote the values of $\sqrt{2\theta}/M$. $h_{4d}(x)$ takes the maximum value $h_{4d}^* = 0.525$ at $x=1.51$. When $M > \sqrt{2\theta}/h_{4d}^*$, there are two horizons. When $M = \sqrt{2\theta}/h_{4d}^*$, there is one horizon, which corresponds to the extreme black hole. When $M < \sqrt{2\theta}/h_{4d}^*$, there is no horizon, which means that no black hole is formed but a regular lump of mass like a star exists. } \label{NicoliniCondi} \end{center} \end{figure} Introducing a dimensionless parameter $x = r_h/\sqrt{2\theta}$, the condition is interpreted as the existence of $x$ that satisfies \begin{equation} h_{4d}(x) \equiv \frac{2\gamma\left(\frac{3}{2}, x^2\right)}{\sqrt{\pi}x} \geq \frac{\sqrt{2\theta}}{M}. \label{h4d} \end{equation} The plot of $h_{4d}(x)$ is shown in Fig.\ref{NicoliniCondi}. $h_{4d}(x)$ takes the maximum value $\thickapprox 0.525$ at $x=1.51$, which means that the extreme black hole exists when \begin{equation} M \thickapprox \frac{\sqrt{2\theta}}{0.525}. \end{equation} For $M$ larger than $\sqrt{2\theta}/0.525$, there is a black hole with two horizons. This result coincides with that of \cite{Nicolini:2005vd}, after rescaling $\theta$ by a factor of two to match the noncommutative parameter used there. \resection{Three-dimensional black hole with fuzzy source} \subsection{Three-dimensional rotating regular black hole with anisotropic fluid} We can easily extend the analysis in the previous section to three-dimensional cases. To begin with, let us show that there is a black hole solution corresponding to any density distribution $\rho(r)$ also in three dimensions.
\begin{table}[tb] \begin{center} \begin{tabular}{ccccc}\hline $\mbox{}$ & $n$ & $Q$ & $J$ & $\Lambda$ \\ \hline \hline Rahaman {\it et al.} \cite{Rahaman:2013gw} & $ \quad \ \quad 0 \quad \ \quad $ & $ \quad \ \quad 0 \quad \ \quad $ & $ \quad \ \quad 0 \quad \ \quad $ & $ \quad \ \quad <0 \quad \ \quad $ \\ \hline Tejeiro \& Larranaga \cite{Tejeiro:2010gu} & 0 & 0 & \mbox{} & $< 0$ \\ \hline Tejeiro \& Larranaga \cite{Larranaga:2010tt} & 0 & $\propto e^{-r^2/(4\theta)}$ & 0 & $< 0$ \\ \hline Rahaman {\it et al.} \cite{Rahaman:2014pha} & 0 & $\propto r^{n+2}$ & 0 & $< 0$ \\ \hline Myung \& Yoon \cite{Myung:2008kp} & 0, 1 & 0 & 0 & $<0$ \\ \hline Liang, Liu \& Zhu \cite{Jun:2014jqa} & 2 & 0 & \mbox{} & $< 0$ \\ \hline Park \cite{Park:2008ud} & $n$ & 0 & 0 & $> 0$ \\ \hline \end{tabular} \caption{Three-dimensional black holes with fuzzy sources} \label{BHs} \end{center} \end{table} As summarized in Table \ref{BHs}, various types of three-dimensional, noncommutative geometry inspired black holes have been proposed so far. All of them were modifications of the BTZ black hole, obtained by replacing densities of the delta function type with the Gaussian type \begin{equation} \rho(r) \propto e^{-\frac{r^2}{2\theta}}, \end{equation} or the generalized Gaussian type \begin{equation} \rho(r) \propto r^n e^{-\frac{r^2}{2\theta}} \quad (n \geq 1). \end{equation} To consider a concrete spacetime, we first derive a three-dimensional, circularly symmetric solution of the Einstein equation with a negative cosmological constant \begin{eqnarray} && G_{\mbox{}\ \nu}^{\mu} = 8\pi T_{\mbox{}\ \nu}^{\mu} + \Lambda\delta_{\mbox{}\ \nu}^{\mu}. \end{eqnarray} $\Lambda$ is the cosmological constant, which is related to the curvature length $\ell$ as $\Lambda = -1/\ell^2$. In this paper we use $c=G_3=1$ units, where $G_3$ is the three-dimensional gravitational constant. We want to consider a circularly symmetric spacetime described by the following metric \begin{equation} ds^2 = -f(r) dt^2 +f^{-1}(r)dr^2 + r^2 \left[d\phi + N_{\phi}(r)dt \right]^2. \label{metric} \end{equation} The spacetime described by this metric has an angular momentum, similarly to the BTZ black hole.\footnote{ Though the most general form of the metric with circular symmetry is given by \cite{Yamazaki:2001ue} \begin{equation} ds^2 = -e^{2\alpha(r)} f(r) dt^2 +f^{-1}(r)dr^2 + r^2 \left[d\phi + N_{\phi}(r)dt \right]^2, \end{equation} we focus on the type of metric (\ref{metric}) in this paper for simplicity. } For the energy-momentum tensor $T_{\mbox{}\ \nu}^{\mu}$, we impose the following ansatz \begin{eqnarray} && T_{\mbox{}\ t}^t = -\rho (r), \quad T_{\mbox{}\ r}^r = p_r (r), \quad T_{\mbox{}\ \phi}^{\phi} =p_{\phi}(r) \nonumber \\ && T_{\mbox{}\ t}^{\phi} = \sigma_a (r), \quad T_{\mbox{}\ \phi}^t = \sigma_b (r), \quad \mbox{others are zero}. \end{eqnarray} When we consider a rotating solution, i.e., $N_{\phi} \neq 0$, the energy-momentum tensor can no longer be diagonal. In fact, we cannot set $T_{\mbox{}\ t}^{\phi} = \sigma_a (r) =0$ and solve the equations of motion consistently, though $\sigma_b$ can be zero, as we will see explicitly. This point is not mentioned in \cite{Tejeiro:2010gu} and \cite{Jun:2014jqa}, though the existence of the $(\phi, t)$-component of the energy-momentum tensor does not affect their conclusions.
Since $T_{\phi t}$ and $T_{t \phi}$ must be the same, we find that $\sigma_a$ and $\sigma_b$ obey \begin{equation} \sigma_a = \left(-\frac{f}{r^2}+N_\phi^2\right)\sigma_b +N_{\phi} (p_{\phi} +\rho), \label{sym} \end{equation} which can be used to check the consistency. Now we find that the Einstein equation $G_{\mbox{}\ \nu}^{\mu} = 8\pi T_{\mbox{}\ \nu}^{\mu} + \Lambda\delta_{\mbox{}\ \nu}^{\mu}$ reduces to \begin{eqnarray} 2f'+r^2[rN_{\phi}^{\prime 2} +2N_{\phi}(3N_{\phi}^{\prime}+rN_{\phi}^{\prime\prime})] &=& 4r\left(-8\pi \rho +\frac{1}{\ell^2} \right), \label{eom1}\\ 2f'+r^3N_{\phi}^{\prime 2} &=& 4r\left(8\pi p_r +\frac{1}{\ell^2}\right), \label{eom2}\\ -3r^2N_{\phi}^{\prime 2} +2f'' -2N_{\phi}(3N_{\phi}^{\prime}+rN_{\phi}^{\prime\prime}) &=&4\left(8\pi p_{\phi} +\frac{1}{\ell^2} \right), \label{eom3}\\ r(3N_{\phi}^{\prime} +rN_{\phi}^{\prime\prime}) &=& 16\pi \sigma_b \label{eom4} \\ N_{\phi}(f'+2r^3N_{\phi}^{\prime 2} -rf'') +(f+r^2N_{\phi}^{2})(3N_{\phi}^{\prime}+rN_{\phi}^{\prime\prime}) &=& -16\pi r \sigma_a \label{eom5} \end{eqnarray} They are the $(t,t), (rr), (\phi\phi), (t, \phi)$ and $(\phi, t)$-components of the Einstein equation, respectively. The prime denotes the derivative with respect to $r$. Besides them, we have to consider the covariant conservation of the energy-momentum tensor $T_{\mbox{}\ \mbox{}\ ;\nu}^{\mu\nu}=0$. For $\mu =r$, it gives a non-trivial equation \begin{multline} r\left\{ f'(p_r+\rho) +r^2N_{\phi}^{\prime} (\sigma_a -N_{\phi}^{2}\sigma_b) +N_{\phi} \left[ \sigma_b f' -r^2(p_r+\rho) N_{\phi}^{\prime} \right] \right\} \\ +f(2p_r -2p_{\phi} -2N_{\phi}\sigma_b -r\sigma_b N_{\phi}^{\prime} +2rp'_r) =0. \label{eom6} \end{multline} There are six equations (\ref{eom1})-(\ref{eom6}) and one condition (\ref{sym}) for the symmetry of the energy-momentum tensor to determine six unknown functions $f, N_{\phi}, \sigma_a, \sigma_b, p_r$ and $p_{\phi}$. In deriving solutions, we will impose an ansatz to reduce the seven equations to six. The redundant equation among the remaining six is due to the Bianchi identity. When we impose a simple ansatz $\sigma_b =0$, Eq.(\ref{eom4}) can easily be integrated as \begin{equation} N_{\phi}(r) = -\frac{J}{2r^2}, \label{RotPart} \end{equation} which coincides with the BTZ case. $J$ corresponds to the angular momentum of a black hole. Substituting this into the other equations, we see that all the other unknown functions $f, \sigma_a, p_r$ and $p_{\phi}$ are determined as functions of the energy density $\rho$ \begin{eqnarray} f(r) &=& -16\pi \int_0^r dr' \ r'\rho(r') +\frac{r^2}{\ell^2} +\frac{J^2}{4r^2}, \nonumber \\ &=& -8m(r) +\frac{r^2}{\ell^2}+\frac{J^2}{4r^2} \label{Mpart} \\ \sigma_a(r) &=& \frac{J}{2r}\rho'(r), \\ p_{r}(r) &=& -\rho(r), \\ p_{\phi}(r) &=& -(r\rho(r))', \label{sln} \end{eqnarray} where we set an integration constant in $f(r)$ to zero for this solution to coincide with the BTZ black hole at large $r$. $m(r)$ is the mass function for a given density $\rho$, which is defined by \begin{equation} m(r) = 2\pi \int_0^r dr' \, r' \rho(r'). \end{equation} The Ricci scalar for this solution in terms of $f$ and $N_{\phi}$ is \begin{equation} R = -f''(r) -\frac{2}{r}f'(r)+\frac{1}{2}r^2N_{\phi}^{\prime 2}(r). \label{3dimRicci} \end{equation} Substituting (\ref{Mpart}) and (\ref{RotPart}) into (\ref{3dimRicci}), we obtain \begin{equation} R = 16\pi (3\rho(r)+r\rho'(r)) -\frac{6}{\ell^2}.
\label{Ricci1} \end{equation} Note that the $J$-dependent contributions cancel out in (\ref{Ricci1}), consistently with the fact that the BTZ black hole is a spacetime of constant curvature $R=-6/\ell^2$. The energy-momentum tensor with lower indices is given by \begin{equation} (T_{\mu\nu}) = \left( \begin{array}{ccc} T_{tt} & T_{tr} & T_{t\phi} \\ T_{rt} & T_{rr} & T_{r\phi} \\ T_{\phi t} & T_{\phi r} & T_{\phi\phi} \end{array} \right) = \left( \begin{array}{ccc} f\rho - \displaystyle\frac{J^2}{4r^2}(r\rho)' & 0 & \displaystyle\frac{J}{2}(r\rho)' \\ 0 & \displaystyle -\frac{\rho}{f} & 0 \\ \displaystyle\frac{J}{2}(r\rho)' & 0 & -r^2(r\rho)' \end{array} \right), \end{equation} which is diagonal only for $J=0$, as mentioned before. \subsection{Generalized Gaussian sources in three dimensions} We investigate spacetimes with various sources that appear in the context of noncommutative geometry. As an instructive example, let us first see the spacetime with the generalized Gaussian source, $\rho \propto r^n e^{-r^2/(2\theta)}$. \footnote{ As summarized in Table \ref{BHs}, the black holes with $n=0, 1$ and with $n=2$ were investigated in \cite{Myung:2008kp} and \cite{Jun:2014jqa}, respectively. } To be more concrete, we consider the following density distribution described by the generalized Gaussian function \begin{equation} \rho_n(r) = \frac{M}{2\pi\theta (2\theta)^{\frac{n}{2}}\Gamma\left(\frac{n}{2}+1\right)} r^n e^{-\frac{r^2}{2\theta}}. \label{NGD} \end{equation} The corresponding mass function is \begin{eqnarray} m_{n}(r) &=& 2\pi \int_0^r r'\rho(r')dr' = \frac{M}{\Gamma\left(\frac{n}{2}+1\right)} \gamma\left(\frac{n}{2}+1, \frac{r^2}{2\theta}\right) \nonumber \\ &=& M \left[1-\frac{\Gamma\left(\frac{n}{2}+1, \frac{r^2}{2\theta}\right)}{\Gamma\left(\frac{n}{2}+1\right)}\right]. \label{massfunc} \end{eqnarray} Similarly to the four-dimensional case, the mass function is normalized as $m_n(\infty) = M$, using $\gamma(\frac{n}{2}+1, \infty)=\Gamma(\frac{n}{2}+1)$. The ratio of $M$ to the noncommutative parameter $\sqrt{2\theta}$ determines the horizon formation condition. \subsection{Black holes with a generalized Gaussian source and physical interpretation of their horizons} \label{PhysInterpretation} Hereafter we set $J=0$ for simplicity, but the essence of our analysis does not depend on this, and we can extend it to the case with nonzero $J$. Putting the density distribution (\ref{NGD}) into (\ref{sln}) and setting $J=0$, we obtain \begin{equation} ds^2 = -f_n(r)dt^2 + f_n^{-1}(r)dr^2 +r^2 d\phi^2, \label{3dimSln1} \end{equation} where \begin{eqnarray} f_n(r) &=& -8m_n(r) +\frac{r^2}{\ell^2} \nonumber \\ &=& -\frac{8M}{\Gamma\left(\frac{n}{2}+1\right)} \gamma\left(\frac{n}{2}+1, \frac{r^2}{2\theta}\right) +\frac{r^2}{\ell^2}, \label{3dimSln2}\\ p_{nr}(r) &=& -\rho_{n}(r) \nonumber \\ &=& -\frac{M}{2\pi\theta (2\theta)^{\frac{n}{2}}\Gamma\left(\frac{n}{2}+1\right)} r^n e^{-\frac{r^2}{2\theta}}, \label{3dimSln3}\\ p_{n\phi}(r) &=& -(r\rho_n(r))' \nonumber \\ &=& -\frac{M}{2\pi\theta (2\theta)^{\frac{n}{2}}\Gamma\left(\frac{n}{2}+1\right)} \left(n+1-\frac{r^2}{\theta}\right)r^n e^{-\frac{r^2}{2\theta}}. \label{3dimSln4} \end{eqnarray} In order to obtain the physical interpretation of the three-dimensional spacetime described above, let us return to the BTZ black hole spacetime. The non-rotating BTZ solution is represented by the following line element \begin{equation} ds^2 = -\left(-8M + \frac{r^2}{\ell^2}\right)dt^2 +\left(-8M + \frac{r^2}{\ell^2}\right)^{-1}dr^2 +r^2 d\phi^2.
\end{equation} The horizon radius is given by \begin{equation} r_h = \sqrt{8M \ell^2 }, \end{equation} which is determined by $g_{tt}=g_{rr}^{-1}=0$. As shown in the previous sections, we can see this equation as the condition for the mass that is necessary for a horizon with radius $r_h$ to be formed. In this case \begin{equation} M = \frac{r_h^2}{8\ell^2}, \end{equation} is the mass that must be contained inside a circle of radius $r_h$ for the BTZ black hole to have the horizon. We use this condition to judge whether the three-dimensional black hole with the generalized Gaussian source can have a horizon or not. For the spacetime described by (\ref{3dimSln1})-(\ref{3dimSln4}), the mass function is calculated as (\ref{massfunc}). The horizon formation condition is thereby interpreted as the existence of $r_h$ that satisfies \begin{equation} m_n(r_h) = M \left[1-\frac{\Gamma\left(\frac{n}{2}+1, \frac{r_h^2}{2\theta}\right)}{\Gamma\left(\frac{n}{2}+1\right)}\right] \geq \frac{r_h^2}{8\ell^2}, \end{equation} or equivalently, the existence of $x$ that satisfies \begin{eqnarray} h_n(x) &\equiv& \frac{1}{x^2} \left[1-\frac{\Gamma\left(\frac{n}{2}+1, x^2\right)}{\Gamma\left(\frac{n}{2}+1\right)}\right] =\frac{1}{x^2}\frac{\gamma\left(\frac{n}{2}+1, x^2\right)}{\Gamma\left(\frac{n}{2}+1\right)} \nonumber \\ &\geq& \frac{(\sqrt{2\theta})^2}{8M\ell^2}, \label{Condi3dim} \end{eqnarray} where $x=r_h/\sqrt{2\theta}$ as before. The maximum value of $h_n(x)$ determines the existence of a horizon. The behavior of the characteristic function $h_n(x)$ is very simple, because it is just the product of $x^{-2}$ and the regularized lower incomplete gamma function. $h_n(x)$ asymptotically approaches zero as $x \to \infty$, since the upper incomplete gamma function $\Gamma(a, x^2)$ tends to $0$ for $x\to \infty$. However, note that there is a difference between $n=0$ and $n\geq 1$ in the behavior of $h_n(x)$ around $x=0$. Using the expansion of the upper incomplete gamma function \begin{equation} \Gamma(a, t) = \Gamma(a) + t^a \left(-\frac{1}{a}+\frac{t}{a+1}-\frac{t^2}{2(a+2)}+\cdots \right), \end{equation} we find that $h_n(x)$ behaves around $x=0$ as \begin{equation} h_n(x) = \frac{x^n}{\Gamma\left(\frac{n}{2}+2\right)} + O\left(x^{n+2}\right). \end{equation} Therefore we obtain \begin{eqnarray} h_n(0) = \begin{cases} 1 & (n =0), \\ 0 & (n\geq 1). \end{cases} \end{eqnarray} Also, $x^{-2}$ is a monotonically decreasing function and diverges at $x=0$, in contrast to the lower incomplete gamma function $\gamma(a, t)$, which is monotonically increasing and asymptotically approaches a constant. Taking the behaviors of both functions around $x=0$ into consideration, we find that $h_0(x)$ is monotonically decreasing with $h_0(0)=1$, and $h_n(x)$ for $n \geq 1$ has an extremum at a finite $x$ with $h_n(0)=0$. In both cases, $h_n(x)$ asymptotically approaches zero as $x\to \infty$. Their behaviors are compared in Fig.\ref{h_n}. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.5, bb=0 0 259 214]{h0.pdf} \hspace{3cm} \includegraphics[scale=0.5, bb=0 0 259 214]{h1.pdf} \caption{Profiles of $h_0(x)$ (left) and $h_1(x)$ (right). The horizontal lines denote the values of $(\sqrt{2\theta})^2/8M\ell^2$. $h_1(x)$ is chosen as a typical example for $n \geq 1$.
For comparison, $x^{-2}$ and $\gamma(n/2+1,x^2)/\Gamma(n/2+1)$ are plotted as the dashed and dotted curves, respectively.} \label{h_n} \end{center} \end{figure} If the dimensionless constant $(\sqrt{2\theta})^2/8M\ell^2$ is smaller than the maximum value of $h_n(x)$, there exists a horizon. For $n=0$, since $h_0(x)$ decreases monotonically from $h_0(0)=1$ to zero, a horizon is formed when \begin{equation} 0 < \frac{(\sqrt{2\theta})^2}{8M\ell^2} < 1. \label{n=0HorCondi} \end{equation} If $8M\ell^2/(\sqrt{2\theta})^2 = 1$, there is a ``black hole'' at $r=0$ whose radius is zero. So this condition can be read as follows: there must be enough mass within a given radius for a horizon to be formed, compared with the noncommutative parameter $\theta$, which determines how much of the mass is diffused and leaks out of that radius. This is a reasonable claim.\footnote{ In \cite{Rahaman:2013gw} the authors added $M$ to $f_n(r)$ in order to make the spacetime anti-de Sitter around $r=0$, using the ambiguity of the integration constant. By this modification, the mass function becomes \begin{equation} m(r) = M\left(\frac{1}{2}-e^{-\frac{r^2}{2\theta}}\right). \end{equation} The characteristic function $h_0(x)$ diverges negatively for $x \to 0$ and does not take a finite value at $x=0$. Therefore $h_0(x)$ in \cite{Rahaman:2013gw} is not a monotonically decreasing function, but has a maximum at a finite $x$, which makes it possible for the black hole to have two horizons as long as the mass $M$ is large enough compared with the diffusion determined by the noncommutativity defined there. } For $n\geq 1$, the horizon formation condition is given by \begin{equation} \frac{(\sqrt{2\theta})^2}{8M\ell^2} \leq h_n^{*} \quad \Leftrightarrow \quad M \geq \frac{(\sqrt{2\theta})^2}{8h_n^{*} \ell^2} = M_*, \end{equation} where $h_n^*$ is the maximum value of $h_n(x)$, satisfying $0 < h_n^{*} < 1$. When $M > M_*$, there are two horizons. The existence of two horizons is one of the peculiar features for $n \geq 1$. If $M=M_*$, the black hole is extremal, and there exists a black hole with one horizon, whose Hawking temperature is zero. We can expect that, starting from a state with $M > M_*$, the mass $M$ will decrease via Hawking radiation down to the extremal value. The existence of the extremal state means that there will be a remnant after the Hawking radiation even for such an uncharged, non-rotating black hole. One more difference between $n=0$ and $n\geq 1$ concerns the energy conditions they satisfy. As mentioned in \cite{Nicolini:2005vd}, where the four-dimensional case with $n=0$ is considered, the strong energy condition is violated for the energy-momentum tensor given in \cite{Nicolini:2005vd}, but the weak energy condition is satisfied. In the three-dimensional cases, the weak energy condition is satisfied in the whole spacetime only for $n=0$. For $n\geq 1$, the weak energy condition ($\rho \geq 0$ and $\rho + p_i \geq 0$) is translated to \begin{equation} \rho_n \geq 0, \quad \rho_n +p_{nr} \geq 0, \quad \rho_n +p_{n\phi} \geq 0. \end{equation} We can explicitly check that the first and second conditions are always satisfied. The third one is rewritten as \begin{equation} \rho_n +p_{\phi} \propto r^n \left(r^2 -n\theta\right)e^{-\frac{r^2}{2\theta}} \geq 0. \end{equation} Therefore $\rho_n+p_{n\phi}$ is not necessarily positive in the whole space. We leave the detailed analysis and physical meaning of this for a subsequent paper.
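The maxima $h_n^{*}$ are easy to locate numerically. A minimal Python/Scipy sketch (in terms of the regularized lower incomplete gamma function $P(a,t)={\rm gammainc}(a,t)$ one has $h_n(x)=P(\frac{n}{2}+1,x^2)/x^2$, and similarly $h_{4d}(x)=P(\frac{3}{2},x^2)/x$; the search interval and the sample values of $n$ are illustrative choices):
\begin{verbatim}
from scipy.special import gammainc
from scipy.optimize import minimize_scalar

def maximum(h):
    res = minimize_scalar(lambda x: -h(x), bounds=(1e-3, 10.0),
                          method='bounded')
    return res.x, -res.fun

for n in range(4):
    x_star, h_star = maximum(lambda x: gammainc(n / 2 + 1, x**2) / x**2)
    print('3d, n =', n, ':', round(x_star, 3), round(h_star, 4))
# n = 0: the maximizer runs into the lower bound (h_0 -> 1 as x -> 0),
# i.e. there is no interior maximum, hence at most one horizon;
# n >= 1: an interior maximum h_n* < 1 exists, allowing two horizons.

print('4d    :', maximum(lambda x: gammainc(1.5, x**2) / x))
# ~ (1.51, 0.525), the values quoted in Sec.2
\end{verbatim}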
Although the existence of black holes with two horizons for $n \geq 1$ naively suggests that the noncommutativity works as a repulsive force in the vicinity of the centers of the black holes, just like in the RN case, this is not completely true. In fact, in the three-dimensional case we have seen above, there is no black hole with two horizons for $n=0$. This was pointed out in \cite{Myung:2008kp}, and the authors analyzed the difference between the behaviors for $n=0$ and $n=1$. Actually, the regularity in the whole space is realized because of the fuzziness of the source. This can be understood by the Ricci scalar at $r=0$. Using (\ref{Ricci1}), we can calculate the Ricci scalar at the center of the spacetime as \begin{eqnarray} R|_{r=0} &=& \begin{cases} \displaystyle \frac{6}{\ell^2}\left(-1+\frac{8M\ell^2}{(\sqrt{2\theta})^2}\right) & (n=0), \vspace{0.3cm}\\ \displaystyle -\frac{6}{\ell^2} & (n \geq 1). \end{cases} \end{eqnarray} For $n\geq 1$, the Ricci scalar becomes a negative constant at $r=0$, which is consistent with the fact that the mass function with $n \geq 1$ is zero at $r=0$ and the negative cosmological constant $\Lambda = -1/\ell^2$ is dominant there. For $n=0$, there are three cases depending on the value of $8M\ell^2/(\sqrt{2\theta})^2$. To be more concrete, we find that the core geometry is \begin{equation} \frac{8M\ell^2}{(\sqrt{2\theta})^2} \begin{cases} > 1 & : \mbox{de Sitter}, \\ = 1 & : \mbox{flat}, \\ < 1 & : \mbox{anti de Sitter}. \end{cases} \end{equation} As shown in (\ref{n=0HorCondi}), when a horizon is formed, $8M\ell^2/(\sqrt{2\theta})^2$ is always larger than 1. There is then a de Sitter core in the center of the spacetime, which is similar to the four-dimensional case \cite{Nicolini:2005vd}. It is true that in four dimensions there exists a black hole with two horizons even for $n=0$, as long as the mass is large enough. To understand the difference between the three- and the four-dimensional cases, we have to compare the characteristic functions for their horizon formation conditions. In the four-dimensional case, the condition is shown in (\ref{h4d}). The essential part of the characteristic function is given by \begin{equation} h_{4d}(x) \sim \frac{1}{x}\gamma\left(\frac{n+3}{2}, x^2\right), \end{equation} for an arbitrary $n$, where $\sim$ denotes that we are extracting the relevant term. On the contrary, in the three-dimensional case, the counterpart is given by \begin{equation} h_n(x) \sim \frac{1}{x^2}\gamma\left(\frac{n+2}{2}, x^2\right). \end{equation} The essential difference is the power of $x$ in front of the lower incomplete gamma function, which controls the behavior around $x=0$. It clearly originates from the difference of dimensions, and the intrinsic structures due to them that appear in $g_{tt} = g_{rr}^{-1}$, rather than from noncommutativity. To see this more clearly, let us consider a simple toy model in three dimensions whose density is given by \begin{equation} \rho(r) = \begin{cases} \displaystyle \frac{3M}{2\pi R^3}r & (0 \leq r \leq R), \vspace{0.3cm} \\ 0 & (R < r), \end{cases} \end{equation} where $R$ is a characteristic length scale of the system. The profile of $\rho(r)$ is shown in Fig.\ref{Toy}. The mass function for this density is \begin{equation} m(r) = 2\pi\int_0^r dr' r' \rho(r') = \begin{cases} \displaystyle M\left(\frac{r}{R}\right)^3 & (0 \leq r \leq R), \\ M & (R < r).
\end{cases} \end{equation} Note that this model is not realistic in the sense that there is a gap in the density at $r=R$ (and hence a kink in the mass function); however, this is not crucial for the following argument on the existence of a horizon. Actually, though we could consider a density that is smooth at $r=R$ and has an almost identical profile to this toy model, it would not essentially improve the understanding of the horizon formation condition. Then, repeating the same argument as for the three-dimensional black hole with the generalized Gaussian source, we find that the horizon formation condition is given by \begin{equation} \frac{R^2}{8M\ell^2} \leq h_{toy}(y) \equiv \begin{cases} \displaystyle y & (0 < y \leq 1), \vspace{0.3cm}\\ \displaystyle \frac{1}{y^2} & (1 \leq y), \end{cases} \end{equation} where $y$ is a dimensionless parameter defined by $y=r_h/R$. The characteristic function for the horizon formation condition is shown in Fig.\ref{Toy}. \begin{figure} \begin{center} \includegraphics[scale=0.5, bb=0 0 259 214]{ToyDensity.pdf} \hspace{2cm} \includegraphics[scale=0.5, bb=0 0 259 214]{ToyMass.pdf} \caption{Plots of the toy density (left) and the characteristic function $h_{toy}(y)$ (right).} \label{Toy} \end{center} \end{figure} The extreme case corresponds to $y=1 \ \Leftrightarrow \ R = \sqrt{8M\ell^2}$. For $R < \sqrt{8M\ell^2}$, there are two horizons. This existence of two horizons, in particular the existence of the inner horizon in this case, is a result of the void in the mass distribution around the center. This implies that there might exist a black hole with two horizons as long as there is a void around the center and enough mass is condensed in a given region, even if the spacetime noncommutativity does not work directly. \resection{Regular black hole and fuzzy disc} \subsection{Fuzzy disc as a source of a three-dimensional black hole} The analysis so far can be applied to other types of sources inspired by noncommutative geometry in three dimensions. In \cite{Kobayashi:2012ek}, we considered the fuzzy disc, which is a disc-shaped region in a two-dimensional Moyal plane \cite{Lizzi:2003ru, Lizzi:2003hz, Lizzi:2006bu}. A Moyal plane is a flat space defined by noncommutative coordinates satisfying the commutation relation $[x, y]=i\theta$. The algebra of functions on this noncommutative plane is an operator algebra $\hat{\cal A}$ generated by $\hat{x}$ and $\hat{y}$, acting on a Hilbert space ${\cal H}=l^2={\rm span}\{\ket{0},\ket{1},\cdots\}$. Here $\ket{n}$ is an eigenstate of ``the number operator'' \begin{equation} \hat{N} \ket{n} = n\ket{n}, \quad \hat{N} \equiv \hat{a}^{\dagger} \hat{a}, \end{equation} defined by the creation and the annihilation operators, $\hat{a} = (\hat{x}+i\hat{y})/\sqrt{2\theta},\ \hat{a}^{\dagger} = (\hat{x}-i\hat{y})/\sqrt{2\theta}$, respectively. The fuzzy disc is defined by using the operator algebra $\hat{\cal A}$ on a Moyal plane and restricting it to $N\times N$ matrices in the number basis. It is obtained by the projection $\hat{\cal A}_N =\hat{P}_N \hat{\cal A} \hat{P}_N$ through the rank $N$ projection operator, \begin{equation} \hat{P}_{\scriptscriptstyle N}= \sum_{n=0}^{N-1}\hat{p}_n =\hat{p}_0 + \cdots + \hat{p}_{\scriptscriptstyle N-1}, \label{completenessofP} \end{equation} where \begin{equation} \hat{p}_n= \ket{n}\bra{n} \quad (n=0, 1, \cdots). \end{equation} Instead of working with the operators, one can switch to the corresponding functions, called symbols, by means of the Weyl-Wigner correspondence.
The symbol map based on this correspondence associates an operator $\hat{f}$ with a function $f$ as \begin{equation} f(z, \overline{z}) = \bra{z}\hat{f}\ket{z}, \end{equation} where $z = re^{i\phi}$ and $\ket{z}$ is a coherent state defined by \begin{equation} \hat{a}\ket{z} = \frac{z}{\sqrt{2\theta}}\ket{z}. \end{equation} Then, the function corresponding to the projection operator $\hat{p}_n$ is given by \begin{equation} p_n(r) = \bracket{z}{n}\bracket{n}{z} =e^{-\frac{r^2}{2\theta}}\frac{r^{2n}}{n! (2\theta)^n}, \label{ProjFunc} \end{equation} which is one of the realizations of the density distribution described by the generalized Gaussian function in the context of noncommutative geometry. Here we used \begin{equation} \bracket{z}{n} = e^{-\frac{r^2}{4\theta}}\frac{\overline{z}^n}{\sqrt{n! (2\theta)^n}}, \quad \bracket{n}{z} = e^{-\frac{r^2}{4\theta}}\frac{z^n}{\sqrt{n! (2\theta)^n}}. \end{equation} One can obtain the corresponding function for the fuzzy disc as well. Since the fuzzy disc is a sum of the $N$ projection operators from $n=0$ to $n=N-1$, the corresponding function for the fuzzy disc is given by \begin{equation} P_{\scriptscriptstyle N} (r) = \sum_{n=0}^{N-1} e^{-\frac{r^2}{2\theta}}\frac{r^{2n}}{n! (2\theta)^n} =\frac{\Gamma (N, \frac{r^2}{2\theta})}{\Gamma (N)}. \label{incomplete gamma} \end{equation} This function is roughly a radial step function that picks up a disc-shaped region around the origin $r=0$ with radius $R=\sqrt{2N\theta}$. More details on how to find the function corresponding to an operator can be found in \cite{Kobayashi:2012ek}. We now use the fuzzy disc as a source for a noncommutative-geometry-inspired black hole in three dimensions. As a density motivated by the fuzzy disc (\ref{incomplete gamma}), we consider a spacetime with the density distribution \begin{equation} \rho_{\scriptscriptstyle N}^{\scriptscriptstyle FD} (r) = \frac{M}{2\pi \theta N}P_{\scriptscriptstyle N}(r) =\frac{M}{2\pi\theta}\frac{\Gamma (N, \frac{r^2}{2\theta})}{\Gamma (N+1)}. \end{equation} The density distributions (that is, the shapes of the fuzzy discs) and the mass functions $m_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(r)$ for $N=1 ,2, 3$ are shown in Fig.\ref{density} and Fig.\ref{MassFunction}, respectively. They are normalized as \begin{equation} m_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(\infty) = 2\pi\int_0^{\infty} dr' r'\rho_N(r') = M, \end{equation} as before. Since the radius of the fuzzy disc is almost $\sqrt{2N\theta}$, the radii are about $0.44, 0.63, 0.77$ for $N=1, 2, 3$ and $\theta =0.1$. The edge of the fuzzy disc becomes sharper as $N \to \infty$ with $N\theta$ fixed. In Fig.\ref{density_sharpened}, we draw the $N=100$ and $N=1000$ cases with $N\theta = 1$, respectively. \begin{figure}[t] \begin{center} \includegraphics[scale=0.5, bb=0 0 259 214]{density.pdf} \caption{Plot of the density functions of the fuzzy disc type for $N=1$ (solid), $N=2$ (dashed) and $N=3$ (dotted). Here we set $M=1$ and $\theta =0.1$. } \label{density} \end{center} \end{figure} We can see that their radii are almost $\sqrt{2N\theta} \simeq 1.4$ in Fig.\ref{density_sharpened}. \begin{figure}[t] \begin{center} \includegraphics[scale=0.5, bb=0 0 300 214]{MassFunction.pdf} \caption{Plot of the mass functions corresponding to the fuzzy disc type source for $N=1$ (solid), $N=2$ (dashed) and $N=3$ (dotted). Here $M=1$ and $\theta =0.1$.
} \label{MassFunction} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.4, bb=0 0 259 214]{density_sharpened1.pdf} \hspace{4cm} \includegraphics[scale=0.4, bb=0 0 259 214]{density_sharpened2.pdf} \caption{Plot of the density function for $N=100$ (left) and $N=1000$ (right) with $N\theta =1$. Their radii are $\simeq \sqrt{2N\theta}$. } \label{density_sharpened} \end{center} \end{figure} Using the mass function \begin{eqnarray} m_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(r) &=& 2\pi \int_0^r dr' r' \rho_N(r')\\ &=&\frac{M}{\Gamma(N+1)}\left[ \gamma\left(N+1,\frac{r^2}{2\theta}\right) +\frac{r^2}{2\theta}\Gamma\left(N,\frac{r^2}{2\theta}\right) \right], \end{eqnarray} we explicitly write the horizon formation condition for the fuzzy disc as \begin{equation} m_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(r_h) \geq \frac{r_h^2}{8\ell^2}. \end{equation} Introducing $x = r_h/\sqrt{2\theta}$, this condition is rewritten as \begin{equation} h_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(x) = \frac{1}{x^2}\left[1-\frac{\Gamma(N+1, x^2)}{\Gamma(N+1)}\right] +\frac{\Gamma(N, x^2)}{\Gamma(N+1)} \geq \frac{(\sqrt{2\theta})^2}{8M\ell^2}. \end{equation} The profiles of $h_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(x)$ for $N=1, 2, 3$ are shown in Fig.\ref{hFD}. \begin{figure}[t] \begin{center} \includegraphics[scale=0.6, bb=0 0 300 215]{hFD.pdf} \caption{Plot of $h_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(x)$ for $N=1$ (solid), $N=2$ (dashed) and $N=3$ (dotted).} \label{hFD} \end{center} \end{figure} Here $N$ denotes how many annuli are summed. The fuzzy disc with $N=1$ consists of $\ket{0}\bra{0}$ only, the fuzzy disc with $N=2$ is the sum of two annuli corresponding to $\ket{0}\bra{0}+\ket{1}\bra{1}$, and so on. If we choose an appropriate $(\sqrt{2\theta})^2/(8M\ell^2)$, there can exist a black hole. For any $N$, as the characteristic function $h_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(x)$ is monotonically decreasing, we find \begin{equation} \begin{cases} M \geq \displaystyle \frac{(\sqrt{2\theta})^2}{8h_{\scriptscriptstyle N}^{*}\ell^2} & : \mbox{one horizon}, \vspace{0.5cm}\\ M < \displaystyle \frac{(\sqrt{2\theta})^2}{8h_{\scriptscriptstyle N}^{*}\ell^2} & : \mbox{no horizon}, \end{cases} \end{equation} where $h_{\scriptscriptstyle N}^{*}$ is the maximum of $h_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(x)$. The Ricci scalar of this spacetime is positive at $r=0$ for any $N$, which means that there is a de Sitter core there. This is the same as for the three-dimensional black hole with $\rho_0(r) \propto e^{-r^2/(2\theta)}$ as its source. Since the fuzzy disc source does not have a void in its center, its shape is similar to that described by $\rho_0(r)$, so this is reasonable.
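The monotonic decrease of $h_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(x)$ and the resulting one-horizon threshold can be checked numerically. The following minimal Python sketch (our illustration, with placeholder parameter values) evaluates $h_{\scriptscriptstyle N}^{\scriptscriptstyle FD}(x)$ through the regularized incomplete gamma functions, using $\gamma(N+1,x^2)/\Gamma(N+1)=1-\Gamma(N+1,x^2)/\Gamma(N+1)$ and $\Gamma(N,x^2)/\Gamma(N+1)=\Gamma(N,x^2)/(N\Gamma(N))$.
\begin{verbatim}
# Sketch: h_N^FD(x) = gamma(N+1,x^2)/(x^2 Gamma(N+1)) + Gamma(N,x^2)/Gamma(N+1)
import numpy as np
from scipy.special import gammainc, gammaincc  # regularized lower/upper

def h_fd(N, x):
    return gammainc(N + 1, x**2) / x**2 + gammaincc(N, x**2) / N

x = np.linspace(1e-6, 10.0, 100000)
for N in (1, 2, 3):
    hx = h_fd(N, x)
    # the maximum sits at x -> 0 since h_N^FD is monotonically decreasing;
    # one horizon iff M >= (sqrt(2 theta))^2 / (8 h_N^* l^2)
    print(f"N={N}: h_N^* ~ {hx.max():.4f} at x ~ {x[hx.argmax()]:.3f}")
\end{verbatim}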
\subsection{Extension to a four-dimensional black hole} It is interesting to extend this fuzzy disc source to a four-dimensional spacetime. This extension corresponds to a source that is a sum of the thick matter layers considered in \cite{Nicolini:2011fy}, with certain weights given to the layers. In the three-dimensional case with the fuzzy disc as its source, two horizons cannot be formed, as we expected from the fact that there is no void around the center. However, in four dimensions, the situation changes. In four dimensions, we consider the following density motivated by the fuzzy ``disc'', \begin{equation} \rho_{\scriptscriptstyle N}^{\scriptscriptstyle 4dFD}(r) =\frac{3M}{4\pi (\sqrt{2\theta})^3 \Gamma(N+\frac{3}{2})} \Gamma\left(N, \frac{r^2}{2\theta}\right), \end{equation} and the mass function counterpart is given by \begin{equation} m_{\scriptscriptstyle N}^{\scriptscriptstyle 4dFD}(r) = \frac{M}{3\Gamma\left(N+\frac{3}{2}\right)} \left[ \gamma\left(N+\frac{3}{2}, \frac{r^2}{2\theta}\right) +\frac{r^3}{(\sqrt{2\theta})^3} \Gamma\left(N, \frac{r^2}{2\theta}\right) \right]. \end{equation} The horizon formation condition is determined by \begin{equation} h_{\scriptscriptstyle N}^{\scriptscriptstyle 4dFD}(x) =\frac{2}{3\Gamma(N+\frac{3}{2})} \left[ \frac{1}{x}\gamma\left(N+\frac{3}{2}, x^2\right) + x^2 \Gamma(N, x^2) \right], \end{equation} where $x=r_h/\sqrt{2\theta}$ as before. \begin{figure}[t] \begin{center} \includegraphics[scale=0.6, bb=0 0 300 214]{h4dFD.pdf} \caption{Plot of $h_{\scriptscriptstyle N}^{\scriptscriptstyle 4dFD}(x)$ for $N=1$ (solid), $N=2$ (dashed) and $N=3$ (dotted). The horizontal lines denote the values of $(\sqrt{2\theta})^2/8M\ell^2$. Though we draw the horizon formation condition only for $N=1$, the qualitative behavior does not change for an arbitrary $N$. In four dimensions, there is a black hole that can have two horizons if an appropriate condition holds. } \label{h4dFD} \end{center} \end{figure} The behavior of $h_{\scriptscriptstyle N}^{\scriptscriptstyle 4dFD}(x)$ is shown in Fig.\ref{h4dFD}. As we expected, $h_{\scriptscriptstyle N}^{\scriptscriptstyle 4dFD}(x)$ has only one extremum at finite $x$. $h_{\scriptscriptstyle N}^{\scriptscriptstyle 4dFD}(x)$ asymptotically approaches zero when $x \to \infty$ and becomes zero at $x=0$. We find that there is a black hole that can have two horizons as long as there is enough mass $M$ within a given volume. This is because the power of the divergent prefactor of the characteristic function around $x=0$ is weakened from $x^{-2}$ in three dimensions to $x^{-1}$ in four dimensions. We can conclude that there are three cases: two horizons, one horizon (the extremal case), and no horizon. For a black hole with two horizons, there would be a remnant after radiation from the black hole. It is true that the behaviors of the source terms depend on the noncommutativity denoted by $\theta$, but we may have to say that the possibility of a remnant originates from the difference of dimensions and the intrinsic structures of the spacetimes rather than from noncommutativity. \resection{Conclusion and discussion} In this paper, we considered black holes in three- and four-dimensional spacetimes. These black holes have fuzzy sources inspired by noncommutative geometry. Noncommutativity between space coordinates is translated into Gaussian profiles of matter distributions characterized by the noncommutative parameter $\theta$. As investigated by many authors, there can be a black hole with two horizons for such a source when enough mass is included within a given radius. In order to judge whether a horizon is formed or not, we introduced the mass functions. Introducing them makes it possible to regard those black holes as Schwarzschild black holes with effective masses.
As an example, we first showed how the mass function effectively works for investigating the horizon formation condition in the four-dimensional case with the density distribution represented by the generalized Gaussian function argued in \cite{Nicolini:2005vd, Nicolini:2011fy}. Next, we applied this method to the three-dimensional spacetime with the source described by the generalized Gaussian function. In the case of the three-dimensional spacetimes, the horizon formation condition depends on whether the mass function evaluated at a given radius is larger than the mass required for a BTZ black hole with that horizon radius. We analyzed the behaviors of the characteristic function for the horizon formation condition in detail and found how the difference between the three- and the four-dimensional spacetimes affects the horizon formation condition. The essential point for the horizon formation is the existence of a void around the center of the spacetime, which is closely related to the spacetime's dimension rather than to noncommutativity, which is expected to work as a repulsive force as a quantum effect. We saw this by giving a toy model that is independent of noncommutative-geometry-inspired models. Since our point of view, by means of the mass function and the characteristic function, is graphical and intuitively understandable, we can easily apply it to any source. In fact, we also considered the black hole with the source whose density distribution is motivated by the fuzzy disc. For such a fuzzy disc source with an arbitrary radius, a black hole can be formed as long as enough mass is included inside a given circle. This behavior is similar to that of the three-dimensional black hole with the density distribution $\rho \propto e^{-r^2/(2\theta)}$. This is interpreted as follows: neither distribution has a void at the center of the spacetime, but both have de Sitter cores. The only difference between them is the length of the plateau from the origin. We also considered sources that have the same profile as the fuzzy disc in four dimensions. Since the fuzzy ``disc'' is intrinsically a two-dimensional object, this is just a toy model to check how the difference of dimensions affects the horizon formation condition. In four dimensions, there can exist a black hole with two horizons for the source whose density distribution is motivated by the fuzzy disc. As for a void, we want to state that it might be interesting to consider a source whose density distribution has the same profile as the fuzzy annulus we found in \cite{Kobayashi:2012ek, Kobayashi:2015hea}. The density distribution of the fuzzy annulus can be written as a linear combination of an arbitrary number of generalized Gaussian functions. According to the argument so far, we expect that there could exist a black hole with two horizons even in three dimensions. Furthermore, there could be a black hole with more than two horizons, because it is possible to insert gaps between the annuli. It is worthwhile analyzing the interior structure of such a spacetime using the Hawking radiation as a probe \cite{Deng:2016qua}, which is left for a subsequent paper. Detailed analyses of the causal structure, the geodesic motion of a particle, thermodynamics, and so on would also be interesting. \section*{Acknowledgments} We would like to thank T. Asakawa and D. Ida for fruitful discussions. \bibliographystyle{JHEP}
\section{Introduction} The spatial calculation of the radiation dose within the patient's body is a central component of computer-aided treatment planning in the general radiotherapy chain. Thereby, accuracy is key: only a precise dose estimate enables a meaningful, patient-specific assessment of the treatment plan before the onset of therapy \cite{Newhauser2005, Newhauser2007, Bauer2014, Mein2019}. At the same time, requirements regarding the dose calculation speed keep rising. Real-time dose calculation for adaptive radiotherapy (ultimately during treatment) \cite{Mein2018, Jia2012, Wang2017}, massively repeated dose calculation for uncertainty quantification \cite{Unkelbach2008, Kraan2013, Park2013, Bangert_2013, Wahl2017}, and complex simulations for biological effectiveness \cite{Mairani2013, Wieser_2017} are still too time-consuming for widespread clinical application. For particle therapy, the trade-off between dose calculation speed and accuracy is defined by pencil beam algorithms on the one end and Monte Carlo algorithms on the other end. While pencil beam algorithms provide faster dose estimates, Monte Carlo algorithms require a higher computational load \cite{Fippel2004, Szymanowski2002}. At the same time, however, Monte Carlo algorithms clearly outperform pencil beam algorithms regarding accuracy in complex geometries \cite{Schaffner1999, Soukup2005, Taylor2017}. Currently, state-of-the-art deep learning (DL) technology is making an impact at various stages in radiotherapy. This progress is most notable in classical machine learning domains such as outcome analysis and image processing \cite{gulliford2004use, chen2018u, nie2016estimating, bahrami20177t,gabrys2018}. Academic studies investigating deep learning for dose calculation are limited, and they primarily investigate the feasibility of deep learning methods in photon therapy \cite{Nguyen2019, Kontaxis_2020,pmlr-v85-mahmood18a, Kearney_2018}. Furthermore, considerations are restricted to training a \num{2}{D}/\num{3}{D} model (e.g. U-Net \cite{Ronneberger2015}) on the cumulative dose distribution extracted from prior-planned patient data, which poses problems for an application within inverse planning and seamless integration into existing workflows. This also holds true for recent work demonstrating the feasibility of improving proton dose calculation accuracy from the level of pencil beam algorithms to the level of Monte Carlo simulations by learning from prior-planned patient plans \cite{Wu2020}. In this manuscript, we introduce a novel dose calculation approach for proton therapy based on the application of long short-term memory (LSTM) networks \cite{Hochreiter1997}, in an attempt to mimic the physics of proton interactions with matter at the single pencil beam level. We restrict this study to a minimal number of parameter dependencies and establish an end-to-end model that predicts the dose distribution based on the input CT. To this end, the \num{3}{D} proton dose distribution of a pencil beam within the patient is understood as a sequence of two-dimensional dose slices along the beam direction. LSTM networks, unlike conventional feed-forward networks, have a hidden inner state enabling efficient processing of sequences of data and effective propagation of information along the sequence \cite{Sutskever_2014}. Currently, LSTM networks are applied highly successfully to time-series data, e.\,g.\ stemming from speech or video \cite{Graves_2014,Donahue, ng2015short, kay2017kinetics}.
To the best of our knowledge, this is the first work to exploit ANNs, and specifically LSTM networks, to perform proton dose calculation. The designed approach will be further motivated in the following section \ref{sec_mNm}, along with details on our LSTM architecture and training process. Section \ref{sec_results} presents results from a dose calculation accuracy study on model geometries and real-world lung patient cases. The limitations of our study and general opportunities provided through LSTM network proton dose calculations are discussed in section \ref{sec_discussion}; section \ref{sec_conclusion} concludes the paper. \section{Material and methods} \label{sec_mNm}The elementary task underlying the dose calculation for an entire intensity-modulated proton therapy treatment is the calculation of the dose of a single proton pencil beam. In this context, a pencil beam denotes a bunch of protons leaving the treatment nozzle with a reasonably confined momentum distribution, as determined by the beam shaping devices. Consequently, our study focuses on considerations for individual pencil beams. This reduction was chosen to study the fundamental characteristics of LSTM network-based dose calculations without averaging effects in treatment plans comprised of thousands of pencil beams, which may conceal important aspects regarding the accuracy of the physical dose deposition. \subsection{Conventional proton dose calculation} With conventional dose calculation approaches, the \num{3}{D} dose distribution of a single pencil beam within the patient body $\boldsymbol{\mathcal{D}}$ is a function of the initial phase space (i.\,e., the initial position and momentum distribution) of the particles $\boldsymbol{\mathcal{P}}$ and the \num{3}{D} patient geometry $\boldsymbol{\mathcal{G}}$. \begin{align} \boldsymbol{\mathcal{D}} = f(\boldsymbol{\mathcal{P}}, \boldsymbol{\mathcal{G}}) \end{align} The patient geometry is usually determined with a computed tomography (CT) scan, where the Hounsfield units (HU) are translated into material composition distributions for Monte Carlo algorithms with custom calibration curves. Based on samples from the initial phase space of the particles, Monte Carlo algorithms simulate the path of individual protons and the associated energy deposition within the patient, as determined by their interactions with the patient geometry. The final dose distribution is then given by the sum of the deposited energy of all simulated particles. While this approach allows for highly precise dose estimates with sufficient histories being simulated, even in challenging geometries, the repeated simulation of individual particles is very time-consuming. In our study, the Monte Carlo dose calculations were carried out with the Topas (TOol for PArticle Simulation) wrapper \cite{Perl2012} for Geant4 \cite{Agostinelli:2002hh}. The initial particle energy was \SI{104.25}{\mega\electronvolt} for all simulations, providing a reasonable trade-off between a meaningful penetration depth and acceptable dose calculation as well as LSTM network training run-times during prototyping. \subsection{Neural networks for proton dose calculation} In order to train a neural network for proton dose calculations, it is necessary to learn a mapping from the \num{3}{D} patient geometry $\boldsymbol{\mathcal{G}}$ and the initial particle phase space $\boldsymbol{\mathcal{P}}$ to the \num{3}{D} dose distribution $\boldsymbol{\mathcal{D}}$, as laid out in the previous section.
In the following two subsections, we explain (1) our parameterization of the proton dose calculation problem for a neural network and (2) the rationale underlying our network architecture. \subsubsection{Problem parameterization} \label{sec:parameterization} In order to minimize the complexity of the training process for the neural network, we restrict the transformation to be learned for dose calculation to a single initial energy. Without loss of generality (we can simply train one network per initial energy), this effectively reduces the space of possible dose calculation scenarios for the network and enables a denser sampling of the space of possible patient geometries and dose distributions. The space of possible dose distributions can be further confined when switching from the patient coordinate system into the beam's eye view coordinate system. Here, the dose deposition is always oriented along the $z^{\prime}$-axis, as shown in figure \ref{fig:cubeExtract}. As the lateral extent and the particle range can be considered finite and are roughly known a priori for any given initial energy, it is further possible to perform a lateral and longitudinal clipping of the region of interest. In our case, we use an isotropic resolution of \SI{2}{\milli\meter} with $m=15$ voxels in the lateral direction and $l=150$ voxels in the longitudinal direction (for the patient setup). \begin{figure}[htb] \begin{center} \centering \subfigure[]{\includegraphics[width=0.6\linewidth]{./imgs/MandM_1a.pdf}\label{fig:MandM_1a}} \\ \subfigure[]{\includegraphics[width=0.63\linewidth]{./imgs/MandM_1b.pdf}\label{fig:MandM_1b}} \\ \subfigure[]{\includegraphics[width=0.63\linewidth]{./imgs/MandM_1c.pdf}\label{fig:MandM_1c}} \\ \caption{(a) Dose distribution of a single pencil beam with initial energy 104.25 MeV impinging from gantry angle 240\textdegree\ overlaying the patient CT. The clipping region is highlighted with a red box. (b) Respective CT slice and (c) dose distribution in the beam's eye view coordinate system.} \label{fig:cubeExtract} \end{center} \end{figure} Further, we chose to perform the learning not on HU maps but on maps of the relative stopping power (RSP)\footnote{The RSP denotes the stopping power of the material relative to that of a water phantom.}, which are also used for conventional pencil beam algorithms \cite{Wieser2017, Schaffner1999}. The RSP values are in turn translated into the respective water density for MC simulations. In the described parameterization, we deal with a supervised regression problem that maps the geometry input data $\boldsymbol{\mathcal{G}}_i\in \mathbb{R}^{l\times m\times m}_{+}$ to real-valued dose outputs $\boldsymbol{\mathcal{D}}_{i}\in\mathbb{R}^{l\times m\times m}_{+}$. \subsubsection{Network architecture considerations} Due to the \num{3}{D} nature of the problem, the intuitive implementation of a neural network for dose calculation is a \num{3}{D} model such as a \num{3}{D} U-net. These models have seen much attention recently, with continuous advances in GPU hardware, and specifically GPU memory sizes, allowing the processing of big data. However, the particle dose calculation problem exhibits a geometrical peculiarity that motivates a more specialized network architecture: dose deposition takes place almost exclusively in a sequential \emph{upstream-to-downstream} manner, i.\,e., the highly energetic protons predominantly travel along one direction with moderate lateral scatter until they stop.
This characteristic behavior allows for a representation of the \num{3}{D} input and output as a sequence of two-dimensional slices, as illustrated in figure \ref{fig:3Dto2Dseq}. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{./imgs/MandM_2_mod.pdf} \caption{Sequential, spatio-temporal modeling of the proton dose calculation problem. Each $m \times m$ slice of the input is flattened into a 1D input array $z^{<t>}$. Each input array is then passed to the RNN/LSTM \protect\footnotemark[1] cell, generating a hidden inner state $h^{<t>}$ and an output $a^{<t>}$. The hidden inner state is passed as input information for subsequent slices ($l$ slices in total), while the output is passed to a fully connected neural network back end to generate an $m \times m$ output slice. The output is then compared to the original ground truth by means of a mean squared error loss.} \label{fig:3Dto2Dseq} \end{figure} Hence, the dose calculation problem has strong similarities to conventional video analysis in terms of spatio-temporal features. In action recognition tasks, for instance, models have to extract spatial features of objects within each frame, and temporal features to interpret the movement of those objects. Simulating the protons' traversal through matter, and consequently their dose deposition, is very similar to this task. It is completely determined by the upstream geometry, i.\,e., the geometry previously "seen" by the protons along their track through the patient. This implies causality from upstream to downstream within the \num{3}{D} volume, and it suggests a special role for regions in the input data that have high gradients in their RSP values (e.\,g. material interfaces to bones with high RSP and cavities with low RSP). Thereby, the effect of each heterogeneity on the dose deposition is most pronounced at the end of the proton range, as demonstrated in figure \ref{fig:informationPass}. \begin{figure}[htb] \begin{center} \centering \subfigure[]{\input{./imgs/MandM_3a.tikz}\label{fig:MandM_3a}} \\ \subfigure[]{\input{./imgs/MandM_3b.tikz}\label{fig:MandM_3b}} \caption{Effect of a heterogeneity on the shape of a pencil beam dose profile. The pencil beam is formed by \num{e6} protons with an initial energy of \SI{104.25}{\mega\electronvolt} passing through (a) water (\SI{1.0}{RSP}). (b) The cuboid heterogeneity (\SI{2.5}{RSP}) has \SI{10}{\milli\meter} width along the $z^{\prime}$ axis, \SI{14}{\milli\meter} width along the $x^{\prime}$ axis, and \SI{1}{\milli\meter} distance to the center of the proton beam. The effect of the cuboid mainly manifests in a bimodal Bragg peak region extending from $\approx$ \SIrange{60}{80}{\milli\meter}, i.\,e., \SI{20}{\milli\meter} after the heterogeneity.} \label{fig:informationPass} \end{center} \end{figure} While recent studies \cite{Kim2019, Hou, Ye} have shown that \num{3}{D-CNN} models are capable of extracting spatio-temporal features from sequential data, care has to be taken with regard to the length of the sequences. In our application domain, protons may have ranges of more than \SI{300}{\milli\meter} and consequently a very high number of slices, which may lead to issues with achieving an adequate receptive field for detecting such long-range coherence.
The \num{3}{D-CNN} by design has the disadvantage of high computational complexity and excessive memory usage \cite{Kim2019}. In order to capture temporal coherence, the total number of parameters of \num{3}{D-CNN}s increases many-fold depending on the size of the temporal receptive field. Processing sequences with long dependencies requires a model capable of passing information through the series. Recurrent neural networks (RNNs), with their hidden inner states, are capable of connecting many conventional one-input-to-one-output neural networks, resulting in a model suitable for many-input-to-many-output layouts. LSTM networks, an evolved version of simple RNNs, are capable of effectively transmitting relevant information through very long series thanks to their internal mechanism. Moreover, unidirectional LSTM models can fully adapt to the upstream-to-downstream propagation scheme of protons, eliminating dependencies from downstream to upstream and resulting in a substantially reduced number of parameters for the model. \footnotetext[1]{Note that this figure is illustrating the RNN/LSTM network in an \emph{unfolded} fashion \cite{LeCun2015}.} \subsubsection{LSTM networks} \label{sec:LSTM networks} RNN models are distinguished by having a hidden internal state, enabling them to retain temporal information, similar to a memory. Considering our problem parameterization depicted in figure \ref{fig:3Dto2Dseq}, with an input sequence $\left\{\boldsymbol{z}_{1}, \ldots, \boldsymbol{z}_{t-1}, \boldsymbol{z}_{t}, \boldsymbol{z}_{t+1}, \ldots, \boldsymbol{z}_{l}\right\}$ and an output sequence $\left\{\boldsymbol{a}_{1}, \ldots, \boldsymbol{a}_{t-1}, \boldsymbol{a}_{t}, \boldsymbol{a}_{t+1}, \ldots, \boldsymbol{a}_{l}\right\}$, where $\boldsymbol{z}_{t}=\left[z_{1}, \ldots, z_{m^2}\right]$ and $\boldsymbol{a}_{t}=\left[a_{1}, \ldots, a_{n}\right]$, RNNs calculate a hidden inner state $\boldsymbol{h}_{t} \in \mathbb{R}^{K}$ with $K$ hidden units, and the output $\boldsymbol{a_t}$, via the following recurrence equations: \begin{equation}\begin{array}{l} \boldsymbol{h}_{t}=f\left(\boldsymbol{W}_{I H} \boldsymbol{z}_{t}+\boldsymbol{W}_{H H} \boldsymbol{h}_{t-1}+\boldsymbol{b}_{H}\right), \\ \boldsymbol{a}_{t}=f\left(\boldsymbol{W}_{H O} \boldsymbol{h}_{t}+\boldsymbol{b}_{O}\right), \end{array}\end{equation} where $f$ denotes an element-wise non-linear activation function. $\boldsymbol{W}_{IH}$, $\boldsymbol{W}_{HH}$ and $\boldsymbol{W}_{HO}$ are the input-to-hidden, hidden-to-hidden, and hidden-to-output weight matrices, respectively. $\boldsymbol{b}_H$ and $\boldsymbol{b}_O$ denote the hidden and output layer biases. The weight matrices and the biases are shared parameters that, given a properly converging training process during backpropagation, map relations between the input and output as desired. Simple RNNs, however, often suffer from the vanishing or exploding gradient problem during backpropagation with an increasing number of events in the input sequence. These problems arise due to the recursive derivative operations taking place along the sequence, which may lead to very small or very large gradients; these in turn disrupt the training process and consequently restrict the RNN's capability to memorize long-term dependencies. With the motivation to overcome the long-term dependency problem, LSTM networks were first introduced by Hochreiter et al. \cite{Hochreiter1997}.
The key innovation in this context is a memory cell state $\boldsymbol{c}_t$, with associated update mechanisms via gates, in addition to the hidden state $\boldsymbol{h}_t$. In particular, the memory cell state has the capability of remaining unaltered unless the three individually trained neural network layers, namely the \emph{input}, \emph{forget}, and \emph{cell} gates, determine to update the information within the memory cell state. This mechanism inhibits repetitive multiplication of the gradients in the course of the backpropagation algorithm and ensures efficient passage of relevant information through the sequence, as shown in figure \ref{fig:LSTM}. Finally, an additional neural network, the \emph{output gate}, is trained to select the corresponding information to output for the current time interval. Mathematically, each LSTM cell enclosing the mentioned gates can be described via the following equations \begin{equation}\begin{aligned} \boldsymbol{i}_{t} &=\sigma\left(\boldsymbol{W}_{i_1} \boldsymbol{z}_{t}+\boldsymbol{W}_{i_2} \boldsymbol{h}_{t-1}+\boldsymbol{b}_{i}\right) \\ \boldsymbol{f}_{t} &=\sigma\left(\boldsymbol{W}_{f_1} \boldsymbol{z}_{t}+\boldsymbol{W}_{f_2} \boldsymbol{h}_{t-1}+\boldsymbol{b}_{f}\right) \\ \boldsymbol{g}_{t} &=\tanh \left(\boldsymbol{W}_{g_1} \boldsymbol{z}_{t}+\boldsymbol{W}_{g_2} \boldsymbol{h}_{t-1}+\boldsymbol{b}_{g}\right) \\ \boldsymbol{o}_{t} &=\sigma\left(\boldsymbol{W}_{o_1} \boldsymbol{z}_{t}+\boldsymbol{W}_{o_2} \boldsymbol{h}_{t-1}+\boldsymbol{b}_{o}\right) \\ \boldsymbol{c}_{t} &=\boldsymbol{f}_{t} \odot \boldsymbol{c}_{t-1}+\boldsymbol{i}_{t} \odot \boldsymbol{g}_{t} \\ \boldsymbol{h}_{t} &=\boldsymbol{o}_{t} \odot \tanh \left(\boldsymbol{c}_{t}\right), \end{aligned}\end{equation} where $\boldsymbol{W}_{\xi_1}$, $\boldsymbol{W}_{\xi_2}$, $\boldsymbol{b}_\xi$, $\xi \in \{i, f, g, o\}$, are the input-to-hidden weight matrices, hidden-to-hidden weight matrices, and biases that jointly constitute the learnable parameters of the input, forget, cell, and output gates, respectively. $\sigma$ is the sigmoid function restricting the outputs to values between zero and one, ensuring a functionality analogous to gates. $\odot$ denotes the element-wise Hadamard product. The gates regulate the error propagation in the training process, thereby preventing the vanishing and exploding of the derivatives. Many variants of the original LSTM \cite{Graves2005, Gers, Gers2000RecurrentNT} have been introduced so far, and this study uses the PyTorch\footnote[1]{https://pytorch.org/docs/stable/nn.html\#lstm} implementation of this architecture. \begin{figure}[htb] \begin{center} \centering \includegraphics[width=0.75\linewidth]{./imgs/MandM_4.pdf} \caption{A schematic diagram of the internal module of (a) a simple RNN cell and (b) an LSTM cell.} \label{fig:LSTM} \end{center} \end{figure} \subsubsection{LSTM Training} Training of the network was carried out with an Adam optimizer \cite{kingma2014adam}, with a learning rate of \num{e-5} and a mean squared error (MSE) loss function. The LSTM cells featured one layer with \num{1000} neurons as the internal layer, followed by a fully connected neural network as the back end. The back-end network features one hidden layer with \num{100} neurons and an output layer of size $m^2$ to generate the slices. The dose cubes were normalized to values in the \numrange{0}{1} range.
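For concreteness, the described architecture can be written down in a few lines of PyTorch. The following snippet is a minimal sketch reconstructed from the description above, not the original training code; the ReLU activation in the back end and the batch size are our assumptions, and the tensors are random stand-ins for the RSP and dose cubes.
\begin{verbatim}
import torch
import torch.nn as nn

m, l = 15, 150  # lateral / longitudinal voxel counts (patient setup)

class DoseLSTM(nn.Module):
    def __init__(self, m=15, hidden=1000, fc_hidden=100):
        super().__init__()
        # single-layer LSTM over the sequence of flattened m x m slices
        self.lstm = nn.LSTM(input_size=m * m, hidden_size=hidden,
                            num_layers=1, batch_first=True)
        # fully connected back end: one hidden layer, m^2 outputs per slice
        self.backend = nn.Sequential(nn.Linear(hidden, fc_hidden),
                                     nn.ReLU(),  # activation: our assumption
                                     nn.Linear(fc_hidden, m * m))

    def forward(self, g):          # g: (batch, l, m*m) flattened RSP slices
        a, _ = self.lstm(g)        # a: (batch, l, hidden)
        return self.backend(a)     # (batch, l, m*m) predicted dose slices

model = DoseLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.MSELoss()

g = torch.rand(8, l, m * m)        # stand-in input cubes
d = torch.rand(8, l, m * m)        # stand-in normalized MC dose cubes
loss = loss_fn(model(g), d)        # one illustrative training step
optimizer.zero_grad(); loss.backward(); optimizer.step()
\end{verbatim}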
Empirically, we found no improvement in test loss after about \num{100} epochs; beyond that point, overfitting on the training set was observed. Training of the network takes \numrange{3}{4} hours for the patient data set described in the next section on a GeForce GTX \num{970} GPU. \subsection{Dosimetric evaluation} \label{sec:dosimetricevaluatoni} \subsubsection{Phantom cases} In order to study the performance of the proposed neural network dose calculation algorithms in an idealized setting, we first carried out simulations on phantom geometries featuring cuboid inhomogeneities of varying dimensions and densities placed randomly within a water phantom, as shown in figure \ref{fig:waterbox}. For this task, \num{2500} phantom samples were generated with corresponding dose distributions from TOPAS Monte Carlo simulations. This number was raised to \num{10000} samples by augmenting with rotated (\ang{90} angles) replicas of the cubes. \num{8000} samples were used as the training set, and \num{2000} samples were used as the test set. All the samples were simulated with $\sim\num{1.1e6}$ histories on average, resulting in less than \SI{1}{\percent} statistical uncertainty\footnote[2]{The uncertainty is calculated by dividing the highest standard deviation by the dose in that voxel.}. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{./imgs/MandM_5.pdf} \caption{Phantom case setup; different geometric problems were generated by varying the slab's dimensions along the y and z axes, varying the alignment of the slab along the x and y axes, and varying the density of both the water and the slab.} \label{fig:waterbox} \end{figure} \subsubsection{Lung cases} In order to study the performance of the proposed neural network dose calculation algorithms for real-world patient cases, we further considered dose calculation tasks on lung patient cases exhibiting highly pronounced inhomogeneities between normal tissue, lung tissue, and bony anatomy (rib cage \& spine). For this task, \num{1000} lung case samples were generated with corresponding dose distributions from TOPAS Monte Carlo simulations (\num{4000} after data augmentation). All samples stem from the same patient. Different geometric problems could be extracted from one patient by sampling the beam orientation in \ang{5} steps from \ang{0} to \ang{355} in combination with isocenter position samples in \SI{10}{\milli\meter} shifts spanning the lung along the $z$ axis, as shown in Fig. \ref{fig:patientSetup}. All the samples were simulated with $\num{2.5e6}$ histories on average, ensuring a statistical uncertainty of \SIrange{1}{2}{\percent}. \num{3200} samples were used as the training set, 800 samples were used as the test set. The original CT was downsampled to an isotropic \SI{2}{\milli\meter} resolution. Consequently, the resulting HU map was transformed to an RSP map via HU look-up tables, yielding RSP values between \num{0} for vacuum and \num{2.5} for denser bone structures.
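The combinatorics of this sampling can be illustrated with a short sketch (our illustration; the exact $z$-range spanning the lung is patient-specific, and the values below are placeholders):
\begin{verbatim}
import numpy as np

gantry_angles = np.arange(0, 360, 5)    # 0, 5, ..., 355 degrees (72 values)
z_shifts_mm = np.arange(160, 300, 10)   # placeholder isocenter span along z

configs = [(ga, z) for ga in gantry_angles for z in z_shifts_mm]
print(len(configs), "pencil-beam geometries before augmentation")
\end{verbatim}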
\begin{figure}[htb] \begin{center} \centering \subfigure[\hspace{1mm}$GA = $ \ang{35}, $z = $ \SI{174}{\milli\meter}]{\input{./imgs/lungDose_a.tikz}\label{fig:lungDose_a}} \subfigure[\hspace{1mm}$GA = $ \ang{100}, $z = $ \SI{194}{\milli\meter}]{\input{./imgs/lungDose_b.tikz}\label{fig:lungDose_b}}\\ \subfigure[\hspace{1mm}$GA = $ \ang{170}, $z = $ \SI{244}{\milli\meter}]{\input{./imgs/lungDose_c.tikz}\label{fig:lungDose_c}} \subfigure[\hspace{1mm}$GA = $ \ang{295}, $z = $ \SI{224}{\milli\meter}]{\input{./imgs/lungDose_d.tikz}\label{fig:lungDose_d}} \caption{Lung case setup; generating different geometric problems for the training data set by varying the gantry angles ($GA$) and shifting the isocenter along the $z$ axis.} \label{fig:patientSetup} \end{center} \end{figure} \subsubsection{$\gamma$-index analysis} In order to compare \num{3}{D} dose distributions, $\gamma$-analysis \cite{Low1998} was performed with a \SI{0.5}{\percent} dose difference and \SI{1}{\milli\meter} distance-to-agreement criterion ([\SI{0.5}{\percent} , \SI{1}{mm}] in short) for the water box phantom, and a [\SI{0.5}{\percent} , \SI{2}{mm}] criterion was chosen for the patient case. The $\gamma$-analysis represents the agreement of the two \num{3}{D} dose distributions with a \num{3}{D} numerical index $\gamma$, having values less than \num{1} for voxels which \emph{pass} the agreement requirements, and values higher than \num{1} for voxels which \emph{fail} the agreement requirements. Consequently, the $\gamma$-index pass rate denotes the percentage of voxels with $\gamma < 1$ out of all voxels with a $\gamma$-index value higher than zero (we do not consider all voxels in order to avoid the inclusion of passing voxels beyond the range of the dose). Unlike dose difference maps, the $\gamma$-index provides a more holistic assessment of dose distribution differences, considering not only local dosimetric discrepancies but also spatial shifts (e.\,g. due to range offsets). Although computationally expensive, the $\gamma$-index pass rate reduces the discrepancy of two \num{3}{D} dose distributions to a single number, which facilitates the large-scale comparisons needed for our study working with several thousand training and test samples. \section{Results} \label{sec_results} \subsection{Phantom cases} The prepared dataset for the water box phantom was used for training of the simple RNN and the LSTM network. Figure \ref{fig:lossPlotBox} shows the MSE loss plot for the training of both architectures for \num{100} epochs. \begin{figure}[h] \centering \includegraphics[width=0.65\textwidth]{imgs/lossPlot_Box.pdf} \caption{MSE loss function for training on the water box phantom data set, for both the simple RNN architecture (top) and the LSTM architecture (bottom).} \label{fig:lossPlotBox} \end{figure} The performance of the two networks was further evaluated dosimetrically for the test set. Table \ref{tbl:statisticsOfArhcitectures} presents the outcome of the $\gamma$-analysis comparing the estimated dose from the networks with the ground truth MC calculations. While both networks seem generally suited for dose calculation with mean pass rates $>$\SI{97.88}{\percent} (figure \ref{fig:histogramCompare}), the LSTM network outperforms the RNN by about \num{1.4} percentage points. We have observed that the differences between the LSTM network and the RNN mainly originate from cases with pronounced heterogeneities, as shown in Figure \ref{fig:RNNvsLSTMsimpleExmaple}.
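Since the $\gamma$-evaluation drives all quantitative comparisons in this section, its logic can be sketched in a few lines. The following snippet is a brute-force illustration for two dose cubes on a shared grid (our simplification, not the evaluation code used for the reported numbers); the edge handling via wrap-around and the fixed search radius are shortcuts a production implementation would refine.
\begin{verbatim}
# Sketch of a global gamma pass rate for two dose cubes on the same grid.
# ref/eva: 3D numpy arrays; spacing in mm; dd as fraction of max(ref);
# dta in mm. Voxels with zero reference dose are excluded from the rate.
import numpy as np

def gamma_pass_rate(ref, eva, spacing=2.0, dd=0.005, dta=1.0):
    norm_d = dd * ref.max()                   # absolute dose criterion
    r = int(np.ceil(2 * dta / spacing))       # search radius in voxels
    gamma2 = np.full(ref.shape, np.inf)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            for k in range(-r, r + 1):
                # beware np.roll wrap-around at the cube edges (sketch only)
                shifted = np.roll(eva, (i, j, k), axis=(0, 1, 2))
                dist2 = (i**2 + j**2 + k**2) * spacing**2
                g2 = (shifted - ref)**2 / norm_d**2 + dist2 / dta**2
                gamma2 = np.minimum(gamma2, g2)
    mask = ref > 0                            # only voxels carrying dose
    return 100.0 * (np.sqrt(gamma2[mask]) < 1).mean()

# e.g. gamma_pass_rate(mc_dose, lstm_dose, spacing=2.0, dd=0.005, dta=1.0)
\end{verbatim}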
In the example of Figure \ref{fig:RNNvsLSTMsimpleExmaple}, the LSTM model demonstrates an evident improvement in comparison to the RNN model, which fails to predict the bimodal Bragg peak behind the density interface, resulting in an increase of about $8$ percentage points in the overall $\gamma$-index pass rate. \begin{table}[h] \begin{minipage}{0.999\textwidth} \centering \caption{\label{tbl:statisticsOfArhcitectures} $\gamma$-index analysis comparing the two trained models in the water phantom case ([0.5\%, 1mm]).} \begingroup \setlength{\tabcolsep}{14pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{lcccc} \hline\hline &mean&std&min&max \\ [0.5ex] \hline RNN& 97.88 & 2.12 & 89.42 & 99.8\\ LSTM& 99.29 & 0.8834 & 94.8 & 100\\ [1ex] \hline\hline \end{tabular} \endgroup \end{minipage} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.65\textwidth]{./imgs/histogramCompareModels} \caption{Comparison of the $\gamma$-index pass rate distributions of the RNN and LSTM models over all test cases.} \label{fig:histogramCompare} \end{figure} \begin{figure}[htb] \begin{center} \centering \subfigure[\scriptsize{ Input CT (top), ground truth MC calculation (bottom)}]{\includegraphics[width=0.49\linewidth]{./imgs/gammaMap2by2_MC}\label{fig:gammaMap2by2_MC}} \\ \subfigure[\scriptsize{ RNN dose estimation (top), $\gamma$ map (bottom) \newline $\gamma$-index pass rate = 89.4 \%}]{\includegraphics[width=0.49\linewidth]{./imgs/gammaMap2by2_RNN}\label{fig:gammaMap2by2_RNN}} \hfill \subfigure[\scriptsize{ LSTM dose estimation (top), $\gamma$ map (bottom) \newline $\gamma$-index pass rate = 97.1 \%}]{\includegraphics[width=0.49\linewidth]{./imgs/gammaMap2by2_LSTM}\label{fig:gammaMap2by2_LSTM}} \caption{\label{fig:RNNvsLSTMsimpleExmaple}Performance comparison of the (b) RNN and the (c) LSTM network with the (a) ground truth MC calculation for a sample test case (\SI{104.25}{\mega\electronvolt}, with a \SI{14}{\milli\meter} wide slab of \SI{2.5}{RSP}; $\gamma$-analysis criterion = [\SI{0.5}{\percent} , \SI{1}{mm}]) } \end{center} \end{figure} \subsection{Patient cases} Due to the promising results of the LSTM network on the phantom cases, especially those with pronounced heterogeneities, we also trained this architecture on the lung patient data set, as shown in Figure \ref{fig:lossLSTM_patient}. Table \ref{tbl:gammaIndexPassRatePatient} summarizes the outcome of the $\gamma$-analysis for the set-aside test set. Note that we also included a relaxed $\gamma$ criterion of [\SI{0.5}{\percent} , \SI{2}{mm}] for the patient case. Due to the generation of the training and test set samples for the patient case by varying, among others, the gantry angles, we have to deal with interpolation effects in the cube extraction that affect the $\gamma$-analysis at the previously used [\SI{0.5}{\percent} , \SI{1}{mm}] criterion.
\begin{table}[h] \begin{minipage}{0.99\textwidth} \centering \caption{\label{tbl:gammaIndexPassRatePatient} $\gamma$-index analysis comparing the trained LSTM model with MC calculations for the patient case.} \begingroup \setlength{\tabcolsep}{14pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{lcccc} \hline\hline $\gamma$-analysis criteria&mean&std&min&max \\ [0.5ex] \hline $[\SI{0.5}{\percent} , \SI{1}{mm}]$& 94.47 & 3.78 & 80.43 & 99.58\\ $[\SI{0.5}{\percent} , \SI{2}{mm}]$& 99.33 & 0.92 & 94.91 & 100\\ \hline\hline \end{tabular} \endgroup \end{minipage} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.65\textwidth]{./imgs/lossPlot_Patient.pdf} \caption{MSE loss function for training on the patient case data set with the LSTM architecture.} \label{fig:lossLSTM_patient} \end{figure} Figure \ref{fig:LSTMpatientShowcase} shows the performance of the trained network on a representative test sample. In particular, we want to point out the capability of the trained network to deal with oblique gantry angles, where voxels with vanishing density prior to entering the patient are successfully recognized and not confused with low-density lung voxels lying within the patient. Furthermore, the LSTM network correctly predicts a smeared-out Bragg peak without a distinct maximum at the end of the particles' range, which is due to low-density lung tissue at the location of the Bragg peak. Also, the irregular shape of the distal fall-off, which originates from inhomogeneities in the pencil beam track, is qualitatively predicted by the network dose calculation algorithm. Additional samples are showcased in appendix \ref{sec:appendix_a}. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{imgs/patientResult_1.pdf} \caption{Dose estimation result for a sample test case (\SI{104.25}{\mega\electronvolt}). Starting from the top: the input patient CT, the ground truth MC dose distribution, the dose estimated by the LSTM network, and the $\gamma$-index map ([\SI{0.5}{\percent} , \SI{2}{mm}]).} \label{fig:LSTMpatientShowcase} \end{figure} \subsection{Model generalization} In order to assess the generalization of the LSTM dose calculation engine to previously unseen patients, i.\,e., data from other patients that was not considered during training, the performance of the network was evaluated on five additional lung cancer patients. For each patient, \num{200} pencil beams with randomly selected gantry angles and isocenter shifts were prepared, and the deposited dose was calculated using MC simulations. Table \ref{tab:generalization} lists the result of comparing the MC calculations with the network estimations using $\gamma$-index analysis.
\begin{table} \caption{$\gamma$-index analysis on 5 different lung cancer patients ([\SI{0.5}{\percent} , \SI{2}{mm}])} \centering \begingroup \setlength{\tabcolsep}{14pt} \renewcommand{\arraystretch}{1.4} \begin{tabular}{lcccc} \hline\hline &mean&std&min&max \\ [0.5ex] \hline Patient 0$^{1}$& 99.33 & 0.92 & 94.91 & 100\\ \hline Patient 1& 99.10 & 0.93 & 93.01 & 99.99\\ Patient 2& 99.01 & 1.03 & 94.96 & 100\\ Patient 3& 99.15 & 1.00 & 94.27 & 100\\ Patient 4$^{2}$& 97.94 & 2.27 & 85.07 & 100\\ Patient 5$^{2}$& 96.34 & 4.09 & 75.79 & 99.96\\ \hline \multicolumn{5}{l}{$^{1}$\scriptsize{Network has been trained on this patient.}} \\ \multicolumn{5}{l}{$^{2}$\scriptsize{Patients with very low RSP values in the lung (further discussed in section \ref{sec_discussion})}} \\ \end{tabular} \endgroup \label{tab:generalization} \end{table} \subsection{Run-times} In order to compare the run-times of the trained network as opposed to the MC algorithm, table \ref{tab:runtimes} lists the average run-times for estimating the dose of a single pencil beam for the 5 above-mentioned patients, for both methods. The MC simulations were performed with Topas on a calculation node with \num{28} virtual CPUs on an Openstack\footnote{https://www.openstack.org/} cluster. For the trained network, the run-times were measured on two systems with different GPUs. Depending on the hardware used, we measure average run-times of \SIrange{6}{23}{\milli\second} for the LSTM approach. Note that these run-times include the time required to send the input CT cube for each pencil beam from CPU to GPU and, vice versa, the resulting dose cube back. However, in applications such as adaptive radiotherapy, which require repetitive online dose estimations, the input CT cubes can be prepared and sent to the GPU in advance. Consequently, the only relevant run-time would be the network feed-forward, i.e. matrix multiplication, run-times, measured to be \SIrange{1.5}{2.5}{\milli\second} for the two hardware stacks. The average Topas run-time was \SI{1160}{\second}, performed with $\sim\num{2.5e6}$ histories on average (see section \ref{sec:dosimetricevaluatoni}). \begin{table} \caption{Run-time comparison of the MC calculations vs. ANN predictions. Run-times reported in parentheses consider purely the network feed-forward time and do not count the time required to send each input/output between CPU and GPU.} \centering \begingroup \setlength{\tabcolsep}{14pt} \renewcommand{\arraystretch}{1.4} \begin{tabular}{lccc} \hline\hline &MC$^{1}$&ANN$^{2}$&ANN$^{3}$\\ [0.5ex] \hline Average run time (\si{\second})& 1159.5 & 0.023 (0.0025) & 0.006 (0.0015)\\ [1ex] \hline\hline \multicolumn{4}{l}{$^{1}$\scriptsize{28 VCPUs, 64 Gb RAM}}\\ \multicolumn{4}{l}{$^{2}$\scriptsize{Intel Core i7-6700 3.4 GHz - Nvidia GTX 970 - 64 Gb RAM}}\\ \multicolumn{4}{l}{$^{3}$\scriptsize{Intel Xeon W-2135 3.7 GHz - Nvidia Quadro RTX 6000 - 64 Gb RAM}} \end{tabular} \endgroup \label{tab:runtimes} \end{table} \section{Discussion} \label{sec_discussion} In this paper, we have demonstrated the general feasibility of proton dose calculation based on an LSTM neural network. The LSTM network correctly models the proton dose deposition characteristics in the entrance, in the Bragg peak, and in the distal fall-off region, also in heterogeneous geometries. This particularly covers examples where conventional pencil beam algorithms fail, e.\,g.\ predicting a smooth bi-modal Bragg peak behind interfaces.
In comparison to RNN networks, LSTM networks proved particularly suited for this task, especially in heterogeneous geometries. This was also reflected in the training behavior, where the RNNs exhibited more pronounced fluctuations in the MSE loss (compare figure \ref{fig:lossPlotBox}). Using phantom and lung patient cases, we have observed very good agreement for individual pencil beams with an initial energy of \SI{104.25}{\mega\electronvolt} at run-times of \SIrange{6}{23}{\milli\second} per pencil beam. While the $\gamma$-index pass rates for patients \numrange{1}{3} were $>$\SI{99}{\percent}, the $\gamma$-index pass rates for patients \num{4} and \num{5} ranged between \SI{96}{\percent} and \SI{98}{\percent}. This slight decline was attributed to very low \si{RSP} values in the lung, which could not be discriminated from air volumes penetrated by the beam before entering the patient. This phenomenon originated from beam angles in the training set where the beam enters and exits the patient's arms before impinging on the chest (see figures \ref{fig:LSTMpatientShowcaseApp_a} and \ref{fig:LSTMpatientShowcaseApp_p}). Even though these beam orientations would probably be excluded from clinical considerations, we decided to include them as a challenging test scenario for the networks. Based on the approach of studying dose calculation accuracy for an individual energy, we were able to show the generalization of our algorithm to patient cases that were not considered during LSTM training. In order to implement a dose calculation for an entire treatment plan, however, additional networks need to be trained for different energies. Alternatively, and conceptually more appealing, it may be possible to train a network that is able to generalize also over different initial energies. Of course, the run-time benefits of several orders of magnitude over MC simulations, as shown in table \ref{tab:runtimes}, will not manifest in the same way for clinical treatment plans comprised of several thousands of pencil beams. Here, MC simulations can save substantial time because the geometry will only be initialized once for the entire simulation. Furthermore, it will be possible to reduce the number of histories per pencil beam to achieve sufficient statistical certainty over the entire treatment plan for a simple dose recalculation. For the computation of a dose influence matrix, which is needed for dose optimization, however, the MC run-time reductions will be more moderate. On the other hand, it will be possible to further accelerate LSTM-based dose calculation through dedicated deep learning hardware and by leveraging the embarrassingly parallel nature of the problem. And, as previously indicated, the transfer times between CPU and GPU will only be necessary once per patient for LSTM network dose calculations. In our case, these made up \SI{75}{\percent} of the run-time for the faster GPU hardware. Moreover, this study concentrated on the ability to estimate dose in heterogeneous geometries, and no effort was made to improve the model efficiency. In this regard, there exist various model compression techniques, e.g. pruning, quantization, and tensor decomposition methods (achieving low-rank structures in the weight matrices) \cite{Grachev_2019, pmlr-v70-yang17e, ye2017learning}, which can substantially lower the number of parameters in fully connected layers \cite{Yang_2015_ICCV, han2015deep}. The efficiency of the model can be further enhanced through fine-tuning of the model architecture.
This study parameterized the size in the longitudinal direction as a fixed hyper-parameter (parameter $l$, see section \ref{sec:parameterization}). While the range of mono-energetic protons is more or less fixed in a homogeneous geometry, it can vary substantially when they travel through wide cavities such as the lung. This issue forces us to train the model with very long sequences in order to encompass all the potential pencil beam ranges. However, LSTM models can be designed in what is referred to as \emph{sequence-to-sequence learning}, which accepts a variable-length input and produces a variable-length output, used effectively in machine translation problems \cite{NIPS2014_5346}. Utilization of such a model can restrict the number of matrix multiplication operations to those required for the plan, resulting in even faster estimates. In a different approach, one could also incorporate autoencoders \cite{baldi2012autoencoders} as a back-end to the model, compressing the input CT to a latent feature space and leading to a reduction in the number of input parameters. To the best of our knowledge, the proposed dose estimation approach has not been exploited so far, and we intend to explore it in many aspects. We see possible applications in photon dose calculation as well as in dose calculation for heavier ions (carbon, oxygen, helium) in an attempt to estimate biologically effective dose distributions. \section{Conclusion} \label{sec_conclusion} In this paper, we have investigated the role of two different neural network architectures for proton dose calculation, i.\,e., an RNN and an LSTM network. For individual pencil beams on varying heterogeneous phantom geometries, the average $\gamma$-index pass rate ([\SI{0.5}{\percent}, \SI{1}{mm}]) was \SI{97.9}{\percent} for the RNN and \SI{99.3}{\percent} for the LSTM network. The LSTM network was further evaluated on a highly heterogeneous lung case where we observed an average $\gamma$-index pass rate of \SI{99.3}{\percent} ([\SI{0.5}{\percent}, \SI{2}{mm}]). Average LSTM network run-times ranged between \SIrange{6}{23}{\milli\second}. Our results indicate that LSTM networks are well suited for particle therapy dose calculation tasks. \section{Acknowledgements} The authors thank Lucas Burigo for providing the TOPAS Monte Carlo interface for matRad. \bibliographystyle{unsrt}
\section{Introduction} The ability to switch polarization by an external electric field makes ferroelectric crystals an essential technological system \cite{LinesGlass}. One difficulty in developing new materials is the complexity of contributions coupled over a range of length scales \cite{Jia2006}. A prominent example is the interplay of sample shape and atomic displacements, which causes ferroelectricity to diminish with decreasing film thickness, typically seen in perovskites \cite{Fong2004}. The field has evolved with the development of hybrid improper ferroelectrics and multi-ferroelectrics \cite{Benedek2011,Harris2011,Bousquet2008,Garrity2014,Martin2016}. These non-conventional cases exhibit different interactions with other physical quantities, notably the strain field, and are thus anticipated to overcome existing performance bottlenecks \cite{Cheema2020,Spreitzer2021}. Beyond stoichiometric materials, recent efforts have focused on the interplay of defects with ferroelectricity \cite{Ren2004}. Although rudimentary electrostatic theory suggests that the free electrons or holes induced by defects screen the internal polarization and suppress ferroelectricity, there are cases where defects can facilitate ferroelectricity \cite{Li2021,Kolodiazhnyi2010}. One example is the stabilization of the ferroelectric phase of \ce{HfO2} by doping \cite{Muller2011,Mueller2012,Hoffmann2015}. Several studies have also suggested the possibility of inducing polar distortions through doping \cite{Ricca2021,Li2021}. For example, Ricca et al. have studied complexes of oxygen and strontium vacancies in \ce{SrMnO3} and reported that they are capable of creating a switchable dipole \cite{Ricca2021}. Li and Birol have shown that electron doping could stabilize hybrid-improper octahedral rotation in Ruddlesden-Popper phase compounds \cite{Li2021}. Efforts have also been made to enhance dielectric screening using defects. Although the polarization does not persist, colossal permittivity has attracted considerable attention \cite{Hu2013,Hu2015,Dong2015,Berardan2016}. Microscopically, this behavior is realized by complexes of positively and negatively charged defects. Dipoles can reorient when the defects rearrange their configuration upon application of an external electric field \cite{Hu2013}. However, most colossal dielectric materials require three or four types of defects to be located in close proximity; therefore, finding simpler alternatives is desirable. In this Letter, we predict the activation of ferroelectric-like behavior in a non-polar host crystal. This is achieved through an F donor (+ charge) and the corresponding electron polaron ($-$ charge) in \ce{Sr3Ti2O7}. Using Berry phase analysis and density functional theory (DFT), we show that the F-polaron complex could induce a finite switchable dipole. To understand the diversity of Ruddlesden-Popper phases \cite{Mulder2013}, we model the possible accessible behavior through tuning of the Hubbard $U$ parameter in the exchange-correlation functional. Although there are many similarities to conventional ferroelectrics, the dipole in this work is based on the polaronic state. Therefore, we expect it to have qualitatively different behavior, which may surmount existing technological bottlenecks. \textit{Methodology:} The modern theory of polarization requires the use of wave functions to calculate polarization \cite{Resta1992,King-Smith1993}. The electronic contribution is defined relative to the reference state up to a polarization quantum.
Applying a similar formalism to defective systems has two difficulties. Firstly, defects often cause partial occupancy of electronic bands, which makes the polarization ill-defined. Secondly, it is difficult to define a reference state. In the bulk, reference structures are often taken as a midpoint between opposite polarizations, which may correspond to the parent space group of the polar phase. For the reference structures to be meaningful, the polar structures must be connected with an accessible energy barrier; otherwise, the dipole cannot flip. Given these difficulties, many studies assume a nominal point charge in place of atoms to treat polarization. Despite the simplicity of this classical approach, errors arise from ignoring microscopic electronic contributions. In our case, a suitable reference state can be defined. Plane-wave DFT calculations within the projector-augmented wave scheme were performed using {\it VASP} \cite{Blochl1994,Kresse1996_PRB15,Kresse1996_PRB11169}. Using the conventional cell structure for \ce{Sr3Ti2O7}, the cell volume and the atomic coordinates were fully relaxed using the HSE06 functional \cite{Heyd2003}. For the defect calculations, $4 \times 4 \times 1$ supercells were calculated with $1 \times 1 \times 1$ reciprocal space sampling, and a cut-off energy of 450 eV was employed. The barrier for the dipole switching was calculated with the nudged elastic band method, where 9 and 12 intermediate images were used for PBE+$U$ with $U$=4.0 and 5.0 eV, respectively \footnote{ The calculations of the defect formation energies, chemical potential phase diagrams and electrostatic convergence checks were done using the {\it pydefect} package \cite{Kumagai2021}. The nudged elastic band calculations were done with {\it VASP} modified with {\it VTST} \cite{Sheppard2008}. The Hubbard $U$ parameter was applied through the rotational invariant form introduced by Dudarev et al.\cite{Dudarev1998} }. \begin{figure}[tb] \includegraphics[width=0.8\columnwidth]{Fig1_struct.pdf} \caption{\label{fig:struct} (a) Ruddlesden-Popper phase of \ce{Sr3Ti2O7}. The green and red circles represent strontium and oxygen atoms, respectively. The \ce{TiO6} octahedra are shaded in blue. The image was prepared with \emph{VESTA}.\cite{Momma2011} (b) Local Hartree potential integrated along the in-plane direction. (c) Electronic band dispersion. The energy zero corresponds to the valence band maximum. } \end{figure} \textit{Defect energetics:} The structure of \ce{Sr3Ti2O7}, which has the non-polar $I$4/$mmm$ space group, is shown in Fig.~\ref{fig:struct}. This polymorph is the second smallest ($n$=2) amongst the Ruddlesden-Popper phases \ce{Sr_{$n$+1}Ti_$n$O_{3$n$+1}}, and consists of a stacking of two \ce{SrTiO3} perovskite-like layers and a \ce{SrO} rocksalt-like layer along the $\langle001\rangle$ direction. The electrostatic potential of the rocksalt layer is higher than that of the perovskite layer (Fig.~\ref{fig:struct}(b)), and this layer has been reported to act as an insulating layer in quantum confinement \cite{ReyesLillo2016,Li2019}. This is mirrored in the band structure, where the conduction band dispersion is larger along the in-plane direction, $\Gamma$ to X, but smaller along the out-of-plane direction, $\Gamma$ to M (Fig.~\ref{fig:struct}(c)). To obtain defect energies, we calculated the phase diagram with respect to the chemical potentials (see Figs.~\ref{figSI:chempotHSE06} and \ref{figSI:chempot}). The calculated formation energy of F is 2.19 eV (HSE06), whereas it is 2.59 and 2.63 eV for $U$=4.0 eV and $U$=5.0 eV, respectively.
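The bookkeeping behind such neutral formation energies is sketched below; the total energies and chemical potentials are made-up placeholders (the actual values follow from the calculated phase diagram and the \emph{pydefect} workflow).
\begin{verbatim}
# Formation energy of the neutral F_O substitution:
#   E_f = E(defective cell) - E(pristine cell) - mu_F + mu_O
def formation_energy(e_defect, e_pristine, mu_added, mu_removed):
    """Neutral substitutional defect: one atom added, one removed."""
    return e_defect - e_pristine - mu_added + mu_removed

# Placeholder total energies and chemical potentials (eV), chosen
# only so that this example reproduces the HSE06 value of 2.19 eV.
e_f = formation_energy(e_defect=-1000.00, e_pristine=-1006.57,
                       mu_added=-1.50, mu_removed=-5.88)
print("E_f = %.2f eV" % e_f)
\end{verbatim}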
Since all systems considered in this work are neutral, no charge corrections are required. The electrostatic potential far from the defect in the $4 \times 4 \times 1$ cell was well converged (Fig.~\ref{figSI:HSE06_elec}). \begin{figure}[tb] \includegraphics[width=\columnwidth]{Fig2_polaron_HSE.pdf} \caption{\label{fig:polaron} Electron polaron density due to \ce{F_O} doping of \ce{Sr3Ti2O7} calculated by the DFT/HSE06 functional viewed from (a) an in-plane direction, and (b) an out-of-plane direction. Green, blue, red, and gray circles are strontium, titanium, oxygen, and fluorine, respectively. Gray arrows are a guide to the eye for the dipole direction. } \end{figure} \textit{Polaron distribution:} As the pristine system is diamagnetic, the spin density is a useful descriptor of the unpaired electron in the polaronic state. The HSE06 analysis is shown in Figs.~\ref{fig:polaron}(a) and (b) (results for PBE+$U$ are shown in Figs.~\ref{figSI:PBEU40_pol} and \ref{figSI:PBEU50_pol}). The polaron exhibited two-dimensional (2D) localization: it was well localized in the out-of-plane direction (Fig.~\ref{fig:polaron}(a)), but was spread widely along the in-plane direction (Fig.~\ref{fig:polaron}(b)). This behavior is similar to the 2D excitons described by the Bethe-Salpeter equation \cite{ReyesLillo2016}. The anisotropic dielectric screening also explains this behavior (Table~\ref{tabSI:dielectric_tensor}). The 2D polaron had a slightly higher density in the proximity of the F donor, suggesting a finite radius. However, even in our $6 \times 6 \times 1$ supercell, it was not possible to fully encompass the spread of the 2D polaron (Fig.~\ref{figSI:PBEU40_pol}). We can estimate the radius by using the Fr\"ohlich polaron model for isotropic media. Following Schultz, the polaron radius $r_f$ was calculated as \cite{Schultz1959}: \begin{equation} r_f = \frac{3 v^2}{2 v m (v^2 - w^2)}, \end{equation} where $v$ and $w$ are calculated solely using Feynman's theory \cite{Feynman1955,Frost2017}, and $m=0.12$ is the electron effective mass. The coupling strength $\alpha$ is defined as: \begin{equation} \alpha = \frac{1}{2}\left( \frac{1}{\varepsilon_\infty} - \frac{1}{\varepsilon_0} \right) \frac{e^2}{\hbar \omega_{\rm LO}} \left( \frac{2m\omega_{\rm LO}}{\hbar} \right)^{1/2}, \end{equation} where $\varepsilon_\infty=5.25$ is the high-frequency dielectric constant, $\varepsilon_0=30.61$ is the low-frequency dielectric constant, $e$ is the elementary charge, and $\omega_{\rm LO}$ is the longitudinal optical phonon frequency. The resulting $\alpha$ was 3.81 and was used to calculate $v$ and $w$ (details shown in the Supplementary Material). The longitudinal optical phonon modes at the zone center ($\Gamma$-point) were calculated using \emph{phonopy} \cite{Togo2015,Skelton2017}, and they were averaged using the Hellwarth and Biaggio method \cite{Hellwarth1999,Frost2017}. The resulting polaron radius $r_f$ was 52.80 \AA, which was much larger than the size of the $6\times 6 \times 1$ supercell, which measures 23.4 \AA\ along the in-plane direction.
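For orientation, the coupling constant above can be evaluated in a few lines; the sketch below works in SI units, and the effective LO frequency is a placeholder, since the averaged value from the phonon calculation is only given in the Supplementary Material.
\begin{verbatim}
import numpy as np
from scipy import constants as sc

eps_inf, eps_0 = 5.25, 30.61     # dielectric constants from the text
m_eff = 0.12 * sc.m_e            # electron effective mass
omega_lo = 2 * np.pi * 15e12     # rad/s, placeholder LO frequency

# Froehlich coupling constant, written in SI units
alpha = (sc.e**2 / (4 * np.pi * sc.epsilon_0 * sc.hbar)
         * np.sqrt(m_eff / (2 * sc.hbar * omega_lo))
         * (1 / eps_inf - 1 / eps_0))
print("alpha = %.2f" % alpha)
# The paper obtains alpha = 3.81 with its Hellwarth-Biaggio averaged
# LO frequency, which this placeholder frequency does not reproduce.
\end{verbatim}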
We also considered a continuum electrostatic model: \begin{equation} E(\psi) = \int d{\bm r} \left[ \psi^*({\bm r}) \left( -\frac{\hbar \nabla^2}{2m} \right)\psi({\bm r}) - \frac{1}{2}{\bm E}({\bm r})\cdot {\bm D}({\bm r}) \right] \end{equation} Here $E(\psi)$ is the energy, $\psi$ is the polaron wavefunction, ${\bm E}$ is the self-consistent electric field, and ${\bm D}$ is the electric displacement field due to the polaron; the medium was assumed to be isotropic in three dimensions \cite{Sio2019,Devreese2009}. The polaron radius $r_p$ was obtained by minimizing the above energy with respect to the trial wavefunction $\psi({\bm r}) = (\pi r_p^3)^{-1/2} e^{-r/r_p}$. The obtained radius was 56.86 \AA, similar to the result from Schultz's formalism. We then modeled the dipole induced by the complex of \ce{F^+} and the polaron. Since the formation of the polaron breaks inversion symmetry, it is tempting to use the non-relaxed structure as a reference structure, where the excess charge induced by the dopant is symmetrically distributed. However, we found that this structure was metallic, so the polarization is ill-defined. Instead, an artificial structure, in which the bonds around Ti were expanded to localize the polaron symmetrically, was used. The reference polarization vanishes modulo $e{\bf R}/\Omega$ \cite{Vanderbilt1993}. The resulting dipole was 6.15 Debye (spontaneous polarization $P_S$=0.41 $\mu C/cm^2$ in the $4\times4\times1$ supercell). This value is small compared to prototypical ferroelectrics \cite{LinesGlass}, but a direct comparison is not straightforward. A better quantification could be made by considering a point charge model. Placing a $+1$ and a $-1$ charge at the locations of the F and the neighboring Ti in the unrelaxed supercell (1.92 \AA\ apart) gives a classical dipole of 9.52 Debye. The 35\% reduction of the DFT value compared to this classical estimate can be attributed to dielectric screening. \begin{figure}[tb] \includegraphics[width=0.8\columnwidth]{Fig3_hoppping.pdf} \caption{\label{fig:neb} Hopping barriers of electron polarons calculated with (a) PBE+$U$ ($U$=4.0 eV) and (b) PBE+$U$ ($U$=5.0 eV). The horizontal axis was calculated by the projection along the linearly interpolated path between the initial and the final structure in the configuration space. } \end{figure} \textit{Polarization switching:} We have shown that a dipole can be formed in this system, but it must be switchable to mimic a ferroelectric response. Based on the Landau-Devonshire model, a double-well potential should exist. The switching barriers for the 2D polaron (Fig.~\ref{fig:neb}(a)) and the zero-dimensional (0D) polaron (Fig.~\ref{fig:neb}(b)) were 11 meV and 364 meV, respectively. A subtle change in the Hubbard $U$ parameter, from $U$=4.0 to 5.0 eV, increases the hopping barrier by more than an order of magnitude. This originates from the qualitatively different hopping mechanisms. As is apparent from Fig.~\ref{fig:neb}, the 2D polaron gradually moves to the opposite site, whereas the 0D polaron stays on one site and hops suddenly. The HSE06 calculation showed a 2D polaron structure (Fig.~\ref{fig:polaron}), so Fig.~\ref{fig:neb}(a) is the more relevant case; however, as we discuss later, 0D polarons may be accessible by composition engineering. Longer-range hopping through the rocksalt layer into the next nearest perovskite layer was too unfavourable to be realized.
Such a barrier may not be present in the less anisotropic structure of \ce{TiO2}, where many of the colossal permittivity studies have been performed \cite{Hu2013}; this suggests that dielectric loss may be reduced here. A double-well potential may not be realized for all combinations of dopants and host materials, because the binding energy of the polaron must be in an optimal range. If the polaron is bound too strongly, the double well collapses into a single-well structure. On the other hand, if the binding is too small, the polaron will diffuse away and cause the dipole to collapse. The results show that polarons in \ce{Sr3Ti2O7} fall within the optimal range. \begin{figure}[tb] \includegraphics[width=0.9\columnwidth]{Fig4_HSE06_2F.pdf} \caption{\label{fig:double_defect} Polaron distribution for the case of two \ce{F_O} in a $4 \times 4 \times 1$ supercell of \ce{Sr3Ti2O7}, for (a) the ``ferroelectric'' and (b) the ``antiferroelectric'' configuration. Green, blue, red, and gray circles are strontium, titanium, oxygen, and fluorine, respectively. Gray arrows are guides to the eye for the dipole directions. } \end{figure} \textit{Higher doping levels:} It is worthwhile to examine the effect of higher polaron concentrations. Fig.~\ref{fig:double_defect} shows the result for a doubled defect density. The ``antiferroelectric'' configuration (Fig.~\ref{fig:double_defect}(b)) is 0.6 meV more stable than the ``ferroelectric'' dipole configuration (Fig.~\ref{fig:double_defect}(a)). If each perovskite bi-layer is considered as a domain, this energy difference can be converted to an interfacial energy of 0.018 mJ/m$^2$ (0.0012 meV/\AA$^2$). This energy is orders of magnitude smaller than that seen for conventional ferroelectric materials, such as \ce{BaTiO3}, where the interfacial energy is on the order of $\sim$10 mJ/m$^2$ \cite{Marton2010,Grunebohm2012,Grunebohm2020}. The small energy highlights the fundamentally different mechanism of the ferroelectricity in the F-polaron dipole system, which relies largely on the local electronic structure rather than the long-range displacement of ions. The ``ferroelectric'' configuration has a dipole strength of 8.37 Debye, while it vanishes for the ``antiferroelectric'' configuration. Since a single F-polaron pair had 6.15 Debye, the dipole did not double with the doping density. Although the interaction energy between neighboring dipoles was small, this result suggests that electrostatic repulsion between the polarons across the rocksalt layer is present. \begin{figure}[tb] \includegraphics[width=0.9\columnwidth]{Fig5_Uparam.pdf} \caption{\label{fig:U_param} (a) Relation between the Hubbard $U$ parameter and the total dipole and spin polarization at the polaron location. (b) Defect state depth from the conduction band minimum (CBM) (full density of states presented in Fig.~\ref{figSI:full_dos}). The labeled VBM is the value of the valence band maximum at $U$=9.0 eV. The labels ``metallic'', ``2D'', and ``0D'' correspond to the respective polaron solutions, and their boundaries are drawn with vertical lines at 2.0 eV and 4.2 eV. } \end{figure} \textit{Polaron regimes:} An extended family of Ruddlesden-Popper phases exists \cite{Mulder2013}. Instead of performing exhaustive calculations, we model the extremes of behavior by varying the Hubbard $U$ parameter. Such a variation could be realized by changing the B-site cation; effective Hubbard $U$ values for 3d metals range from $\sim$2.5 eV in Sc to $\sim$13.0 eV in Ni \cite{Torrance1991,Imada1998,Aryasetiawan2006}.
We note that surfaces could alter the effective $U$ values through modification of the atomic environments \cite{Wehling2011}. Three distinct segments of the curve can be discerned in Fig.~\ref{fig:U_param}(a). The first is when the defect state falls in the conduction band and acts as an electron donor. Here, delocalization throughout the crystal is seen (Fig.~\ref{figSI:polaron_PBEU}(a)). The system is metallic and dipoles are fully screened. The slight deviation of the polarization from 0 in Fig.~\ref{fig:U_param}(a) is due to limitations of the formalism. The second regime starts from about $U$=2.0 eV and continues up to $U$=4.2 eV. This corresponds to a 2D polaron solution (Fig.~\ref{figSI:polaron_PBEU}(b)) and matches the polaron shape obtained with HSE06. In this regime, a dipole emerges as a result of inversion symmetry breaking by localization of the polaron on a single side of the donor. Just over $U$=4.2 eV, the dipole strength changes discontinuously, and the polaron becomes 0D (Fig.~\ref{figSI:polaron_PBEU}(c)). This polaron distribution is similar to the case reported in the proximity of an oxygen vacancy in \ce{SrTiO3} \cite{Janotti2014}. Near the transition, the energy difference between the 0D and 2D solutions is small, allowing for their coexistence. To quantify the extent of localization, we integrated the spin density difference within a sphere of radius 1.3 \AA\ about each Ti. The site with maximum magnetisation was consistently the Ti atom neighboring F in the positive $c$ direction and coincided with the location of the 0D polaron. The result is overlaid in Fig.~\ref{fig:U_param}(a). The change between the metallic occupation and the 2D polaron was less apparent, which can be explained by the subtle polaron distribution change (Fig.~\ref{figSI:polaron_PBEU}(a) and (b)). The change in the defect single-particle level is shown in Fig.~\ref{fig:U_param}(b). As the excess electron localizes in the Ti 3d states, increasing the $U$ parameter has the effect of deepening the level \cite{Haldane1976}. Again, the metallic to 2D transition is not striking, while the 2D to 0D transition is sharp. This suggests that the former is analogous to a second-order phase transition, whereas the latter is first-order. Above $U$=8.0 eV, the defect state reaches the valence band and becomes a resonant band. In conclusion, we have presented the behavior of a dipole created by the complex of \ce{F^+} and a polaron in F-doped \ce{Sr3Ti2O7}. The dipole behaves similarly to ferroelectrics by exhibiting double-well potential energy surfaces. Calculations with multiple defects showed the possibility that the domain interfacial energies are orders of magnitude smaller than in conventional ferroelectrics. By tuning the Hubbard $U$ parameter, we showed three types of polaron behavior. These results suggest the possibility of this dipole mimicking ferroelectric behavior, yet relying on a distinct microscopic mechanism. Additionally, the Ruddlesden-Popper phase is home to rich phenomena, including improper ferroelectricity and orbital ordering \cite{Moritomo1995,Benedek2011,Martin2016}. The dipole realized in this work is small compared to conventional ferroelectrics and not yet ready for commercial devices, but realizing a finite dipole in a non-polar host crystal has conceptual importance. The influence of strain, domain effects, surface screening, and the choice of dopants remains to be investigated. \begin{acknowledgments} Funding was received from the Yoshida Scholarship Foundation and Japan Student Services Organization.
This work was also supported by the core-to-core collaboration funded by EPSRC (EP/R034540-1) and JSPS (JPJSCCA20180006). Via our membership of the UK's HEC Materials Chemistry Consortium, funded by EPSRC (EP/L000202 and EP/P020194), this work used the ARCHER2 Supercomputing Service. \end{acknowledgments}
\section{Introduction}\label{section_intro} Queueing models are often formulated to study stochastic congestion problems in manufacturing and service systems, computer and communication networks, social economics, etc. Research on queueing models is spreading from performance evaluation to performance optimization for system design and control. Queueing phenomena are caused by limited service resources. How to efficiently allocate the service resource is a fundamental issue in queueing systems, which continually attracts attention from the operational research community, as evidenced by several papers published in \emph{Oper. Res.} in recent volumes \cite{Dieker17,Tsitsiklis17}. In practice, there exists a category of queueing systems with multiple servers that provide homogeneous service at different service rates and costs. These servers can be categorized into different groups. Servers in the same group have the same service rate and cost rate, while those in different groups are heterogeneous in these two rates. Customers wait in line when all the servers are busy. Running a server (keeping a server on) will incur an operating cost and holding a customer will incur a waiting cost. There exists a tradeoff between these costs: keeping more servers on will increase the operating cost but decrease the holding cost. The holding cost is increasingly convex in the queue length and the operating cost is linear in the number of working servers. The system controller can dynamically turn servers on or off according to the backlog (queue length) such that the system long-run average cost can be minimized. We call such queueing models \emph{group-server queues} \cite{Li17}. Note that any queueing system with heterogeneous servers can be considered as a group-server queue if the servers with the same service cost and rate are grouped as a class. The following are some examples that motivate the group-server queueing model. \begin{itemize} \item Multi-tier storage systems: As illustrated in Fig.~\ref{fig_multier}, such a multi-tier storage architecture is widely used in intelligent storage systems, where different storage media are structured as multiple tiers and data are stored and migrated according to their hotness (access frequency) \cite{Wang14,Zhang10}. Solid state drive (SSD), hard disk drive (HDD), and cloud storage are organized in descending order of speed and cost. A group-server queue fits such a system and can be used to study the system performance, such as the response time of I/O requests. It is an interesting topic to find the optimal architecture and scheduling of I/O requests so that the desired system throughput is achieved at a minimum cost. \item Clustered computing systems: As illustrated in Fig.~\ref{fig_clustercomp}, the computing facilities of a server farm are organized in clusters. Computers in different clusters have different performance and power consumption. For example, a high-performance computer (HPC) has a greater processing rate and higher power consumption. Computing jobs can be scheduled and migrated among computers. Energy efficiency is one of the key metrics for evaluating the performance of data centers. A power management policy aims to dynamically schedule servers' working states (e.g., high/low power, or sleep) according to workloads such that power consumption and processing rate can be traded off in an optimal way \cite{Gandhi10,Kant09}.
\item Human-staffed service systems: One example is a call center that might have several groups of operators (customer representatives) in different locations (or different countries). Depending on the demand level, the number of operator groups attending calls may be dynamically adjusted. The service efficiencies and operating costs of these groups can be different, although operators within each group are homogeneous in these two aspects. Another example is the operation of a food delivery company, such as GrubHub in the US or Ele.me in China, which has several restaurant partners with good reputations. During high-demand periods, the limited number of its own delivery drivers (servers in group 1) may not be able to deliver food orders to customers within a promised short time (due to long queues). Thus, the company can share part of its delivery service demand with another, less reputable food delivery company that also owns a set of drivers (servers in group 2). \end{itemize} \begin{figure}[htbp] \centering \subfigure[A multi-tier storage system.] {\includegraphics[width=0.45\columnwidth]{Fig_multier.eps}\label{fig_multier}} \centering \subfigure[A clustered computing system.] {\includegraphics[width=0.45\columnwidth]{Fig_clustercomp.eps}\label{fig_clustercomp}} \caption{Motivations of group-server queues.}\label{fig_motivation} \end{figure} Similar resource allocation problems exist in other systems such as clustered wireless sensor networks \cite{Kumar09}, tiered web site systems \cite{Urgaonkar05}, and tiered tolling systems \cite{Hua16}. The common features of these problems can be well captured by the group-server queue. How to efficiently schedule the server groups to optimize the targeted performance metrics is an important issue for both practitioners and queueing researchers. To address this issue, we focus on finding the optimal on/off server scheduling policy in a group-server queue to minimize the long-run average cost. \subsection{Related Research} Service resource allocation problems in queueing systems are widely studied in the literature. One stream of research focuses on \emph{service rate control}, which aims to find the optimal service rates such that the system average cost (holding cost plus operating cost) is minimized. This type of problem is mainly motivated by improving the operational efficiency of computer and telecommunication systems. For a server with a fixed service rate, turning it on or off can be considered as service rate control between full and zero service rate. The optimality of threshold-type policies, such as the $N$-policy, $D$-policy, and $T$-policy, has been studied in single-server queueing systems with a fixed service rate \cite{Balachandran75,Heyman68,Heyman77}. Further studies extend single-server systems to multi-server networks, such as cyclic queues or tandem queues \cite{Rosberg82,Stidham93,Weber87}, where the optimality of bang-bang control or threshold-type policies is studied. Note that bang-bang control means that, even when the service rate can be chosen in a finite range, the optimal rate is always either zero or the maximum rate, depending on the system size (threshold). For complicated queueing networks, such as Jackson networks, it has been proved that bang-bang control is optimal when the cost function is linear in the service rates, using linear programming techniques by Yao and Schechner \cite{Yao89} and a derivative approach by Ma and Cao \cite{Ma94}, respectively.
Recent works by Xia et al. further extend such optimality structure from a linear cost function to a concave one \cite{Xia13,Xia15}. Another line of research studies the service rate control problem from a game-theoretic viewpoint \cite{Hassin03,Xia14}. We cannot enumerate all service rate control studies due to the space limit of this paper. A common feature of the past studies is to characterize the structure of the optimal rate control policy in a variety of queueing systems. The tradeoff between holding cost and operating cost is also a major issue in some service systems with human servers. Thus, there exist many studies on \emph{server scheduling problems} (also called \emph{staffing problems}), which aim to dynamically adjust the number of servers to minimize the average holding and operating costs. An early work is by Yadin and Naor, who study the dynamic on/off scheduling policy of a server in an $M/G/1$ queue with a non-zero setup time \cite{Yadin63}. Many other related works can be found in this area and we just name a few \cite{Bell80,Fu00,Sobel69,Yoo96}. To control the customer waiting time and improve the server utilization, Zhang studies a congestion-based staffing (CBS) policy for a multi-server service system motivated by the US-Canada border-crossing stations \cite{Zhang09}. Servers in these studies are assumed to be homogeneous. The CBS policy has a two-threshold structure and can be considered as a generalization of the multi-server queue with server vacations, which is an important class of queueing models \cite{Tian06}. The \emph{job assignment problem} with heterogeneous servers is closely related to the on/off server scheduling problem treated in this paper. It has one queue and multiple servers, and focuses on the optimal scheduling of homogeneous jobs at heterogeneous servers with different service rates and/or operating costs. In one class of problems, the objective is to minimize the average waiting time of jobs under the assumption that only the holding cost is relevant and the operating cost is sunk (i.e., not considered). In addition, when a job is assigned to a server, it cannot be reassigned to another faster (more desirable) server that becomes available later. Such a problem is also called a \emph{slow-server problem} and can be used to study the job routing policy in computer systems. One pioneering work is by Lin and Kumar, who study the optimal control of an M/M/2 queue with two heterogeneous servers. They prove that the faster server should always be utilized, while the slower one should be utilized only when the queue length exceeds a computable threshold value \cite{Lin84, Walrand84}. For the case with more than two servers, it is shown that the fastest available server (FAS) rule is optimal \cite{Millhiser16}. However, for servers other than the FAS, it is difficult to directly extend the single-threshold optimality (two-server system) to multi-threshold optimality (systems with more than two servers), although such an extension looks intuitive. This is because the system state becomes higher dimensional, which makes the dynamic-programming-based analysis very complicated. Weber proposes a conjecture about the threshold optimality for multiple heterogeneous servers and shows that the threshold may depend on the state of slower servers \cite{Weber93}. Rykov proves this conjecture using dynamic programming \cite{Rykov01} and Luh and Viniotis prove it using linear programming \cite{Luh02}, but their proofs are opaque or incomplete \cite{deVericourt06}.
Armony and Ward further study a fair dynamic routing problem, in which the customer average waiting time is minimized under the constraint of a fair idleness proportion among servers \cite{Armony10,Ward13}. Constrained Markov decision processes and linear programming models are utilized to show that the optimal routing policy is asymptotically of threshold type in a limit regime with many servers \cite{Armony10}. There are numerous studies on the slow-server problem from various perspectives, which are summarized in \cite{Akgun14,Hassin15,Xu93}. When job reassignment (also called job migration) is allowed, the slow-server problem becomes trivial since it is optimal to always assign jobs to the fastest available servers. However, when the server operating cost (such as power consumption) is considered, the job assignment problem is not trivial even with job migration allowed. In fact, both holding and operating costs should be considered in practical systems, such as energy-efficient data centers or cloud computing facilities \cite{Fu16}. Akgun et al. give a comprehensive study of this problem \cite{Akgun14}. They utilize the duality between the individually optimal and socially optimal policies \cite{Hassin85,Xu93} to prove the threshold optimality of heterogeneous servers for a clearing system (no arrivals) with or without reassignment. They also prove the threshold optimality for the less preferred server in a two-server system with customer arrivals. It is shown that the preference of servers depends not only on their service rates, but also on the usage costs (operating costs), holding costs, arrival rate, and the system state. Under a cost structure with both holding and operating costs, the job assignment problem for heterogeneous servers with customer arrivals and job migration can be viewed as equivalent to our on/off server scheduling problem in a group-server queue. In this paper, we characterize the structure of the optimal policy, which can significantly simplify the computation of the parameters of the optimal on/off server schedule. In general, under the optimal policy, a server group will be turned on only if the ratio of its operating cost rate $c$ to its service processing rate $\mu$ is smaller than a computable quantity $G(n)$, called the perturbation realization factor. The perturbation realization factor depends on the number of customers in the system (the system state), the arrival rate, and the cost function. We call this type of policy an \emph{index policy}, and it has the form of a state-dependent multi-threshold policy. The term state-dependent means that the preference rankings of groups (the order of server groups to be turned on) will change from one state to another. However, under a reasonable condition of scale economies for server groups, the optimal index policy reduces to a state-independent multi-threshold policy, called the \emph{$c/\mu$-rule}. This simple rule is easy to implement in practice and complements the well-known $c\mu$-rule for polling queues. In a polling queueing system, a single server serves multiple classes of customers which form multiple queues, and a polling policy prescribes which queue the single server should serve. In a group-server queueing system, heterogeneous servers grouped into multiple classes serve homogeneous customers (a single queue), and an on/off server schedule prescribes which server group is turned on to serve the single queue.
Note that the ``$c$" in the $c \mu$-rule is the customer waiting cost rate, while the ``$c$" in the $c/\mu$-rule is the server operating cost rate. Due to the difference in the cost rate $c$, it is intuitive that the customer class with the highest $c \mu$ value should be served first and the server group with the lowest $c /\mu$ value should be utilized first. Although these results are kind of intuitive, the $c \mu$-rule was studied long time ago but the $c/\mu$-rule was not well established until this paper. This may be because of the more complexity caused by the heterogeneous server system. Note that although the $c/\mu$ rankings order the server's operating cost per unit of service rate, different service rates impact the customer holding cost differently. In contrast, in a polling queueing system, the only cost difference between polling two different queues is the $c \mu$, the holding cost moving out of the system per unit of time. Thus, a static $c \mu$-rule can be established as an optimal policy to minimize the system average cost. The early work of the $c\mu$-rule can be traced back to Smith's paper in 1956 under a deterministic and static setting \cite{Smith56}. Under the $c\mu$-rule, the queue with larger $c\mu$ value should be served with higher priority. This rule is very simple and easy to implement in practice. It stimulates numerous extensions in the literature \cite{Hirayama89,Kebarighotbi11,Kilmov74,Nain94,VanMieghem03}. Many works aim to study similar properties to the $c\mu$-rule under various queueing systems and assumptions. For example, Baras et al. study the optimality of the $c\mu$-rule from 2 to $K$ queues with linear costs and geometric service requirement \cite{Baras85,Baras85b}, and Buyukkoc et al. revisit the proof of the $c\mu$-rule in a simple way \cite{Buyukkoc85}. Van Mieghem studies the asymptotic optimality of a generalized version of the $c\mu$-rule with convex holding costs in heavy traffic settings \cite{VanMieghem95}. This work is then extended by Mandelbaum and Stolyar to a network topology \cite{Mandelbaum04}. Atar et al. further study another generalized version called the $c\mu/\theta$ rule in an abandonment queue where $\theta$ is the abandonment rate of impatient customers \cite{Atar10}. Recently, Saghafian and Veatch study the $c\mu$-rule in a two-tiered queue \cite{Saghafian16}. In contrast to the extensive studies on the $c\mu$-rule in the literature, there are few studies on the $c/\mu$ rule for the resource allocation in a single queue with heterogeneous servers. \subsection{Our Contributions} One of the significant differences between our work and relevant studies in the literature is that the servers in our model are heterogeneous and categorized into multiple groups, which makes the model more general but more complex. Most heterogenous server models in the literature may be viewed as a special case of our model, in which each group has only one server. Thus, our model is more applicable to large scale service systems such as data centers. Moreover, we assume that there is an unlimited waiting room for customers, which means that the dynamic policy is over an infinite state space. To find the optimal policy over the infinite state space is difficult. Thus, we aim to characterize the structure of the optimal policy. While the holding cost in job assignment problems is usually assumed to be linear, we assume the holding cost can be any increasing convex function (a generalization of linear function). 
We formulate this service resource allocation problem in a group-server queue as a Markov decision process (MDP). Unlike the traditional MDP approach, we utilize the sensitivity-based optimization theory to characterize the structure of the optimal policy and develop efficient algorithms to compute the optimal policy and thresholds. The main contributions of this paper can be summarized in the following aspects. \begin{itemize} \item Index policy: The server preference (priority of being turned on) is determined by an index $c-\mu G(n)$, where $G(n)$ is the perturbation realization factor, which is computable and state-dependent. Servers with a more negative value of $c-\mu G(n)$ have higher priority to be turned on; servers with a positive $c-\mu G(n)$ should be kept off. The value of $G(n)$ affects the preference order of servers and depends on $n$, the arrival rate, and the cost functions. We prove the optimality of this index policy and show that $G(n)$ plays a fundamental role in determining the optimal index policy. \item The $c/\mu$-rule: Under the condition of scale economies for server groups, the preference of servers can be determined by their $c/\mu$ values, instead of $c-\mu G(n)$. Thus, the preference order of servers is independent of $n$, the arrival rate, and the cost functions. The server on/off scheduling policy becomes the $c/\mu$-rule. Under this rule, the server with a smaller $c/\mu$ value should be turned on with higher priority. Searching for the optimal policy over an infinite-dimensional mapping space is reduced to searching for the optimal multi-threshold values. A multi-threshold policy is easier to implement in practice and more robust. \item Optimality structures: With the performance difference formula, we derive a necessary and sufficient condition for the optimal policy. The optimality of quasi bang-bang control is also established. The monotonicity and convexity properties of performance potentials and perturbation realization factors, which are fundamental quantities during optimization, are established. With these properties, the optimality of the index policy and the $c/\mu$-rule is proved. The structure of the optimal policy is well characterized and the optimization complexity is reduced significantly. \end{itemize} Besides the theoretical contributions in the above aspects, using the performance difference formula, we decompose the original problem into an infinite number of integer linear programs. Based on the structure of the optimal policy, we develop iterative algorithms to find the optimal index policy or the optimal multi-threshold policy. Here, the $c/\mu$-rule can be utilized to simplify the search algorithms significantly. These algorithms are similar to the policy iteration in the traditional MDP theory and their performance is demonstrated by numerical examples. \subsection{Paper Organization} The rest of the paper is organized as follows. In Section~\ref{section_model}, a group-server queue model is developed to capture the heterogeneity of servers. An optimization problem is formulated to determine the cost-minimizing on/off server scheduling policy. The analysis is presented in Section~\ref{section_result}, where the structure of the optimal index policy is characterized based on the perturbation realization factor of server groups. In Section~\ref{section_rule}, we derive the $c/\mu$-rule and study the optimality of the multi-threshold policy under the condition of scale economies.
In Section~\ref{section_numerical}, we conduct numerical experiments to gain managerial insights and to show the efficiency of our approach. Finally, the paper is concluded in Section~\ref{section_conclusion} with a summary. \section{Optimization Problem in Group-Server Queues}\label{section_model} In this section, we describe the service resource allocation problem in a group-server queue model. This model can be used to represent a waiting line with heterogeneous servers classified into a finite number of groups; such systems are also called parallel-server systems in previous studies \cite{Armony10}. Servers are homogeneous within a group and heterogeneous across groups. A group-server queue is shown in Fig.~\ref{fig_GSqueue} and described as follows. \begin{figure}[htbp] \centering \includegraphics[width=0.55\columnwidth]{Fig_GSqueue.eps} \caption{An example of group-server queue model, where servers are in parallel.}\label{fig_GSqueue} \end{figure} Customers arrive at a service station with multiple groups of servers according to a Poisson process with rate $\lambda$. The waiting room is infinite and the service discipline is first-come-first-served (FCFS). The service times of each server are assumed to be independent and exponentially distributed. The heterogeneous servers are classified into $K$ groups. Each group has $M_k$ servers, which can be turned on or off, $k=1,2,\cdots,K$. When a server in group $k$ is turned on, it works at service rate $\mu_k$ and incurs an operating cost $c_k$ per unit of time. Servers in the same group are homogeneous, i.e., they have the same service rate $\mu_k$ and cost rate $c_k$, $k=1,2,\cdots,K$. Servers in different groups are heterogeneous in $\mu_k$ and $c_k$. We assume that servers in different groups offer the same service, i.e., customers are homogeneous. In general, the services offered by different groups may be different, and the groups may be connected in cascade, or even interconnected. Such a setting can be called a \emph{group-server queueing network}. When a working server has to be turned off, the customer being served at that server is interrupted and transferred to the waiting room or to another idle server if available. Due to the memoryless property of the service time, such an interruption has no effect on the customer's remaining service time. The system state $n$ is defined as the number of customers in the system (including those in service). The on/off status of servers need not be included in the system state because free customer migrations among servers are allowed in the model. Thus, the state space is the nonnegative integer set $\mathbb N$, which is infinite. At each state $n \in \mathbb N$, we determine the number of working servers in each group, which can be represented by a $K$-dimensional row vector as \begin{equation} \bm m:=(m_1,m_2,\cdots,m_K), \end{equation} where $m_k$ is the number of working servers in group $k$, i.e., $m_k \in \mathbb Z_{[0,M_k]}$, $k=1,2,\cdots,K$. We call $\bm m$ the scheduling action at state $n$, following the terminology of MDPs. Thus, the action space is defined as \begin{equation} \mathbb M := \mathbb Z_{[0,M_1]} \times \mathbb Z_{[0,M_2]} \times \cdots \times \mathbb Z_{[0,M_K]}, \end{equation} where $\times$ is the Cartesian product. We assume that the system has reached a steady state under a condition to be specified later in Proposition~\ref{pro2}.
Therefore, a stationary scheduling policy $d$ is defined as a mapping from the infinite state space $\mathbb N$ to the finite action space $\mathbb M$, i.e., $d: \mathbb N \rightarrow \mathbb M$. If $d$ is determined, we adopt action $d(n)$ at every state $n$, where $d(n,k)$ denotes the number of working servers of group $k$, $n \in \mathbb N$ and $k=1,2,\cdots,K$. All possible $d$'s form the policy space $\mathcal D$, which is an infinite-dimensional search space. When the system state is $n$ and the scheduling action $\bm m= d(n)$ is adopted, a \emph{holding cost} $h(n)$ and an \emph{operating cost} $o(\bm m)$ will be incurred per unit of time. In the literature, it is commonly assumed that the operating cost is increasing with respect to (w.r.t.) the number of working servers. In this paper, we define the linear operating cost function $o(\bm m)$ as follows. \begin{equation} o(\bm m) := \sum_{k=1}^{K}m_k c_k = \bm m \bm c, \end{equation} where $\bm c:=(c_1,c_2,\cdots,c_K)^T$ is a $K$-dimensional column vector and $c_k$ represents the operating cost rate per server in group $k$. Therefore, the total cost rate function of the whole system per time unit is defined as \begin{equation}\label{eq_f} f(n,\bm m) := h(n) + \bm m \bm c. \end{equation} We make the following assumption regarding the customer's holding cost (waiting cost) and the server's setup cost (changeover cost). \begin{assumption}\label{assumption1} $h(n)$ is an increasing convex function w.r.t. $n$ and $h(n)\rightarrow \infty$ as $n \rightarrow \infty$. The server's setup cost is negligible. \end{assumption} Such a holding cost assumption is widely used in the literature \cite{VanMieghem95} and represents the situation where the delay cost grows rapidly as the system becomes more congested. For a non-empty state $n$, if a scheduling action $\bm m$ is adopted, some working servers may be turned off and the services of customers at those servers will be interrupted. These customers will be returned to the waiting room or reassigned to other currently available working servers. Such a rule is called a \emph{non-resume transfer discipline}. Since the setup cost for turning on a server (including transferring a customer to an available server) is zero, we do not have to keep track of the number of on (or off) servers for any state. Otherwise, each server's status would have to be included in the definition of the system state, changing the state space from a one-dimensional to a multi-dimensional one, which is much more complex. Denote by $n_t$ the number of customers in the system at time $t \geq 0$. The long-run average cost of the group-server queue under policy $d$ can be written as \begin{equation}\label{eq_eta} \eta^d := \lim\limits_{T \rightarrow \infty}\mathbb E \left\{ \frac{1}{T} \int_{0}^{T} f(n_t,d(n_t))\dif t \right\}. \end{equation} The objective is to find the optimal policy $d^*$ such that the associated long-run average cost is minimized. That is, \begin{equation}\label{eq_prob} d^* = \argmin \limits_{d \in \mathcal D} \{ \eta^d \}. \end{equation} \noindent\textbf{Remark 1.} It is worth noting that the scheduling policy $d$ is a mapping from an infinite state space to a $K$-dimensional finite action space. The state space is infinite and the action space grows exponentially with $K$. Thus, the policy space $\mathcal D$ to be searched is of infinite dimension. Characterizing the optimal structure of such a mapping is challenging but necessary in solving this optimization problem.
A major contribution of this paper is to accomplish this challenging task and derive a simple $c/\mu$-rule as the optimal policy under a certain condition. \section{Optimal Policy Structure}\label{section_result} The optimization problem (\ref{eq_prob}) can be modeled as a continuous-time MDP with the long-run average cost criterion. The traditional theory of MDPs is based on the well-known Bellman optimality equation. However, in a multi-server queueing model with an infinite buffer, it may be difficult to characterize the structure of the optimal policy using the traditional approach. Recently, Cao proposed the sensitivity-based optimization (SBO) theory \cite{Cao07}. This relatively new theory provides a new perspective on optimizing the performance of Markov systems. The key idea of the SBO theory is to utilize the performance sensitivity information, such as the performance difference or the performance derivative, to conduct the optimization of stochastic systems. It may even treat stochastic optimization problems for which dynamic programming fails to offer a solution \cite{Cao07,Xia13,Xia15}. We use the SBO theory to characterize the structure of the optimal policy of the optimization problem (\ref{eq_prob}). First, we study the structure of the action space. Owing to the zero setup cost, we should turn off any idle servers, and we obtain the following result immediately. \begin{proposition}\label{pro1} The optimal action $\bm m$ at state $n$ satisfies $\bm m \bm 1\leq n$, where $\bm 1$ is a column vector of proper dimension with all elements equal to 1. \end{proposition} Note that if the server setup cost is not zero, this proposition may not hold. From Proposition~\ref{pro1}, for every state $n$, we can define the \emph{efficient action space} $\mathbb M_n$ as \begin{equation} \mathbb M_n := \{ \mbox{all } \bm m \in \mathbb M : \bm m \bm 1 \leq n \}. \end{equation} A policy $d$ is said to be \emph{efficient} if $d(n) \in\mathbb M_n$ for every $n \in \mathbb N$. Accordingly, the \emph{efficient policy space} $\mathcal D_e$ is defined as \begin{equation} \mathcal D_e := \{\mbox{all } d : d(n)\in \mathbb M_n, \forall n \in \mathbb N\}. \end{equation} Therefore, in the rest of the paper, we limit our optimal policy search to $\mathcal D_e$. For any efficient action $\bm m \in \mathbb M_n$, the total service rate of the queueing system is $\bm m \bm \mu$, where $\bm \mu$ is a $K$-dimensional column vector of service rates defined as \begin{equation} \bm \mu := (\mu_1, \mu_2, \cdots, \mu_K)^T. \end{equation} For the continuous-time MDP formulated in (\ref{eq_prob}), we define the \emph{performance potential} as follows \cite{Cao07}. \begin{equation}\label{eq_g} g(n) := \mathbb E\left\{ \int_{0}^{\infty} [f(n_t,d(n_t)) - \eta] \dif t \Big | n_0 = n \right\}, \quad n \in \mathbb N, \end{equation} where $\eta$ is defined in (\ref{eq_eta}) and we omit the superscript `$d$' for simplicity. The definition (\ref{eq_g}) indicates that $g(n)$ quantifies the long-run accumulated effect of the initial state $n$ on the average performance $\eta$. In the traditional MDP theory, $g(n)$ can also be understood as the \emph{relative value function} or \emph{bias} \cite{Puterman94}. By using the strong Markov property, we can decompose the right-hand side of (\ref{eq_g}) into two parts as follows.
\begin{eqnarray}\label{eq10} g(n) &=& \mathbb E\{\tau\} [f(n,d(n)) - \eta] + \mathbb E\left\{ \int_{\tau}^{\infty} [f(n_t,d(n_t)) - \eta] \dif t \Big | n_0 = n \right\} \nonumber\\ &=& \frac{1}{\lambda+d(n)\bm \mu} [f(n,d(n)) - \eta] + \frac{\lambda}{\lambda+d(n)\bm \mu}\mathbb E\left\{ \int_{\tau}^{\infty} [f(n_t,d(n_t)) - \eta] \dif t \Big | n_{\tau} = n+1 \right\} \nonumber\\ && + \frac{d(n)\bm \mu}{\lambda+d(n)\bm \mu}\mathbb E\left\{ \int_{\tau}^{\infty} [f(n_t,d(n_t)) - \eta] \dif t \Big | n_{\tau} = n-1 \right\}, \end{eqnarray} where $\tau$ is the sojourn time at the current state $n$ and $\mathbb E\{\tau\} =\frac{1}{\lambda+d(n)\bm \mu}$. Combining (\ref{eq_g}) and (\ref{eq10}), we have the recursion \begin{equation}\label{eq11} \begin{array}{ll} \left[\lambda+d(n)\bm \mu \right] g(n) = f(n,d(n)) - \eta + \lambda g(n+1) + d(n)\bm \mu g(n-1), & \quad n \geq 1; \\ \lambda g(n) = f(n,d(n)) - \eta + \lambda g(n+1), & \quad n = 0. \\ \end{array} \end{equation} We denote by $\bm B$ the infinitesimal generator of the Markov process under an efficient policy $d \in \mathcal D_e$. Due to the birth-death structure of the process, the elements of $\bm B$ are: for a state $n \geq 1$, $B(n,n) = -\lambda - d(n)\bm \mu$, $B(n,n+1) = \lambda$, $B(n,n-1) = d(n)\bm \mu$, and all other entries of row $n$ are 0. Therefore, $\bm B$ can be written in the following form \begin{equation}\label{eq_B} \bm B = \left[ \begin{array}{cccccc} -\lambda & \lambda & 0 & 0 & 0 & \cdots \\ d(1)\bm \mu & -\lambda-d(1)\bm \mu & \lambda & 0 & 0 & \cdots\\ 0 & d(2)\bm \mu & -\lambda-d(2)\bm \mu & \lambda & 0 & \cdots\\ 0 & 0 & d(3)\bm \mu & -\lambda-d(3)\bm \mu & \lambda & \cdots\\ \vdots & \vdots & \vdots & \ddots & \ddots & \ddots \\ \end{array} \right]. \end{equation} Hence, we can rewrite (\ref{eq11}) as follows. \begin{equation}\label{eq12} \begin{array}{ll} -B(n,n) g(n) = f(n,d(n)) - \eta + B(n,n+1) g(n+1) + B(n,n-1) g(n-1), & n \geq 1; \\ -B(n,n) g(n) = f(n,d(n)) - \eta + B(n,n+1) g(n+1), & n = 0. \\ \end{array} \end{equation} We further denote by $\bm g$ and $\bm f$ the column vectors whose elements are the $g(n)$'s and $f(n,d(n))$'s, respectively. We can rewrite (\ref{eq12}) in matrix form as below. \begin{equation}\label{eq_poisson} \bm f - \eta \bm 1 + \bm B \bm g = \bm 0. \end{equation} The above equation is also called \emph{the Poisson equation} for continuous-time MDPs with the long-run average criterion \cite{Cao07}. Since $\bm g$ is a relative quantity (performance potential or relative value function), we can set $g(0)=\zeta$ for an arbitrary real number $\zeta$ and recursively solve for $g(n)$ based on (\ref{eq12}). Using matrix operations, we can also evaluate $\bm g$ by solving the infinite-dimensional Poisson equation (\ref{eq_poisson}) through numerical computation techniques, such as RG-factorizations \cite{Li04}. For the stability of the queueing system, we impose a \emph{sufficient condition} as follows. \begin{proposition}\label{pro2} If there exists a constant $\tilde{n}$ such that $d(n) \bm \mu > \lambda$ for all $n \geq \tilde{n}$, then the group-server queue under policy $d$ is stable and its steady-state distribution $\bm \pi$ exists. \end{proposition} Proposition~\ref{pro2} ensures that $\bm \pi$ exists under a proper selection of policy $d$. Thus, we have \begin{equation} \begin{array}{l} \bm \pi \bm B = \bm 0, \\ \bm \pi \bm 1 = 1. \\ \end{array} \end{equation} The long-run average cost of the system can be written as \begin{equation} \eta = \bm \pi \bm f.
\end{equation} Suppose the scheduling policy is changed from $d$ to $d'$, where $d, d' \in \mathcal D_e$. Accordingly, all the associated quantities under the new policy $d'$ will be denoted by $\bm B'$, $\bm f'$, $\bm \pi'$, $\eta'$, etc. Obviously, we have $\bm \pi' \bm B' = \bm 0$, $\bm \pi' \bm 1 = 1$, and $\eta' = \bm \pi' \bm f'$. Left-multiplying $\bm \pi'$ on both sides of (\ref{eq_poisson}), we have \begin{equation} \bm \pi' \bm f - \eta \bm \pi' \bm 1 + \bm \pi' \bm B \bm g = 0. \end{equation} Using $\bm \pi' \bm B' = \bm 0$, $\bm \pi' \bm 1 = 1$, and $\eta' = \bm \pi' \bm f'$, we can rewrite the above equation as \begin{equation} \eta' - \bm \pi' \bm f' + \bm \pi' \bm f - \eta + \bm \pi' \bm B \bm g - \bm \pi' \bm B' \bm g = 0, \end{equation} which gives the \emph{performance difference formula} for the continuous-time MDP as follows \cite{Cao07}. \begin{center} \begin{boxedminipage}{1\columnwidth} \begin{equation}\label{eq_diff} \eta' - \eta = \bm \pi' [(\bm B' - \bm B)\bm g + (\bm f' - \bm f)]. \end{equation} \vspace{-13pt} \end{boxedminipage} \end{center} Equation (\ref{eq_diff}) provides the sensitivity information about the system performance, which can be used to carry out the optimization. It clearly quantifies the performance change due to a policy change. Although the exact value of $\bm \pi'$ may not be known for every new policy $d'$, all of its entries are nonnegative, and they are strictly positive for positive recurrent states. Therefore, if we choose a proper new policy (with associated $\bm B'$ and $\bm f'$) such that the elements of the column vector represented by the square bracket in (\ref{eq_diff}) are always nonpositive, then we have $\eta'-\eta \leq 0$ and the long-run average cost of the system will not increase. If there is at least one negative element in the square bracket for a positive recurrent state, then we have $\eta'-\eta < 0$ and the system average cost will be strictly reduced. This is the main idea for policy improvement based on the performance difference formula (\ref{eq_diff}). Using (\ref{eq_diff}), we examine the sensitivity of the long-run average cost of the group-server queue with respect to the scheduling policy. Suppose that we choose a new policy $d'$ which is the same as the current policy $d$ except for the action at a particular state $n$. For this state $n$, policy $d$ selects action $\bm m$ and policy $d'$ selects action $\bm m'$, where $\bm m,\bm m' \in \mathbb M_n$. Substituting (\ref{eq_f}) and (\ref{eq_B}) into (\ref{eq_diff}), we have \begin{eqnarray}\label{eq_diff2} \eta' - \eta &=& \bm \pi' [(\bm B' - \bm B)\bm g + (\bm f' - \bm f)] \nonumber\\ &=& \pi'(n)[(\bm B'(n,:) - \bm B(n,:))\bm g + (f'(n) - f(n))]\nonumber\\ &=& \pi'(n)\left[\sum_{k=1}^{K}(m'_k-m_k)\mu_k (g(n-1) - g(n)) + (\bm m' \bm c - \bm m \bm c)\right] \nonumber\\ &=& \pi'(n)\sum_{k=1}^{K}(m'_k-m_k)\left[c_k - \mu_k (g(n) - g(n-1))\right], \end{eqnarray} where $g(n)$ is the performance potential of the system under the current policy $d$. The value of $g(n)$ can be numerically computed based on (\ref{eq_poisson}) or estimated online based on (\ref{eq_g}). Details can be found in Chapter 3 of \cite{Cao07}. For the purpose of analysis, we define a new quantity $G(n)$ as below. \begin{equation}\label{eq_G} G(n) := g(n) - g(n-1), \quad n=1,2,\cdots. \end{equation} Note that $G(n)$ quantifies the performance potential difference between the neighboring states $n$ and $n-1$.
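To make these quantities concrete, the following minimal Python sketch evaluates $\bm g$, $\eta$, and $G(n)$ for a single-group instance by solving the Poisson equation (\ref{eq_poisson}) on a truncated state space with the normalization $g(0)=0$. All numerical values (rates, costs, and the truncation level) are illustrative assumptions rather than values prescribed by the model.
\begin{verbatim}
import numpy as np

# Illustrative single-group instance: M servers of rate mu, arrival rate lam,
# fixed policy d(n) = min(n, M) working servers (all values are assumptions).
lam, mu, M, cost = 10.0, 4.0, 4, 5.0
h = lambda n: n                  # holding cost rate h(n)
N = 300                          # truncation level for the infinite state space

d = np.minimum(np.arange(N + 1), M)                 # servers on at state n
f = np.array([h(n) + cost * d[n] for n in range(N + 1)], dtype=float)

# Birth-death generator B of (eq_B), truncated at N (no arrival at state N).
B = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    up, down = (lam if n < N else 0.0), d[n] * mu
    B[n, n] = -(up + down)
    if n < N: B[n, n + 1] = up
    if n > 0: B[n, n - 1] = down

# Poisson equation f - eta*1 + B g = 0 with g(0) = 0: unknowns are
# x = (g(1), ..., g(N), eta), so B[:, 1:] g[1:] - eta * 1 = -f.
A = np.hstack([B[:, 1:], -np.ones((N + 1, 1))])
x = np.linalg.solve(A, -f)
g = np.concatenate([[0.0], x[:-1]])   # performance potential g(n)
eta = x[-1]                           # long-run average cost
G = np.diff(g)                        # PRF: G[i] = G(i+1) = g(i+1) - g(i)
print(f"eta = {eta:.4f}, G(1..5) = {np.round(G[:5], 4)}")
\end{verbatim}
Solving this one augmented linear system returns $\eta$ and $\bm g$ simultaneously; for the original infinite-state model, techniques such as the RG-factorizations mentioned above can be used instead.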
According to the theory of perturbation analysis (PA) \cite{Cao94,Ho91}, $G(n)$ is called the \emph{perturbation realization factor} (PRF), which measures the effect on the average performance when the initial state is perturbed from $n-1$ to $n$. For our specific problem (\ref{eq_prob}), in a certain sense, $G(n)$ can be regarded as the benefit, in terms of reducing the long-run average holding cost, of operating one more server. In the following analysis, $G(n)$ plays a fundamental role in directly determining the optimal scheduling policy for the group-server queue. Based on the recursive relation of $\bm g$ in (\ref{eq11}), we can also develop the following recursions for computing $G(n)$'s. \begin{lemma} The PRF $G(n)$ can be computed by the following recursive relations \begin{equation}\label{eq_G-Recur} \begin{array}{l} G(n+1) = \frac{d(n)\bm \mu}{\lambda} G(n) + \frac{\eta - f(n,d(n))}{\lambda}, \quad n \geq 1,\\ G(1) = \frac{\eta - f(0,d(0))}{\lambda}.\\ \end{array} \end{equation} \end{lemma} \begin{proof} From the second equation in (\ref{eq11}), we have \begin{equation} G(1) = g(1) - g(0) = \frac{\eta - f(0,d(0))}{\lambda}. \end{equation} Using the first equation in (\ref{eq11}), we have \begin{equation} \lambda(g(n+1) - g(n)) = d(n)\bm \mu (g(n) - g(n-1)) + \eta - f(n,d(n)), \quad n \geq 1. \end{equation} Substituting (\ref{eq_G}) into the above equation, we directly have \begin{equation} G(n+1) = \frac{d(n)\bm \mu}{\lambda} G(n) + \frac{\eta - f(n,d(n))}{\lambda}, \quad n \geq 1. \end{equation} Thus, the recursions for $G(n)$ are proved. \end{proof} Substituting (\ref{eq_G}) into (\ref{eq_diff2}), we obtain the following performance difference formula in terms of $G(n)$ when the scheduling action at a single state $n$ is changed from $\bm m$ to $\bm m'$. \begin{center} \begin{boxedminipage}{1\columnwidth} \begin{equation}\label{eq_diff3} \eta' - \eta = \pi'(n)\sum_{k=1}^{K}(m'_k-m_k) \left(c_k - \mu_k G(n)\right). \end{equation} \vspace{-13pt} \end{boxedminipage} \end{center} This difference formula can be extended to a general case when $d$ is changed to $d'$, i.e., $d(n)$ is changed to $d'(n)$ for all $n \in \mathbb N$. Substituting the associated $(\bm B, \bm f)$ and $(\bm B',\bm f')$ into (\ref{eq_diff}) yields \begin{center} \begin{boxedminipage}{1\columnwidth} \begin{equation}\label{eq_diff4} \eta' - \eta = \sum_{n \in \mathbb N}\pi'(n)\sum_{k=1}^{K}(d'(n,k)-d(n,k)) \left(c_k - \mu_k G(n)\right). \end{equation} \vspace{-13pt} \end{boxedminipage} \end{center} Based on (\ref{eq_diff4}), we can directly obtain a condition for generating an improved policy as follows. \begin{theorem}\label{theorem1} If a new policy $d' \in \mathcal D_e$ satisfies \begin{equation}\label{eq25} (d'(n,k) - d(n,k))\left({c_k} - {\mu_k}G(n)\right) \leq 0 \end{equation} for all $k=1,2,\cdots,K$ and $n \in \mathbb N$, then $\eta' \leq \eta$. Furthermore, if for at least one state-group pair $(n,k)$, the inequality in (\ref{eq25}) strictly holds, then $\eta' < \eta$. \end{theorem} \begin{proof} Since (\ref{eq25}) holds for every $n$ and $k$ and $\pi'(n)$ is always positive for ergodic processes, it follows from (\ref{eq_diff4}) that $\eta' - \eta \leq 0$. Thus, the first part of the theorem is proved. The second part can be proved using a similar argument. \end{proof} Theorem~\ref{theorem1} provides a way to generate improved policies based on the current feasible policy. For the system under the current policy $d$, we compute or estimate $G(n)$'s based on their definition (\ref{eq_G}).
For every state $n$ and server group $k$, if we find $\frac{c_k}{\mu_k} > G(n)$, then we choose a smaller $d'(n,k)$; if we find $\frac{c_k}{\mu_k} < G(n)$, then we choose a larger $d'(n,k)$ satisfying the condition $d'(n)\bm 1 \leq n$, as stated by Proposition~\ref{pro1}. Therefore, according to Theorem~\ref{theorem1}, the new policy $d'$ obtained from this procedure will perform better than the current policy $d$. This procedure can be repeated to continually reduce the system average cost. Note that the condition above is only a sufficient one to generate improved policies. Now, we establish a \emph{necessary and sufficient condition} for the optimal scheduling policy as follows. \begin{theorem}\label{theorem2} A policy $d^*$ is optimal if and only if its element $d^*(n)$, i.e., $(d^*(n,1),\cdots,d^*(n,K))$, is the solution to the following integer linear program (ILP) \begin{equation}\label{eq_ilp} \mbox{\hspace{-2cm}\emph{ILP Problem:}} \hspace{1cm}\left\{ \begin{array}{l} \min\limits_{d(n,k)}\left\{ \sum_{k=1}^{K}d(n,k)(c_k - \mu_k G^*(n)) \right\} \\ \mbox{s.t.} \quad 0 \leq d(n,k) \leq M_k, \\ \hspace{1cm}\sum_{k=1}^{K}d(n,k) \leq n, \end{array} \right. \end{equation} for every state $n \in \mathbb N$, where $G^*(n)$ is the PRF defined in (\ref{eq_G}) under policy $d^*$. \end{theorem} \begin{proof} First, we prove the sufficient condition. Suppose $d^*(n)$ is the solution to the ILP problem (\ref{eq_ilp}), $\forall n \in \mathbb N$. For any other policy $d' \in \mathcal D_e$, we know that it must satisfy the constraints in (\ref{eq_ilp}) and \begin{equation}\label{eq27} \sum_{k=1}^{K}d'(n,k)(c_k - \mu_k G^*(n)) \geq \sum_{k=1}^{K}d^*(n,k)(c_k - \mu_k G^*(n)), \quad \forall n \in \mathbb N, \end{equation} since $d^*(n)$ is the solution to (\ref{eq_ilp}). Substituting (\ref{eq27}) into (\ref{eq_diff4}), we obtain \begin{eqnarray} \eta' - \eta^* = \sum_{n \in \mathbb N}\pi'(n)\sum_{k=1}^{K}(d'(n,k)-d^*(n,k))(c_k - \mu_k G^*(n)) \geq 0, \end{eqnarray} for any $d' \in \mathcal D_e$. Therefore, $\eta^*$ is the minimal average cost of the scheduling problem (\ref{eq_prob}) and $d^*$ is the optimal policy. The sufficient condition is proved. Second, we use contradiction to prove the necessary condition. Assume that the optimal policy $d^*$ is not always the solution to the ILP problem (\ref{eq_ilp}). That is, there exists at least one state $n$ at which another action $d(n)$ solves (\ref{eq_ilp}) and satisfies \begin{equation}\label{eq28} \sum_{k=1}^{K}d(n,k)(c_k - \mu_k G^*(n)) < \sum_{k=1}^{K}d^*(n,k)(c_k - \mu_k G^*(n)). \end{equation} Therefore, we can construct a new policy $d'$ as follows: It chooses the action $d(n)$ at state $n$ only and follows the actions prescribed by $d^*$ at all other states. Substituting $d'$ and $d^*$ into (\ref{eq_diff4}) gives \begin{eqnarray} \eta' - \eta^* = \pi'(n)\sum_{k=1}^{K}(d(n,k)-d^*(n,k))(c_k - \mu_k G^*(n)). \end{eqnarray} Substituting (\ref{eq28}) into the above equation and using the fact $\pi'(n)>0$ for any positive recurrent state $n$, we have $\eta' < \eta^*$, which contradicts the assumption that $d^*$ is the optimal policy. Thus, the assumption does not hold and $d^*$ should be the solution to (\ref{eq_ilp}). The necessary condition is proved. \end{proof} Theorem~\ref{theorem2} indicates that the original scheduling problem (\ref{eq_prob}) can be converted into a series of ILP problems of the form (\ref{eq_ilp}), one at every state $n \in \mathbb N$.
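For a small number of groups, the per-state problem (\ref{eq_ilp}) can be solved directly by enumerating the efficient action space $\mathbb M_n$. A minimal Python sketch is given below; the value of $G^*(n)$ and all parameters in the call are illustrative assumptions.
\begin{verbatim}
from itertools import product

def ilp_action_bruteforce(n, G_n, c, mu, M):
    """Solve the per-state ILP of Theorem 2 by brute force: enumerate all
    actions m with 0 <= m_k <= M_k and sum(m) <= n (Proposition 1), and
    pick one minimizing sum_k m_k * (c_k - mu_k * G(n))."""
    best, best_val = None, float("inf")
    for m in product(*(range(Mk + 1) for Mk in M)):
        if sum(m) > n:
            continue                       # not an efficient action
        val = sum(mk * (ck - muk * G_n) for mk, ck, muk in zip(m, c, mu))
        if val < best_val:
            best, best_val = m, val
    return best

# Illustrative call (all values are assumptions for the sketch):
print(ilp_action_bruteforce(n=6, G_n=1.2, c=[7, 4, 3], mu=[6, 4, 2], M=[3, 4, 3]))
\end{verbatim}
Such enumeration costs $O(\prod_k (M_k+1))$ per state, which quickly becomes prohibitive; the structural analysis below removes the need for it.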
However, it is impossible to directly solve an \emph{infinite} number of ILP problems since the state space is infinite. To get around this difficulty, we further investigate the structure of the solution to these ILPs. By analyzing (\ref{eq_ilp}), we can find that the solution to the ILP problem must have the following structure: \begin{itemize} \item For those groups with $c_k - \mu_k G^*(n) > 0$, we have $d^*(n,k) = 0$; \item For those groups with $c_k - \mu_k G^*(n) < 0$, we set $d^*(n,k) = M_k$, or as large as possible, in ascending order of $c_k - \mu_k G^*(n)$, under the constraint $\sum_{k=1}^{K}d^*(n,k) \leq n$. \end{itemize} Then, we can further specify the above necessary and sufficient condition of the optimal policy as follows. \begin{theorem}\label{theorem3} A policy $d^*$ is optimal if and only if its element $d^*(n)$ satisfies the condition: If $G^*(n) > \frac{c_k}{\mu_k}$, then $d^*(n,k) = M_k \wedge (n-\sum_{l=1}^{k-1}d^*(n,l))$; otherwise, $d^*(n,k) = 0$, for $k=1,2,\cdots,K$, where the index of server groups should be renumbered in ascending order of $c_k-\mu_k G^*(n)$ at the current state $n$, $n \in \mathbb N$. \end{theorem} With Theorem~\ref{theorem3}, we can see that the optimal policy can be fully determined by the value of $c_k-\mu_k G^*(n)$. Such a policy form is called an \emph{index policy} and $c_k - \mu_k G^*(n)$ can be viewed as an index, which is similar to the \emph{Gittins index} or \emph{Whittle index} for solving multi-armed bandit problems \cite{Gittins11,Whittle88}. Theorem~\ref{theorem3} also reveals the \emph{quasi bang-bang control} structure of the optimal policy $d^*$. That is, the optimal number of working servers in group $k$ is either 0 or $M_k$, except for the group that first violates the efficient condition in Proposition~\ref{pro1}. For any state $n$, after the group index is renumbered according to Theorem 3, the optimal action always has the following form \begin{equation}\label{eq34} d^*(n) = (M_1,M_2,\cdots,M_{\hat k-1},M_{\hat k} \wedge (n-\sum_{l=1}^{\hat k-1}M_l), 0,0,\cdots,0), \end{equation} where $\hat k$ is the first group index at which either the capacity constraint $\sum_{l=1}^{\hat k}M_l \leq n$ is violated or the next group is no longer economic, i.e., $c_{\hat k+1} - \mu_{\hat k+1} G^*(n) \geq 0$; formally, \begin{equation}\label{eq_hatK} \hat{k} := \min \left\{k: \sum_{l=1}^{k}M_l > n, \mbox{ or } \frac{c_{k+1}} {\mu_{k+1}} \geq G^*(n) \right\}. \end{equation} Therefore, $\hat k$ can also be viewed as a \emph{threshold} and we have $\hat k \in \{0,1,\cdots,K\}$. Such a policy can be called a \emph{quasi threshold policy} with threshold $\hat k$. Under this policy, the number of servers to be turned on for each group is as follows. \begin{equation} \left\{ \begin{array}{ll} d^*(n,l) = M_l, \quad &\mbox{if }l<\hat k;\\ d^*(n,l) = 0, \quad &\mbox{if }l>\hat k;\\ d^*(n,l) = M_{\hat k} \wedge (n-\sum_{l=1}^{\hat k-1}M_l), \quad &\mbox{if }l=\hat k.\\ \end{array} \right. \end{equation} Once the threshold $\hat k$ is determined, $d^*(n)$ is also determined. Thus, finding $d^*(n)$ becomes finding the threshold $\hat k$, which simplifies the search for the optimal policy. However, we note that the index order of groups is renumbered according to the ascending value of $c_k - \mu_k G^*(n)$, which varies with the state $n$ through the value of $G^*(n)$. On the other hand, the value of $\hat k$ also depends on the system state $n$, $n \in \mathbb N$.
Therefore, the index order of groups and the threshold $\hat k$ will both vary with the state $n$, which makes the quasi threshold policy difficult to implement in practice. To further characterize the optimal policy, we explore its other structural properties. Difference formula (\ref{eq_diff4}) and Theorem~\ref{theorem3} indicate that $c_k - \mu_k G(n)$ is an important quantity to differentiate the server groups. If $c_k - \mu_k G(n) < 0$, turning on servers in group $k$ can reduce the system average cost. Group $k$ can be called an \emph{economic group} for the current system. Therefore, we define $\mathbb K_n$ as the economic group set at the current state $n$ \begin{equation}\label{eq_K} \mathbb K_n := \left\{ k : G(n) > \frac{c_k}{\mu_k} \right\}. \end{equation} We should turn on as many servers in the economic groups $\mathbb K_n$ as possible, subject to $d(n) \bm 1 \leq n$. Note that $G(n)$ reflects the reduction of the holding cost due to operating a server, from a long-run average perspective. With Theorems~\ref{theorem2} and \ref{theorem3}, the optimization problem (\ref{eq_prob}) for each state $n$ can be solved by finding the solution to each subproblem in (\ref{eq_ilp}) with the structure of quasi bang-bang control or quasi threshold form like (\ref{eq34}). However, since the number of ILPs is infinite, we need to establish the monotone property of the PRF $G(n)$, which converts the search over an infinite state space into a search over a finite one. To achieve this goal, we first establish the convexity of the performance potential $g^*(n)$. \begin{theorem}\label{theorem4} The performance potential $g^*(n)$ under the optimal policy $d^*$ is increasing and convex in $n$. \end{theorem} \begin{proof} We prove this theorem by induction. Since the problem (\ref{eq_prob}) is a continuous-time MDP with the long-run average cost criterion, the optimal policy $d^*$ should satisfy the \emph{Bellman optimality equation} as follows. \begin{equation} \min\limits_{\bm m \in \mathbb M_n} \left\{ f(n,\bm m) - \eta^* + \bm B(n,:|\bm m)\bm g^* \right\} = 0, \quad \forall n \in \mathbb N, \end{equation} where $\bm B(n,:|\bm m)$ is the $n$th row of the infinitesimal generator $\bm B$ defined in (\ref{eq_B}) if action $\bm m$ is adopted. Define $\Lambda$ as any constant that is larger than the maximal absolute value of all elements in $\bm B$ under any possible policy. Without loss of generality, we further define \begin{equation}\label{eq_lambda} \Lambda := \sup\limits_{n,\bm m}\{|B(n,n|\bm m)|\} = \lambda + \sum_{k=1}^{K} M_k\mu_{k}. \end{equation} Then we can use the Bellman optimality equation to derive the recursion for value iteration as follows. \begin{eqnarray}\label{eq_gl+1} \hspace{-0.6cm}\Lambda g_{l+1}(n) &=& \min\limits_{\bm m \in \mathbb M_n}\left\{ f(n,\bm m) - \eta_{l} + \sum_{n' \in \mathbb N} B(n,n'|\bm m)g_l(n') + \Lambda g_l(n) \right\} \nonumber\\ &=& \min\limits_{\bm m \in \mathbb M_n}\left\{ h(n) + \bm m \bm c - \eta_{l} + (\Lambda-\lambda-\bm m \bm \mu ) g_l(n) + \lambda g_l(n+1) + \bm m \bm \mu g_l(n-1) \right\}, \end{eqnarray} where the second equality follows from (\ref{eq_f}) and (\ref{eq12}), $g_l(n)$ is the performance potential (relative value function) of state $n$ at the $l$th iteration, and $\eta_l$ is the long-run average cost at the $l$th iteration.
By defining \begin{equation}\label{eq_A} A(n) := h(n) + \bm m \bm c - \eta_{l} + (\Lambda-\lambda-\bm m \bm \mu ) g_l(n) + \lambda g_l(n+1) + \bm m \bm \mu g_l(n-1), \end{equation} we can rewrite (\ref{eq_gl+1}) as \begin{equation}\label{eq_gl+2} \Lambda g_{l+1}(n) = \min\limits_{\bm m \in \mathbb M_n}\left\{ A(n) \right\}. \end{equation} It is well known from the MDP theory \cite{Puterman94} that the initial value of $g_0$ can be any value. Therefore, we set $g_0(n) = 0$ for all $n$, which satisfies the increasing and convex property. Now we use induction to establish this property. Suppose $g_l(n)$ is increasing and convex in $n$. We need to show that $g_{l+1}(n)$ also has this property. Once this is done, we know that $g_l(n)$ is increasing and convex in $n$ for all $l$. In addition, the value iteration converges to the optimal value function, i.e., \begin{equation}\label{eq_gl} \lim\limits_{l \rightarrow \infty}g_l(n) = g^*(n), \quad n \in \mathbb N. \end{equation} Therefore, we can conclude that $g^*(n)$ is increasing and convex in $n$. The induction is completed in two steps. First step, we prove the increasing property of $g_{l+1}(n)$ or $g_{l+1}(n+1) - g_{l+1}(n) \geq 0$. Using (\ref{eq_gl+2}), we have \begin{equation}\label{eq35} \Lambda [g_{l+1}(n+1) - g_{l+1}(n)] = \min\limits_{\bm m \in \mathbb M_{n+1}} \{ A(n+1) \} - \min\limits_{\bm m \in \mathbb M_n}\left\{A(n) \right\}. \end{equation} Denote $\bm m^*_{n+1}$ as the optimal action in $\mathbb M_{n+1}$, which achieves the minimum for $A(n+1)$ in (\ref{eq_gl+2}). Below, we want to use $\bm m^*_{n+1}$ to remove the operators $\min\limits_{\bm m \in \mathbb M_n}\{\cdot\}$ in (\ref{eq35}), which has to be discussed in two cases, according to whether $\bm m^*_{n+1} \in \mathbb M_n$. \noindent Case \textcircled{1}: If $\bm m^*_{n+1} \in \mathbb M_n$, we can directly use $\bm m^*_{n+1}$ to replace $\min\limits_{\bm m \in \mathbb M_n}\{\cdot\}$ in (\ref{eq35}) and obtain \begin{eqnarray}\label{eq36} \Lambda [g_{l+1}(n+1) - g_{l+1}(n)] &\geq& A(n+1)|_{\bm m^*_{n+1}} - A(n)|_{\bm m^*_{n+1}}\nonumber\\ &&\hspace{-4cm}= h(n+1) + \bm m^*_{n+1} \bm c - \eta_{l} + (\Lambda-\lambda-\bm m^*_{n+1} \bm \mu ) g_l(n+1) + \lambda g_l(n+2) + \bm m^*_{n+1} \bm \mu g_l(n) \nonumber\\ && \hspace{-3.6cm} - [h(n) + \bm m^*_{n+1} \bm c - \eta_{l} + (\Lambda-\lambda-\bm m^*_{n+1} \bm \mu ) g_l(n) + \lambda g_l(n+1) + \bm m^*_{n+1} \bm \mu g_l(n-1) ] \nonumber\\ &&\hspace{-4cm}= [h(n+1) - h(n)] + (\Lambda-\lambda-\bm m^*_{n+1} \bm \mu ) [g_l(n+1)-g_l(n)] \nonumber\\ && \hspace{-3.6cm} + \lambda[g_l(n+2)-g_l(n+1)] + \bm m^*_{n+1} \bm \mu [g_l(n)-g_l(n-1)]. \end{eqnarray} Since $h(n)$ is increasing in $n$ by Assumption~\ref{assumption1}, the first term on the RHS of (\ref{eq36}) is non-negative. Moreover, we already assume that $g_l(n)$ is increasing in $n$. We also know that $(\Lambda-\lambda-\bm m^*_{n+1} \bm \mu ) \geq 0$ from the definition (\ref{eq_lambda}). Therefore, with (\ref{eq36}), we have $g_{l+1}(n+1) - g_{l+1}(n) \geq 0$ in this case. \noindent Case \textcircled{2}: If $\bm m^*_{n+1} \notin \mathbb M_n$, then $\bm m^*_{n+1} \bm 1 = n+1 > n$, violating the condition in Proposition~\ref{pro1}. In this case, we select an action $\bm \alpha$ as below. \begin{equation} \bm \alpha = \bm m^*_{n+1} - \bm e_1, \quad \bm \alpha \in \mathbb M_n, \end{equation} where $\bm e_1$ is a zero vector except that one properly chosen element is 1, so that every element of $\bm \alpha$ is nonnegative.
We use $\bm \alpha$ to replace $\min\limits_{\bm m \in \mathbb M_n}\{\cdot\}$ in (\ref{eq35}) and obtain \begin{eqnarray}\label{eq36b} \Lambda [g_{l+1}(n+1) - g_{l+1}(n)] &\geq& A(n+1)|_{\bm m^*_{n+1}} - A(n)|_{\bm \alpha}\nonumber\\ &&\hspace{-4cm}= h(n+1) + \bm m^*_{n+1} \bm c - \eta_{l} + (\Lambda-\lambda-\bm m^*_{n+1} \bm \mu ) g_l(n+1) + \lambda g_l(n+2) + \bm m^*_{n+1} \bm \mu g_l(n) \nonumber\\ && \hspace{-3.6cm} - [h(n) + \bm \alpha \bm c - \eta_{l} + (\Lambda-\lambda-\bm \alpha \bm \mu ) g_l(n) + \lambda g_l(n+1) + \bm \alpha \bm \mu g_l(n-1) ] \nonumber\\ &&\hspace{-4cm}= [h(n+1) - h(n)] + \bm e_1 \bm c + \lambda[g_l(n+2)-g_l(n+1)] + (\Lambda-\lambda-\bm m^*_{n+1} \bm \mu ) g_l(n+1) \nonumber\\ &&\hspace{-3.6cm} + \bm m^*_{n+1} \bm \mu g_l(n) - [(\Lambda-\lambda-\bm \alpha \bm \mu ) g_l(n) + \bm \alpha \bm \mu g_l(n-1)] . \end{eqnarray} Since $g_l(n)$ is increasing in $n$, we have \begin{eqnarray} (\Lambda-\lambda-\bm m^*_{n+1} \bm \mu ) g_l(n+1) + \bm m^*_{n+1} \bm \mu g_l(n) \geq (\Lambda-\lambda ) g_l(n); \\ - [(\Lambda-\lambda-\bm \alpha \bm \mu ) g_l(n) + \bm \alpha \bm \mu g_l(n-1)] \geq -(\Lambda-\lambda ) g_l(n). \end{eqnarray} Substituting the above equations into (\ref{eq36b}), we have \begin{equation} \Lambda [g_{l+1}(n+1) - g_{l+1}(n)] \geq [h(n+1) - h(n)] + \bm e_1 \bm c + \lambda[g_l(n+2)-g_l(n+1)] > 0. \end{equation} Combining cases \textcircled{1}\&\textcircled{2}, we always have $g_{l+1}(n+1) - g_{l+1}(n) \geq 0$ and the increasing property of $g_{l+1}(n)$ is proved. Second step, we prove the convex property of $g_{l+1}(n)$ or $g_{l+1}(n+1) - 2g_{l+1}(n) + g_{l+1}(n-1) \geq 0$. We denote $\bm m^*_{n-1}$ as the optimal action in $\mathbb M_{n-1}$, which achieves the minimum for $A(n-1)$ in (\ref{eq_gl+2}). From (\ref{eq_gl+2}), we have \begin{eqnarray}\label{eq43a} \Lambda [g_{l+1}(n+1) - 2g_{l+1}(n) + g_{l+1}(n-1)] &=& \hspace{-0.5cm} \min\limits_{\bm m \in \mathbb M_{n+1}} \{ A(n+1) \} - 2 \min\limits_{\bm m \in \mathbb M_{n}}\left\{A(n) \right\} + \min\limits_{\bm m \in \mathbb M_{n-1}} \{ A(n-1) \}\nonumber\\ &=& A(n+1)|_{\bm m^*_{n+1}} - 2 \min\limits_{\bm m \in \mathbb M_n}\left\{A(n) \right\} + A(n-1)|_{\bm m^*_{n-1}}. \end{eqnarray} Similarly, we select actions to replace the operators $\min\limits_{\bm m \in \mathbb M_n}\{\cdot\}$ in (\ref{eq43a}). The actions are generated from $\bm m^*_{n+1}$ and $\bm m^*_{n-1}$ and they should belong to the feasible set $\mathbb M_n$. We select two actions $\bm \alpha_1$ and $\bm \alpha_2$ that satisfy \begin{equation}\label{eq_alpha} \bm \alpha_1 + \bm \alpha_2 = \bm m^*_{n+1} + \bm m^*_{n-1}, \quad \bm \alpha_1, \bm \alpha_2 \in \mathbb M_n. \end{equation} For example, when $n$ is large enough and the condition in Proposition~\ref{pro1} is always satisfied, we can simply select $\bm \alpha_1 = \bm m^*_{n+1}$ and $\bm \alpha_2 = \bm m^*_{n-1}$. For other cases where the condition in Proposition~\ref{pro1} may be violated, we can properly adjust the number of working servers based on $\bm m^*_{n+1}$ and $\bm m^*_{n-1}$ and always find feasible $\bm \alpha_1$ and $\bm \alpha_2$. It is easy to verify and we omit the details for simplicity. Therefore, we use $\bm \alpha_1$ and $\bm \alpha_2$ to replace the operators $\min\limits_{\bm m \in \mathbb M_n}\{\cdot\}$ of the two $A(n)$'s in (\ref{eq43a}) and obtain \begin{equation}\label{eq43e} \Lambda [g_{l+1}(n+1) - 2g_{l+1}(n) + g_{l+1}(n-1)] \geq A(n+1)|_{\bm m^*_{n+1}} - A(n)|_{\bm \alpha_1} - A(n)|_{\bm \alpha_2} + A(n-1)|_{\bm m^*_{n-1}}. 
\end{equation} Substituting (\ref{eq_A}) into the above equation, we have \begin{eqnarray} \Lambda [g_{l+1}(n+1) - 2g_{l+1}(n) + g_{l+1}(n-1)] &\geq& [h(n+1)-2h(n)+h(n-1)] + [\bm m^*_{n+1}\bm c - (\bm \alpha_1+\bm \alpha_2)\bm c \nonumber\\ && \hspace{-8cm} + \bm m^*_{n-1}\bm c] + (\Lambda-\lambda-\bm m^*_{n+1}\bm \mu)g_l(n+1) - (2\Lambda-2\lambda-\bm \alpha_1\bm \mu - \bm \alpha_2\bm \mu) g_l(n) + (\Lambda-\lambda-\bm m^*_{n-1}\bm \mu)g_l(n-1) \nonumber\\ && \hspace{-8cm} + \lambda[g_l(n+2)-2g_l(n)+g_l(n-1)] + \bm m^*_{n+1} \bm \mu g_l(n) - (\bm \alpha_1\bm \mu + \bm \alpha_2\bm \mu) g_l(n-1) + \bm m^*_{n-1}\bm \mu g_l(n-2). \end{eqnarray} Since $h(n)$ is convex in $n$ (Assumption~\ref{assumption1}), we have $h(n+1)-2h(n)+h(n-1) \geq 0$. Moreover, $g_l(n)$ is assumed to be increasing and convex in $n$; writing $g_l(n+2)-2g_l(n)+g_l(n-1) = [g_l(n+2)-2g_l(n+1)+g_l(n)] + [g_l(n+1)-2g_l(n)+g_l(n-1)] + [g_l(n+1)-g_l(n)]$ shows that $g_l(n+2)-2g_l(n)+g_l(n-1) \geq 0$ holds. By further utilizing (\ref{eq_alpha}), we can derive \begin{equation} \bm m^*_{n+1}\bm c - (\bm \alpha_1+\bm \alpha_2)\bm c + \bm m^*_{n-1}\bm c = 0, \nonumber \end{equation} \begin{equation} (\Lambda-\lambda-\bm m^*_{n+1}\bm \mu)g_l(n+1) - (2\Lambda-2\lambda-\bm \alpha_1\bm \mu - \bm \alpha_2\bm \mu) g_l(n) + (\Lambda-\lambda-\bm m^*_{n-1}\bm \mu)g_l(n-1) \geq 0, \nonumber \end{equation} \begin{equation} \bm m^*_{n+1} \bm \mu g_l(n) - (\bm \alpha_1\bm \mu + \bm \alpha_2\bm \mu) g_l(n-1) + \bm m^*_{n-1}\bm \mu g_l(n-2) \geq 0. \nonumber \end{equation} Therefore, we know $g_{l+1}(n+1) - 2g_{l+1}(n) + g_{l+1}(n-1) \geq 0$ and the convex property of $g_{l+1}(n)$ is proved. In summary, we have proved that $g_l(n)$ is increasing and convex in $n$ by induction, $\forall l \in \mathbb N$. Therefore, $g^*(n)$ is also increasing and convex in $n$ by (\ref{eq_gl}). This completes the proof. \end{proof} Since $G(n) = g(n) - g(n-1)$, from Theorem~\ref{theorem4}, we can directly derive the following theorem about the monotone property of $G^*(n)$. \begin{theorem}\label{theorem5} The PRF $G^*(n)$ under the optimal policy $d^*$ is nonnegative and increasing in $n$. \end{theorem} Note that $G(n)$ plays a fundamental role in (\ref{eq_diff3}) and (\ref{eq_diff4}). Thus, the increasing property of $G^*(n)$ enables us to establish the monotone structure of optimal policy $d^*$ as follows. \begin{theorem}\label{theorem6_monotoned} The optimal total number of working servers is increasing in $n$. In other words, we have $||d^*(n+1)||_1 \geq ||d^*(n)||_1$, $\forall n \in \mathbb N$. \end{theorem} \begin{proof} Similar to (\ref{eq_K}), we define $\mathbb K^*_n$ as the set of economic groups under the optimal policy $d^*$ \begin{equation}\label{eq_Ka} \mathbb K^*_n := \left\{ k : G^*(n) > \frac{c_k}{\mu_k} \right\}. \end{equation} Note that $G^*(n+1) \geq G^*(n)$ from Theorem~\ref{theorem5} implies that any $k \in \mathbb K^*_n$ also belongs to $\mathbb K^*_{n+1}$, i.e., \begin{equation}\label{eq44a} \mathbb K^*_n \subseteq \mathbb K^*_{n+1}. \end{equation} Theorem~\ref{theorem3} indicates that the optimal total number of working servers equals \begin{equation}\label{eq45a} ||d^*(n)||_1 = \sum_{k \in \mathbb K^*_n} d^*(n,k) = \left\{ \begin{array}{ll} \sum_{k \in \mathbb K^*_n} M_k, & \mbox{ if } n \geq \sum_{k \in \mathbb K^*_n} M_k;\\ n, & \mbox{ if } n < \sum_{k \in \mathbb K^*_n} M_k;\\ \end{array} \right. \end{equation} which means \begin{equation}\label{eq46a} ||d^*(n)||_1 = n \wedge \sum_{k \in \mathbb K^*_n} M_k. \end{equation} Therefore, for state $n+1$, we have \begin{equation}\label{eq46b} ||d^*(n+1)||_1 = (n+1) \wedge \sum_{k \in \mathbb K^*_{n+1}} M_k.
\end{equation} Utilizing (\ref{eq44a}) and comparing (\ref{eq46a}) and (\ref{eq46b}), we directly obtain \begin{equation} ||d^*(n+1)||_1 \geq ||d^*(n)||_1. \end{equation} This completes the proof. \end{proof} Theorem~\ref{theorem6_monotoned} rigorously confirms an intuitive result that when the queue length increases, more servers should be turned on to alleviate the system congestion, which is also the essence of the congestion-based staffing policy \cite{Zhang09}. However, it does not mean that the number of working servers in a particular group is necessarily monotone increasing in $n$ (an example is shown in Fig.~\ref{fig_ex1-b} in Section~\ref{section_numerical}). A more detailed discussion will be given in Theorem~\ref{theorem8_monotoned} in the next section. Based on Theorem~\ref{theorem6_monotoned}, we can further obtain the following result. \begin{corollary}\label{corollary1} For a state $\bar{n}$, if $d^*(\bar{n}, k) = M_k$, $\forall k$, then $d^*(n, k) = M_k$, $\forall k$ and $n \geq \bar{n}$. \end{corollary} \noindent\textbf{Remark 2.} Corollary~\ref{corollary1} again confirms an intuitive result that once the optimal action is turning on all servers at a certain state $\bar{n}$, the same action is optimal for all states larger than $\bar{n}$. Therefore, the search for the optimal policy can be limited to the states $n < \bar{n}$ and the infinite state space is truncated without loss of optimality. The difficulty of searching over the infinite state space for the optimal policy can be avoided. There exists a finite queue length $\bar{n}$ at which all servers in all groups must be turned on. The existence of such an $\bar{n}$ is guaranteed by the linear operating cost and the increasing convex holding cost function (Assumption 1), which can be verified by a simple argument. Assume that, under a given scheduling policy, no such $\bar{n}$ exists; that is, at least one server stays idle no matter how long the queue is (for all states). Since the holding cost function is increasing and convex, when the queue length is long enough, the holding cost reduction from the additional work completed by turning on the idle server must exceed the constant increase in the server's operating cost. Then, turning on the idle server at this state must reduce the system average cost. Therefore, the assumed policy cannot be optimal, and an optimal policy must eventually turn on every server, which establishes the existence of $\bar{n}$. Based on Theorems~\ref{theorem1} and \ref{theorem3}, we can design a procedure to find the optimal scheduling policy. First, compute the value of $G(n)$ from Lemma 1 and determine the set $\mathbb K_n$ defined in (\ref{eq_K}). If $n \geq \sum_{k \in \mathbb K_n} M_k$, turn on all the servers in groups belonging to $\mathbb K_n$. If $n < \sum_{k \in \mathbb K_n} M_k$, renumber the group indexes and set $d(n,k) = M_k \wedge (n-\sum_{l=1}^{k-1}d(n,l))$, as stated in Theorem~\ref{theorem3}. All the other servers should be off. This process is repeated for $n=1,2,\cdots$, until the state $\bar{n}$ satisfying the condition in Corollary~\ref{corollary1} is reached. Set $d(n,k)=M_k$ for all $k$ and $n \geq \bar{n}$ so that the whole policy $d$ is determined. Then, we iterate the above procedure under this new policy $d$. New improved policies will be repeatedly generated until the policy cannot be improved and the procedure stops. Based on this procedure, we develop the following Algorithm~\ref{algo1} to find the optimal scheduling policy.
\begin{algorithm}[htbp] \caption{An iterative algorithm to find the optimal scheduling policy}\label{algo1} \begin{algorithmic}[1] \State choose a proper initial policy $d^*$, e.g., $d^*(n,k)=M_k$, $\forall n, k$, which turns on all servers; \Repeat \State set $d = d^*$, $d^* = \bm 0$, and $n=0$; \State compute or estimate $\eta$ of the system under policy $d$; \Repeat \State set $n = n+1$; \State compute $G(n)$ by using (\ref{eq_G-Recur}) recursively or by solving (\ref{eq_poisson}) and (\ref{eq_G}); \State compute $\mathbb K_n$ using (\ref{eq_K}); \If{$n \geq \sum_{k \in \mathbb K_n}M_k$} \State set $d^*(n,k) = M_k$, $\forall k \in \mathbb K_n$; \Else \State set $d^*(n,k) = M_k \wedge (n-\sum_{l=1}^{k-1}d^*(n,l))$, where $k \in \mathbb K_n$ and group indexes $k$'s are renumbered according to ascending order of $c_k - \mu_k G(n)$, as stated in Theorem~\ref{theorem3}; \EndIf \Until{$d^*(n,k) = M_k, \forall k$} \State set $\bar{n} = n$; \State set $d^*(n,k) = M_k$, $\forall n \geq \bar{n}, \forall k$; \Until{$d = d^*$} \Return optimal $d^*$. \end{algorithmic} \end{algorithm} From Algorithm~\ref{algo1}, we can see that this algorithm iteratively generates better policies. This procedure is similar to the policy iteration widely used in the traditional MDP theory. Note that the index order of groups should be renumbered at every state $n$, as stated in line 12 of Algorithm~\ref{algo1}. Since the server groups are ranked by the index based on $c_k - \mu_k G^*(n)$, the index sequence varies with state $n$. Moreover, although the total number of working servers $||d^*(n)||_1$ is increasing in $n$, $d^*(n,k)$ is not necessarily monotone increasing in $n$ for a particular group $k$. This means that it is possible that for some $n$ and $k$, we have $d^*(n,k) > d^*(n+1,k)$, as shown in Fig.~\ref{fig_ex1-b} in Section~\ref{section_numerical}. These complications may make it difficult to implement the optimal scheduling policy in practical service systems with human servers, as the servers have to be turned on or off without a regular pattern. However, as we will discuss in the next section, the group index sequence can remain unchanged if the ratio of cost rate to service rate satisfies a reasonable condition. Then, we can develop a simpler optimal scheduling policy obeying the $c/\mu$-rule, which is much easier to implement in practice. \section{The $c/{\mu}$-Rule}\label{section_rule} We further study the optimal scheduling policy for the group-server queue when scale economies in terms of $c/{\mu}$ ratios exist. \begin{assumption}\label{assumption2} (Scale Economies) If the server groups are sorted in the order of $\frac{c_1}{\mu_1} \leq \frac{c_2}{\mu_2} \leq \cdots \leq \frac{c_K}{\mu_K}$, then their service rates satisfy $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_K$. \end{assumption} This assumption is reasonable in some practical situations as it means that a faster server has a smaller operating cost rate per unit of service rate. This can be explained by \emph{the effect of scale economies}. For example, in a data center, a faster computer usually has a lower cost per unit of computing capacity. With Assumption~\ref{assumption2}, we can verify that the group index according to the ascending order of $c_k - \mu_k G^*(n)$ remains unchanged, regardless of the value of $G^*(n)$. The ascending order of $c_k - \mu_k G^*(n)$ is always the same as the ascending order of $c_k/\mu_k$. That is, the optimal policy structure in Theorem~\ref{theorem3} can be characterized as follows.
\begin{theorem}\label{theorem7} With Assumption~\ref{assumption2}, a policy $d^*$ is optimal if and only if it satisfies the condition: If $G^*(n) > \frac{c_k}{\mu_k}$, then $d^*(n,k) = M_k \wedge (n-\sum_{l=1}^{k-1}d^*(n,l))$; otherwise, $d^*(n,k) = 0$, for $k=1,2,\cdots,K$, $n \in \mathbb N$. \end{theorem} Theorem~\ref{theorem7} implies that the optimal policy $d^*$ follows a simple rule called \emph{the $c/{\mu}$-rule}: \emph{Servers in the group with smaller $c/{\mu}$ ratio should be turned on with higher priority}. This rule is very easy to implement as the group index renumbering for each state in Theorem~\ref{theorem3} is not needed anymore. As mentioned earlier, the $c/{\mu}$-rule can be viewed as a counterpart of the famous \emph{$c \mu$-rule} for the scheduling of polling queues \cite{Smith56,VanMieghem95}, in which the queue with greater $c \mu$ will be given higher priority to be served by the single service facility. Using the monotone increasing property of $G^*(n)$ in Theorem~\ref{theorem5} and Assumption~\ref{assumption2}, we can further characterize the monotone structure of the optimal policy $d^*$ as follows. \begin{theorem}\label{theorem8_monotoned} The optimal scheduling action $d^*(n,k)$ is increasing in $n$, $\forall k=1,2,\cdots,K$. \end{theorem} \begin{proof} First, it follows from Theorem~\ref{theorem5} that \begin{equation} G^*(n+1) \geq G^*(n), \qquad \forall n \in \mathbb N. \end{equation} From Theorem~\ref{theorem7}, we know that for any state $n$, if $G^*(n) > \frac{c_k}{\mu_k}$, the optimal action is \begin{equation}\label{eq44} d^*(n,k) = M_k \wedge (n-\sum_{l=1}^{k-1}d^*(n,l)). \end{equation} Therefore, for state $n+1$, we have $G^*(n+1) \geq G^*(n) > \frac{c_k}{\mu_k}$. Thus, the optimal action is \begin{equation}\label{eq45} d^*(n+1,k) = M_k \wedge (n+1 -\sum_{l=1}^{k-1}d^*(n+1,l)). \end{equation} Since $d^*(n)$ has a quasi threshold structure, there exists a certain $\hat k$ defined in (\ref{eq_hatK}) such that $d^*(n)$ has the following form \begin{equation} d^*(n) = (M_1,M_2,\cdots,M_{\hat k-1},M_{\hat k} \wedge (n-\sum_{l=1}^{\hat k-1}M_l), 0,\cdots,0). \end{equation} Therefore, with (\ref{eq44}) and (\ref{eq45}) we have \begin{equation} d^*(n+1,l) = M_l = d^*(n,l), \qquad l=1,2,\cdots, \hat k-1. \end{equation} For group $\hat k$, we have \begin{equation}\label{eq46} d^*(n+1,\hat k) = M_{\hat k} \wedge (n+1 -\sum_{l=1}^{\hat k-1}M_l) \ \geq \ d^*(n,\hat k) = M_{\hat k} \wedge (n -\sum_{l=1}^{\hat k-1}M_l). \end{equation} More specifically, if $d^*(n,\hat k) < M_{\hat k}$, the inequality in (\ref{eq46}) strictly holds and we have $d^*(n+1) = (M_1,M_2,\cdots,$ $M_{\hat k-1},d^*(n,\hat k)+1, 0,\cdots,0)$. If $d^*(n,\hat k) = M_{\hat k}$, the equality in (\ref{eq46}) holds and we have $d^*(n+1) = (M_1,M_2,\cdots,M_{\hat k-1},M_{\hat k}, 1,0,\cdots,0)$ for $\hat k < K$ or $d^*(n+1) = (M_1,M_2,\cdots,M_{K-1},M_{K})$ for $\hat k=K$. Therefore, we always have $d^*(n+1,k) \geq d^*(n,k)$, $\forall k=1,2,\cdots,K$. This completes the proof. \end{proof} \noindent\textbf{Remark 3.} Theorem~\ref{theorem8_monotoned} implies that $d^*(n)$ is increasing in $n$ in the vector sense. That is, $d^*(n+1) \geq d^*(n)$ componentwise. Therefore, we certainly have $||d^*(n+1)||_1 \geq ||d^*(n)||_1$, as indicated in Theorem~\ref{theorem6_monotoned}.
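Because Assumption~\ref{assumption2} fixes the group ordering once and for all, the action prescribed by the $c/\mu$-rule at any state takes only a few lines of code. A minimal Python sketch follows; the value of $G^*(n)$ and all parameters in the call are illustrative assumptions, and the groups are assumed to be pre-sorted in ascending order of $c_k/\mu_k$.
\begin{verbatim}
def cmu_rule_action(n, G_n, c, mu, M):
    """c/mu-rule of Theorem 7: with groups pre-sorted so that
    c[0]/mu[0] <= c[1]/mu[1] <= ..., fill economic groups in order,
    subject to at most n working servers in total (Proposition 1)."""
    action, budget = [0] * len(c), n
    for k in range(len(c)):
        if G_n <= c[k] / mu[k] or budget == 0:
            break                 # this group and all later ones stay off
        action[k] = min(M[k], budget)
        budget -= action[k]
    return action

# Illustrative call: the two cheapest groups (per unit of service rate)
# are filled and the most expensive one stays off, returning [3, 4, 0].
print(cmu_rule_action(n=9, G_n=2.2, c=[7, 8, 5], mu=[6, 4, 2], M=[3, 4, 3]))
\end{verbatim}
No per-state renumbering is performed inside the loop, which is exactly the simplification the $c/\mu$-rule brings over the general index policy of Theorem~\ref{theorem3}.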
Using the monotone property of $G^*(n)$ in Theorem~\ref{theorem5} and the $c/{\mu}$-rule in Theorem~\ref{theorem7}, we can directly obtain the multi-threshold policy as the optimal policy, which can be viewed as a generalization of the two-threshold CBS policy in our previous study \cite{Zhang09}. \begin{theorem}\label{theorem9} The optimal policy $d^*$ has a multi-threshold form with thresholds $\theta_k$: If $n \geq \theta_{k}$, the maximum number of servers in group $k$ should be turned on, $\forall n \in \mathbb N$, $k=1,2,\cdots,K$. \end{theorem} \begin{proof} We know that $G^*(n)$ increases with $n$ from Theorem~\ref{theorem5}. For any particular group $k$, we can define a threshold as \begin{equation} \theta_k := \min\left\{n: G^*(n)>\frac{c_k}{\mu_k}\right\}, \quad k=1,2,\cdots,K. \end{equation} Therefore, for any $n \geq \theta_k$, $G^*(n)>\frac{c_k}{\mu_k}$ and the servers in group $k$ should be turned on as many as possible, according to the $c/\mu$-rule in Theorem~\ref{theorem7}. Thus, the optimal scheduling for servers in group $k$ has a threshold form with threshold $\theta_k$ and the theorem is proved. \end{proof} Note that ``the maximum number of servers in group $k$ should be turned on'' in Theorem~\ref{theorem9} means that the optimal action $d^*(n,k)$ should obey the constraint in Proposition~\ref{pro1}, i.e., $d^*(n,k) = M_k \wedge (n-\sum_{l=1}^{k-1}d^*(n,l))$. Theorem~\ref{theorem9} implies that the policy of the original problem (\ref{eq_prob}) can be represented by a $K$-dimensional threshold vector $\bm \theta$ as below. \begin{equation} \bm \theta := (\theta_1,\theta_2,\cdots,\theta_K), \end{equation} where $\theta_k \in \mathbb N$. With the monotone property of $G^*(n)$ and Theorem~\ref{theorem7}, we can directly derive that $\theta_k$ is monotone in $k$, i.e., \begin{equation} \theta_1 \leq \theta_2 \leq \cdots \leq \theta_K. \end{equation} Once $\bm \theta$ is given, the associated policy $d$ can be recursively determined as below. \begin{equation}\label{eq_threshold_d} \begin{array}{ll} d(n,k) = M_k \wedge \Big(n - \sum_{l=1}^{k-1} d(n,l)\Big), & \mbox{if } n \geq \theta_k; \\ d(n,k) = 0, & \mbox{if } n < \theta_k; \\ \end{array} \end{equation} where $n \in \mathbb N$, $k=1,2,\cdots,K$. We can further obtain the constant optimal threshold for group 1. \begin{theorem}\label{theorem_th1} The optimal threshold of group 1 is always $\theta^*_1 = 1$, that is, we should always utilize the most efficient server group whenever any customer is present in the system. \end{theorem} \begin{proof} We use a contradiction argument to prove this theorem. Assume that the optimal threshold policy is $\bm \theta^*$ with $\theta^*_1 > 1$. We denote by $\bm X^{\bm \theta^*}=\{X^{\bm \theta^*}(t), t \geq 0\}$ the stochastic process of the queueing system under this policy $\bm \theta^*$, where $X^{\bm \theta^*}(t)$ is the system state at time $t$. We construct another threshold policy as $\tilde{\bm \theta} = \bm \theta^* - (\theta^*_1 - 1)\bm 1^T$, where $\bm 1$ is a $K$-dimensional column vector with 1's. Therefore, we know $\tilde{\theta_1} = 1$. We denote by $\bm X^{\tilde{\bm \theta}}$ the stochastic process of the queueing system under policy $\tilde{\bm \theta}$. As $\theta^*_1 > 1$, we know that any state in the set $\{0,1,\cdots,\theta^*_1-2\}$ is a \emph{transient state} of $\bm X^{\bm \theta^*}$.
Since transient states have no contribution to the long-run average cost $\eta$, the statistics of $\bm X^{\tilde{\bm \theta}}$ are equivalent to those of $\{X^{\bm \theta^*}(t) - (\theta^*_1 - 1)\}$ if we omit the transient states. Since the holding cost $h(n)$ is an increasing convex function in $n$, it is easy to verify that $\eta^{\bm \theta^*} \geq \eta^{\tilde{\bm \theta}}$. That is, if we simultaneously decrease the thresholds of $\bm \theta^*$ to $\tilde{\bm \theta}$, the system average cost will not increase. Therefore, the assumption is not true and $\theta^*_1 = 1$ is proved. \end{proof} Theorem~\ref{theorem9} indicates that the optimization problem (\ref{eq_prob}) over an infinite state space is converted to the problem of finding the optimal thresholds $\theta^*_k$, where $k=1,2,\cdots,K$. Denoting by $\mathbb N^{K}_{\uparrow}$ a $K$-dimensional positive integer space with its elements satisfying $\theta_1 \leq \theta_2 \leq \cdots \leq \theta_K$, the original problem (\ref{eq_prob}) can be rewritten as \begin{equation}\label{eq_prob2} \bm \theta^* = \argmin\limits_{\bm \theta \in \mathbb N^{K}_{\uparrow}}\{\eta^{\bm \theta}\}. \end{equation} Therefore, the state-action mapping policy ($\mathbb N \rightarrow \mathbb M$) is replaced by a parameterized policy with thresholds $\bm \theta$. The original policy space is reduced from an infinite-dimensional space $\mathcal D$ to a $K$-dimensional integer space $\mathbb N^{K}_{\uparrow}$. The \emph{curse of dimensionality} of action space $\mathbb M$ and the optimal policy search over an infinite state space $\mathbb N$ can be avoided by focusing on the multi-threshold policies. To illustrate the procedure of policy space reduction, we give an example of a 2-group server queue, illustrated in Fig.~\ref{fig_Policyredc}. We observe that the policy space is significantly reduced after applying Theorems~\ref{theorem3}, \ref{theorem7}, and \ref{theorem9}, which identify the optimality structures. Since $\theta^*_1 = 1$ according to Theorem~\ref{theorem_th1}, we only need to search for $K-1$ optimal thresholds. Sometimes we also treat $\theta^*_1$ as a variable in order to maintain a unified presentation. \begin{figure}[htbp] \center \includegraphics[width=1\columnwidth]{Fig_policyredc.eps} \caption{Illustration of policy space reduction with an example of 2-group server queue.}\label{fig_Policyredc} \end{figure} By utilizing the $c/{\mu}$-rule and the optimality of the multi-threshold policy, we can further simplify Algorithm~\ref{algo1} to find the optimal threshold policy $\bm \theta^*$, which is described as Algorithm~\ref{algo2}. We can see that Algorithm~\ref{algo2} iteratively updates the threshold policy $\bm \theta$. Consider two threshold policies $\bm \theta$ and $\bm \theta'$ generated from two successive iterations, respectively, by using Algorithm~\ref{algo2}. $d'(n,k)$ and $d(n,k)$ are the associated scheduling actions determined by (\ref{eq_threshold_d}) based on $\bm \theta'$ and $\bm \theta$, respectively. From (\ref{eq_threshold_d}), we see that $d'(n,k)$ and $d(n,k)$ have the following relation.
\begin{equation}\label{eq_threshold_dd'} \begin{array}{ll} d'(n,k) = d(n,k) = M_k \wedge \Big(n - \sum_{l=1}^{k-1} d(n,l)\Big), & \mbox{if } n \geq \theta_k' \vee \theta_k; \\ d'(n,k) = d(n,k) = 0, & \mbox{if } n < \theta_k' \wedge \theta_k; \\ d'(n,k) = 0, d(n,k) = M_k \wedge \Big(n - \sum_{l=1}^{k-1} d(n,l)\Big), & \mbox{if } (\theta_k' \wedge \theta_k \leq n < \theta_k' \vee \theta_k) \ \& \ (\theta_k'>\theta_k); \\ d(n,k) = 0, d'(n,k) = M_k \wedge \Big(n - \sum_{l=1}^{k-1} d'(n,l)\Big), & \mbox{if } (\theta_k' \wedge \theta_k \leq n < \theta_k' \vee \theta_k) \ \& \ (\theta_k'<\theta_k). \\ \end{array} \end{equation} Substituting (\ref{eq_threshold_dd'}) into (\ref{eq_diff4}), we can derive the following performance difference formula that quantifies the effect of the change of threshold policy from $\bm \theta$ to $\bm \theta'$, where $\bm \theta, \bm \theta' \in \mathbb N^{K}_{\uparrow}$. \begin{center} \begin{boxedminipage}{1\columnwidth} \begin{equation}\label{eq_diff5} \eta' - \eta = \sum_{k=1}^{K}\sum_{n=\theta_k' \wedge \theta_k}^{(\theta_k' \vee \theta_k)-1}\pi'(n)\left(d'(n,k) - d(n,k)\right) \left(c_k - \mu_k G(n)\right), \end{equation} \vspace{-13pt} \end{boxedminipage} \end{center} where $d'(n,k)$ and $d(n,k)$ are determined by (\ref{eq_threshold_dd'}). From lines 8-11 in Algorithm~\ref{algo2}, we observe that once $G(n)$ is larger than $\frac{c_k}{\mu_k}$, we should set $\theta_k^* = n$ and turn on as many servers as possible in group $k$. Groups with smaller $\frac{c_k}{\mu_k}$ will be turned on with higher priority, which is the $c/\mu$-rule stated in Theorem~\ref{theorem7}. With the performance difference formula (\ref{eq_diff5}), we see that the long-run average cost of the system will be reduced after each policy update in Algorithm~\ref{algo2}. When the algorithm stops, it means that the system average cost cannot be reduced anymore and the optimal threshold $\bm \theta^*$ is obtained. This procedure is also similar to the policy iteration in the traditional MDP theory. \begin{algorithm}[htbp] \caption{A $c/{\mu}$-rule based algorithm to find the optimal multi-threshold policy.}\label{algo2} \begin{algorithmic}[1] \State renumber the group indexes in ascending order of their $\frac{c_k}{\mu_k}$ values, i.e., $\frac{c_1}{\mu_1} \leq \frac{c_2}{\mu_2} \leq \cdots \leq \frac{c_K}{\mu_K}$; \State choose the initial threshold as $\bm \theta^*=(0,0,\cdots,0)_K$, which always turns on all servers; \Repeat \State set $\bm \theta = \bm \theta^*$, $n=1$, and $k=1$; \State compute or estimate $\eta$ of the system under threshold policy $\bm \theta$; \While{$k \leq K$ } \State compute $G(n)$ by using (\ref{eq_G-Recur}) recursively or by solving (\ref{eq_poisson}) and (\ref{eq_G}); \While{($G(n) > \frac{c_k}{\mu_k}$) $\&$ ($k \leq K$) } \State set $\theta^*_k = n$; \State set $k = k+1$; \EndWhile \State set $n = n+1$; \EndWhile \Until{$\bm \theta = \bm \theta^*$} \Return optimal $\bm \theta^*$. \end{algorithmic} \end{algorithm} Comparing Algorithms~\ref{algo1} and \ref{algo2}, we observe that the essence of these two algorithms is similar: computing $G(n)$ and updating policies iteratively. However, Algorithm~\ref{algo2} is much simpler as it utilizes the $c/\mu$-rule based multi-threshold policy. The $c/\mu$-rule, as an optimal policy, is very easy to implement in practice. After the value of $G(n)$ is obtained, we compare it with the groups' $\frac{c_k}{\mu_k}$ values; the resulting threshold extraction is sketched below.
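A minimal Python sketch of this threshold extraction, assuming a monotone PRF has already been computed and the groups are pre-sorted by $c/\mu$ (all numbers are illustrative assumptions):
\begin{verbatim}
def thresholds_from_G(G, c, mu, N):
    """Thresholds of Theorem 9: theta_k = min{ n : G(n) > c_k/mu_k }.
    G[n] holds G(n) for n = 1..N (G[0] is a dummy entry); returns None
    for a group whose threshold exceeds the truncated range."""
    return [next((n for n in range(1, N + 1) if G[n] > ck / muk), None)
            for ck, muk in zip(c, mu)]

# Illustrative monotone PRF values (assumed for demonstration only):
G = [None, 1.2, 1.6, 1.9, 2.1, 2.4, 2.6]   # G(1), ..., G(6)
print(thresholds_from_G(G, c=[7, 8, 5], mu=[6, 4, 2], N=6))  # -> [1, 4, 6]
\end{verbatim}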
In words, if $\frac{c_k}{\mu_k}$ is smaller than $G(n)$, we should turn on as many servers as possible in group $k$; otherwise, we turn off all servers in group $k$. Such a procedure induces a multi-threshold type policy, as stated in Theorem~\ref{theorem9}. More intuitively, we graphically demonstrate the above procedure by using an example in Fig.~\ref{fig_CMuMonotone}. The vertical axis represents the $c/{\mu}$ value of server groups, which are sorted in an ascending order. When $n$ increases and the system becomes more congested, we compute the value of the associated $G(n)$'s. As long as $G(n)$ is larger than $\frac{c_k}{\mu_k}$, we should turn on as many servers as possible for group $k$ and the associated $n$ is set as the threshold $\theta_k$. For the case of $n=6$, group 2 still has 1 server off although its $c/{\mu}$ is smaller than $G(n)$. This is because Proposition~\ref{pro1} requires that the total number of working servers not exceed $n$. Therefore, we can see that the $c/\mu$-rule prescribes turning on server groups from the bottom up, as illustrated in Fig.~\ref{fig_CMuMonotone}. This example demonstrates the monotone structure of the $c/{\mu}$-rule and the optimal threshold policy. Although Assumption 2 is reasonable for systems with non-human servers such as computers with different performance efficiencies (a faster computer has a smaller operating cost for processing each job), scale economies may not exist in systems with human servers, such as call centers, where a faster server may incur a much higher operating cost. Thus, it is necessary to investigate the robustness of the $c/\mu$-rule when Assumption 2 is not satisfied. This is done numerically in Example~6 in the next section. \begin{figure}[htbp] \center \includegraphics[width=1\columnwidth]{Fig_MuC.eps} \caption{An example of $c/{\mu}$-rule to determine servers' on-off, where groups are sorted in ascending order of their $c/{\mu}$ values.}\label{fig_CMuMonotone} \end{figure} \section{Numerical Experiments}\label{section_numerical} In this section, we conduct numerical experiments to verify the analytical results and gain useful insights about optimal policies. \subsection{Example 1: A general index policy case} First, we consider a system with 3 groups of servers. System parameters are as follows. \begin{itemize} \item Holding cost rate function: $h(n)=n$; \item Arrival rate: $\lambda = 10$; \item Number of groups: $K = 3$; \item Number of servers in groups: $\bm M=(M_1,M_2,M_3)=(3,4,3)$; \item Service rates of groups: $\bm \mu = (6,4,2)$; \item Operating cost rates of groups: $\bm c = (7,4,3)$. \end{itemize} Note that Assumption~\ref{assumption1} is satisfied since the holding cost rate function is $h(n)=n$, which is linear and hence increasing and convex. However, Assumption~\ref{assumption2} is not satisfied in this example as the descending order of $\bm \mu$ is different from the ascending order of ${c}/{\mu}$. Thus, the $c/{\mu}$-rule does not apply to this example. We use Algorithm~\ref{algo1} to find the optimal scheduling policy $d^*$ with the minimal average cost of $\eta^*=12.5706$. The average queue length $L$ (including customers in service) at each iteration is also illustrated along with the long-run average cost $\eta$ in Fig.~\ref{fig_ex1-a-eta}. Since the holding cost function is $h(n) = n$, the long-run average holding cost is the same as $L$. Thus, the difference between the $\eta$ and $L$ curves is the average operating cost.
Note that $L$ significantly increases at the second iteration, which corresponds to a scenario with fewer servers working and more customers waiting. As shown in Fig.~\ref{fig_ex1-a-eta}, the optimal solution is obtained after 4 iterations. We also plot the convex performance potential $g^*$ and the increasing PRF $G^*$ under the optimal policy $d^*$ in Fig.~\ref{fig_ex1-a-gG}, as predicted by Theorems~\ref{theorem4} and \ref{theorem5}. \begin{figure}[htbp] \centering \subfigure[Average cost and queue length during iterations.] {\includegraphics[width=0.49\columnwidth]{ex1-a-etaL.eps}\label{fig_ex1-a-eta}} \subfigure[Curves of $g^*$ and $G^*$.] {\includegraphics[width=0.49\columnwidth]{ex1-a-gG.eps}\label{fig_ex1-a-gG}} \caption{Optimization procedure and curves of $g^*$ and $G^*$, $\lambda=10$, $K=3$, $\bm M=(3,4,3)$, $\bm \mu=(6,4,2)$, $\bm c=(7,4,3)$, $\eta^*=12.5706$.}\label{fig_ex1a} \end{figure} The optimal scheduling policy is shown in Fig.~\ref{fig_ex1-a} for the queue length up to 30 as the optimal actions for $n>30$ remain unchanged, as stated in Corollary~\ref{corollary1} and Remark~2. In fact, the optimal action becomes $d^*(n)=(3,4,3)$, for any $n \geq 12$. The stair-wise increase in the number of working servers over the short queue-length range shown in Fig.~\ref{fig_ex1-a} reflects the fact that the optimal action should satisfy $d^*(n) \bm 1 \leq n$, as stated in Proposition~\ref{pro1}. Note that Fig.~\ref{fig_ex1-a} demonstrates that the optimal policy $d^*$ obeys the form of quasi bang-bang control defined in Theorem~\ref{theorem3} and the number of total working servers $||d^*(n)||_1$ is increasing in $n$, as stated in Theorem~\ref{theorem6_monotoned}. However, the monotone property of $d^*(n,k)$ in Theorem~\ref{theorem8_monotoned} does not hold since Assumption~\ref{assumption2} is not satisfied in this example. To demonstrate this point, we change the cost rate vector to $\bm c = (7,4,1.8)$ and keep other parameters the same as above. Using Algorithm~\ref{algo1}, we obtain the optimal policy as illustrated in Fig.~\ref{fig_ex1-b} after 5 iterations. We have $d^*(n)=(0,4,1)$ when $n=5$, while $d^*(n)=(2,4,0)$ when $n=6$. Therefore, we can see that the optimal policy of group 3, $d^*(n,3)$, is not always increasing in $n$. However, $||d^*(n)||_1$ is still increasing in $n$, which is consistent with Theorem~\ref{theorem6_monotoned}. \begin{figure}[htbp] \centering \subfigure[Optimal policy with $\lambda=10$, $K=3$, $\bm M=(3,4,3)$, $\bm \mu=(6,4,2)$, $\bm c=(7,4,3)$, $\eta^*=12.5706$.] {\includegraphics[width=0.49\columnwidth]{ex1-a.eps}\label{fig_ex1-a}} \subfigure[Optimal policy with $\lambda=10$, $K=3$, $\bm M=(3,4,3)$, $\bm \mu=(6,4,2)$, $\bm c=(7,4,1.8)$, $\eta^*=12.5659$.] {\includegraphics[width=0.49\columnwidth]{ex1-b.eps}\label{fig_ex1-b}} \caption{Optimal scheduling policies under different parameter settings.}\label{fig_ex1b} \end{figure} \subsection{Example 2: A $c/{\mu}$-rule case} We consider a system with the same set of parameters as that in the previous example except for the operating cost rates. Now we assume \begin{itemize} \item Operating cost rates of groups: $\bm c = (7,8,5)$. \end{itemize} With these new cost rates, the descending order of $\bm \mu$ is the same as the ascending order of $c/{\mu}$ of these groups, i.e., we have $\mu_1>\mu_2>\mu_3$ and $\frac{c_1}{\mu_1}<\frac{c_2}{\mu_2}<\frac{c_3}{\mu_3}$. Therefore, Assumption 2 is satisfied and the $c/{\mu}$-rule applies to this example.
Thus, the optimal policy is a threshold vector $\bm \theta = (\theta_1, \theta_2, \theta_3)$, as indicated by (\ref{eq_threshold_d}). We use Algorithm~\ref{algo2} to find the optimal threshold policy $\bm \theta^*$. From Fig.~\ref{fig_ex2-eta}, we can see that after 5 iterations the optimal threshold policy is found to be $\bm \theta^* = (1,9,21)$ with $\eta^* = 13.6965$. Comparing Examples~1 and 2, we note that both algorithms take around 4 or 5 iterations to converge, at a similar convergence speed. Algorithm~\ref{algo2} uses a threshold policy which has only 3 variables to be determined. However, the policy in Algorithm~\ref{algo1} is much more complex. Moreover, the $c/{\mu}$-rule significantly simplifies the search procedure in Algorithm~\ref{algo2}. Fig.~\ref{fig_ex2-gG} illustrates the curves of $g^*(n)$ and $G^*(n)$, which are also consistent with the structures stated in Theorems~\ref{theorem4} and \ref{theorem5}. \begin{figure}[htbp] \centering \subfigure[Average cost and queue length during iterations.] {\includegraphics[width=0.49\columnwidth]{ex2-etaL.eps}\label{fig_ex2-eta}} \subfigure[Curves of $g^*$ and $G^*$.] {\includegraphics[width=0.49\columnwidth]{ex2-gG.eps}\label{fig_ex2-gG}} \caption{Optimization procedure and curves of $g^*$ and $G^*$, $\lambda=10$, $K=3$, $\bm M=(3,4,3)$, $\bm \mu=(6,4,2)$, $\bm c=(7,8,5)$, $\eta^*=13.6965$.}\label{fig_ex2} \end{figure} \subsection{Example 3: Effect of traffic intensity} We study the effect of traffic intensity on the optimal policy by varying the arrival rate $\lambda$ in Example~2. Since the maximal total service rate is $\sum_{k=1}^{K} M_k \mu_k = 40$, we examine the range $0<\lambda < 40$ to ensure system stability. With $\lambda = [2\ 5\ 10\ 20\ 30\ 38\ 39]$, the optimal average cost $\eta^*$ and optimal thresholds are illustrated in Fig.~\ref{fig_ex3-eta} and Fig.~\ref{fig_ex3-theta}, respectively. As the traffic intensity increases (the traffic becomes heavier), i.e., $\lambda \rightarrow 40$, the average cost $\eta^*$ will increase rapidly and the optimal threshold policy $\bm \theta^*$ converges to $(1,4,8)$, which means that servers are turned on as early as possible. Note that the optimal threshold of the first group (the group with the largest service rate) is always $\theta^*_1 = 1$ due to the zero setup cost, which is consistent with Theorem~\ref{theorem_th1}. It is expected that the optimal threshold $\theta^*_1$ could take other values if a non-zero setup cost were considered. \begin{figure}[htbp] \centering \subfigure[Optimal average cost.] {\includegraphics[width=0.49\columnwidth]{ex3-eta.eps}\label{fig_ex3-eta}} \subfigure[Optimal thresholds.] {\includegraphics[width=0.49\columnwidth]{ex3-theta.eps}\label{fig_ex3-theta}} \caption{Optimization results under different workloads with $\lambda = [2\ 5\ 10\ 20\ 30\ 38\ 39]$, $K=3$, $\bm M=(3,4,3)$, $\bm \mu=(6,4,2)$, $\bm c=(7,8,5)$.}\label{fig_ex3} \end{figure} \subsection{Example 4: Trade-off of costs} For a system with the $c$/$\mu$-rule, the optimal threshold policy depends on the relative dominance of the holding cost and the operating cost. To study this effect on the optimal policy, we introduce an operating cost weight parameter $v$. The value of $v$ reflects the balance between the server provider's operating cost and the customer's waiting cost. In practice, $v$ depends on the system's optimization objective. The cost rate function (\ref{eq_f}) is modified as below. \begin{equation}\label{eq_fv} f(n,\bm m) = h(n) + v \cdot \bm m \bm c.
\end{equation}
Other parameters are the same as those in Example~2. Using Algorithm~\ref{algo2} and the set of operating cost weights $v = [0.1\ 0.3\ 0.5\ 1\ 2\ 3]$, we obtain the minimal average costs and the corresponding optimal threshold policies shown in Fig.~\ref{fig_ex4}. The curve in Fig.~\ref{fig_ex4-eta} is almost linear because, in steady state, the system mostly stays in states with small queue lengths, and the corresponding part of $\eta^*$ is linear in $v$. A small $v$ means that the holding cost $h(n)$ dominates the operating cost $\bm m \bm c$ in (\ref{eq_fv}). Therefore, each server group should be turned on earlier (smaller thresholds) in order to avoid long queues, which explains why the optimal thresholds are $\bm \theta^*=(1,4,8)$ for both $v=0.1$ and $v=0.3$. When $v$ is large, the operating cost dominates the holding cost. Thus, except for the first group (the most efficient one), the server groups are turned on only when the system is sufficiently congested (larger thresholds).
\begin{figure}[htbp]
\centering
\subfigure[Optimal average cost.] {\includegraphics[width=0.49\columnwidth]{ex4-eta.eps}\label{fig_ex4-eta}}
\subfigure[Optimal thresholds.] {\includegraphics[width=0.49\columnwidth]{ex4-theta.eps}\label{fig_ex4-theta}}
\caption{Optimization results under different cost weights with $v = [0.1\ 0.3\ 0.5\ 1\ 2\ 3]$, $\lambda=10$, $K=3$, $\bm M=(3,4,3)$, $\bm \mu=(6,4,2)$, $\bm c=(7,8,5)$.}\label{fig_ex4}
\end{figure}
\subsection{Example 5: Model scalability}
Although all the previous examples concern small systems with only 3 server groups, our approach can also handle large systems with many server groups and hundreds of servers. To demonstrate the model scalability, we consider a $c$/$\mu$-rule system (with scale economies) where $K$ increases as $[3\ 5\ 10\ 20\ 30\ 50]$. For ease of implementation, we set $M_k = 3$ for all $k$, $\bm \mu = (2,3,\cdots,K+1)$, and $\bm c = \bm \mu^{0.9}$, where the power is applied component-wise. This parameter setting satisfies the condition in Assumption~\ref{assumption2}, since $c_k/\mu_k = \mu_k^{-0.1}$ is decreasing in $\mu_k$. To keep the traffic intensity at a moderate level, we set $\lambda = 0.5 \cdot \bm M \bm \mu^T$, where $\bm M \bm \mu^T$ is the maximal total service rate of the system. The number of iterations required by Algorithm~\ref{algo2} for convergence is shown in Fig.~\ref{fig_ex5} for different values of $K$. We find that the number of iterations remains almost stable (around 3 or 4) as the system size $K$ increases, which indicates the good scalability of our approach: Algorithm~\ref{algo2} can be applied to large-scale systems. Note that in our model the state space remains the same but the action space grows exponentially with $K$. Therefore, the characterized optimal policy structure (e.g., the multi-threshold type) resolves not only the issue of the \emph{infinite state space} but also the \emph{curse of dimensionality of the action space}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\columnwidth]{ex5-iteration.eps}
\caption{Number of iterations needed by Algorithm~\ref{algo2} under different problem scales with $K = [3\ 5\ 10\ 20\ 30\ 50]$, $\lambda = 0.5 \cdot \bm M \bm \mu^T$, $M_k \equiv 3$, $\bm \mu = (2,3,\cdots,K+1)$, $\bm c = \bm \mu^{0.9}$.}\label{fig_ex5}
\end{figure}
\subsection{Example 6: Robustness of the $c/\mu$-rule}
When the condition of scale economies in Assumption~\ref{assumption2} does not hold, the optimality of the $c/\mu$-rule is not guaranteed.
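To make the comparisons below concrete, note that they ultimately rest on evaluating the long-run average cost of a stationary policy. The following Python sketch shows one minimal evaluator for this birth--death model; the truncation level \texttt{N}, the linear holding cost $h(n)=n$, and the greedy $c/\mu$-ordered activation in the usage example are illustrative assumptions only, not the exact specifications of Algorithms~\ref{algo1} and \ref{algo2}.
\begin{verbatim}
import numpy as np

def avg_cost(policy, lam, mu, c, h, N=500):
    # Stationary distribution of the truncated birth-death chain via
    # detailed balance: pi(n+1)/pi(n) = lam / (total service rate at n+1).
    log_pi = np.zeros(N + 1)
    for n in range(1, N + 1):
        rate = float(np.dot(policy(n), mu))  # must be > 0 for all n >= 1
        log_pi[n] = log_pi[n - 1] + np.log(lam / rate)
    pi = np.exp(log_pi - log_pi.max())
    pi /= pi.sum()
    # Long-run average cost: holding cost plus operating cost.
    return sum(pi[n] * (h(n) + float(np.dot(policy(n), c)))
               for n in range(N + 1))

def greedy_cmu(n, mu, c, M):
    # Activate servers in ascending c/mu order, never exceeding the
    # number of customers in the system (cf. Proposition 1).
    m, left = np.zeros(len(M), dtype=int), n
    for k in np.argsort(c / mu):
        m[k] = min(M[k], left)
        left -= m[k]
    return m

lam, mu = 10.0, np.array([6.0, 4.0, 2.0])
c, M = np.array([7.0, 8.0, 5.0]), np.array([3, 4, 3])
eta = avg_cost(lambda n: greedy_cmu(n, mu, c, M), lam, mu, c, h=lambda n: n)
print(round(eta, 4))
\end{verbatim}
Wrapping such an evaluator in a search over threshold vectors (or over per-state actions on the truncated state space) yields a brute-force baseline against which the outputs of both algorithms can be cross-checked.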
Since the $c/\mu$-rule is easy to implement, we investigate its robustness by numerically testing several scenarios in which the condition of scale economies does not hold. For these cases, we first use Algorithm~\ref{algo1} to find the true optimal solution. Then, we use Algorithm~\ref{algo2} to find the ``optimal'' threshold policy as if the $c/\mu$-rule were applicable, i.e., servers in groups with smaller $c/\mu$ are turned on with higher priority. Table~\ref{tab error} reveals the performance gaps between the optimal policy and the $c/\mu$-rule, where the relative error is defined as $(\hat{\eta}^* - \eta^*)/\eta^*$. The parameter setting is the same as that in Example~1, except that we choose different cost rate vectors $\bm c$ in different scenarios.
\begin{table}[htbp]
\centering
\begin{tabular}{c|ccc}
\toprule
$\bm c$ & $\eta^*$ by Algorithm~\ref{algo1} & $\hat{\eta}^*$ by Algorithm~\ref{algo2} & relative error\\
\midrule
$[7,4,3]$ & 12.5706 & 12.5706 & 0.00\% \\
$[7,4,1.8]$& 12.5659 & 13.3287 & 6.07\% \\
$[7,4,1]$ & 11.1580 & 11.1580 & 0.00\% \\
$[8,3,1]$ & 10.0241 & 10.0615 & 0.37\% \\
$[4,3,1]$ & 8.4044 & 9.2426 & 9.97\% \\
$[18,10,3]$& 23.4844 & 23.4844 & 0.00\% \\
\bottomrule
\end{tabular}
\caption{Relative error caused by applying the $c/\mu$-rule when the condition of scale economies does not hold, $\lambda=10$, $K=3$, $\bm M=(3,4,3)$, $\bm \mu=(6,4,2)$.}\label{tab error}
\end{table}
\begin{figure}[htbp]
\centering
\subfigure[Optimal policy by Algorithm~\ref{algo1} with $\bm c=(7,4,1)$; it is of threshold form with $\bm \theta^*=(8,4,1)$. The ``optimal'' threshold derived by Algorithm~\ref{algo2} is $\hat{\bm \theta}^*=(8,4,1)$; the relative error is 0.00\%.] {\includegraphics[width=0.49\columnwidth]{ex6-3.eps}\label{fig_ex6-3}}
\subfigure[Optimal policy by Algorithm~\ref{algo1} with $\bm c=(8,3,1)$; it is of threshold form with $\bm \theta^*=(11,1,5)$. The ``optimal'' threshold derived by Algorithm~\ref{algo2} is $\hat{\bm \theta}^*=(11,4,1)$; the relative error is 0.37\%.] {\includegraphics[width=0.49\columnwidth]{ex6-4.eps}\label{fig_ex6-4}}
\subfigure[Optimal policy by Algorithm~\ref{algo1} with $\bm c=(4,3,1)$; it is of threshold form with $\bm \theta^*=(1,7,4)$. The ``optimal'' threshold derived by Algorithm~\ref{algo2} is $\hat{\bm \theta}^*=(4,7,1)$; the relative error is 9.97\%.] {\includegraphics[width=0.49\columnwidth]{ex6-5.eps}\label{fig_ex6-5}}
\subfigure[Optimal policy by Algorithm~\ref{algo1} with $\bm c=(18,10,3)$; it is of threshold form with $\bm \theta^*=(11,4,1)$. The ``optimal'' threshold derived by Algorithm~\ref{algo2} is $\hat{\bm \theta}^*=(11,4,1)$; the relative error is 0.00\%.] {\includegraphics[width=0.49\columnwidth]{ex6-6.eps}\label{fig_ex6-6}}
\caption{Solutions derived by Algorithm~\ref{algo1} under different operating cost rate vectors with $\lambda=10$, $K=3$, $\bm M=(3,4,3)$, $\bm \mu=(6,4,2)$.}\label{fig_ex6}
\end{figure}
The first three cases in Table~\ref{tab error} are designed by changing two cost parameters of the original cost vector $\bm c=(7,8,5)$ used in Example~2, where the scale economies condition holds. We change the operating cost of group 2 from 8 to 4 and that of group 3 from 5 to 3, 1.8, and 1, respectively, while the other parameters remain unchanged. These parameter changes alter the $c/\mu$ ranking sequence from the original $1\rightarrow 2\rightarrow 3$ to $2\rightarrow 1\rightarrow 3$, $3\rightarrow 2\rightarrow 1$, and $3\rightarrow 2\rightarrow 1$, respectively (groups are indexed from the fastest, group 1, to the slowest, group 3). Note that in Case 1 (Example 1), the $c/\mu$ rankings of groups 1 and 2 switch while the ranking of group 3 is unchanged, so the scale economies condition fails.
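These ranking sequences are easy to verify numerically; the following short Python check (a convenience sketch, not part of our algorithms) prints the ascending $c/\mu$ order for every cost vector in Table~\ref{tab error}:
\begin{verbatim}
import numpy as np

mu = np.array([6.0, 4.0, 2.0])
for c in ([7, 4, 3], [7, 4, 1.8], [7, 4, 1],
          [8, 3, 1], [4, 3, 1], [18, 10, 3]):
    order = np.argsort(np.array(c) / mu) + 1  # groups in ascending c/mu order
    print(c, "->".join(str(k) for k in order))
\end{verbatim}
For the first three cost vectors, the output reproduces the sequences $2\rightarrow 1\rightarrow 3$, $3\rightarrow 2\rightarrow 1$, and $3\rightarrow 2\rightarrow 1$ stated above.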
Nevertheless, as Table~\ref{tab error} shows, the $c/\mu$-rule remains optimal in Case 1. This implies that a violation of the scale economies condition does not necessarily destroy the optimality of the $c/\mu$-rule. In Case 2, the $c/\mu$ ranking sequence becomes the reverse of the scale economies condition and a cost gap of 6.07\% occurs. Interestingly, in Case 3 a further cost reduction for group 3 restores the optimality of the $c/\mu$-rule. For the next two cases, we keep the cost of group 3 at 1 while the costs of groups 1 and 2 are changed. We find that the non-optimality of the $c/\mu$-rule in these cases causes less than 10\% additional cost compared with the optimal index policy. Furthermore, the $c/\mu$-rule can remain optimal even when the ranking is the exact reverse of the scale economies condition, as shown in Case 6. Graphically, we can show how the optimal policy differs from the $c/\mu$-rule based threshold policy. The optimal server schedule derived by Algorithm~\ref{algo1} in Case 1, shown in Fig.~\ref{fig_ex1-a}, is of threshold form with $\bm \theta^* = (5,1,8)$, which coincides with the ``optimal'' threshold derived by Algorithm~\ref{algo2}. The optimal policy derived by Algorithm~\ref{algo1} in Case 2, which is not of threshold form, is shown in Fig.~\ref{fig_ex1-b}, while the ``optimal'' threshold derived by Algorithm~\ref{algo2} is $\hat{\bm \theta}^* = (8,4,1)$. This policy difference results in a performance degradation of 6.07\%. For Case 3, the optimal solution derived by Algorithm~\ref{algo1}, illustrated in Fig.~\ref{fig_ex6-3}, is of threshold form with $\bm \theta^* = (8,4,1)$, and the ``optimal'' threshold derived by Algorithm~\ref{algo2} is also $\hat{\bm \theta}^* = (8,4,1)$; the two solutions coincide and the performance error is zero. The other cases are illustrated in the sub-figures of Fig.~\ref{fig_ex6} in a similar way. Since the $G(n)$ function plays a critical role in the optimality of the $c/\mu$-rule and depends on multiple system parameters, we cannot establish a general pattern for the optimality of the $c/\mu$-rule when the condition of scale economies does not hold. However, Table~\ref{tab error} and Fig.~\ref{fig_ex6} show that in some cases the $c/\mu$-rule remains optimal even though the condition of scale economies fails, and in the other cases the performance degradation caused by mistakenly applying the $c/\mu$-rule is tolerable. This suggests that the $c/\mu$-rule has good applicability and robustness, even when the condition of scale economies does not hold.
\section{Conclusion}\label{section_conclusion}
In this paper, we study the service resource allocation problem in a stochastic service system where servers are heterogeneous and classified into groups. Under a cost structure with customer holding and server operating costs, we investigate the optimal index policy (dynamic scheduling policy), which prescribes the number of working servers in each group at each possible queue length. Using the SBO theory, we characterize the structure of the optimal policy as a quasi bang-bang control type. A key technical result of this work is establishing the monotone increasing property of the PRF $G^*(n)$, a quantity that plays a fundamental role in the SBO theory. The necessary and sufficient condition and the monotone property of the optimal policy are then derived based on this property. Under an assumption of scale economies, we further characterize the optimal policy as the $c/\mu$-rule; that is, servers in groups with smaller $c/\mu$ should be turned on with higher priority.
The optimality of the multi-threshold policy is also proved. These optimality structures significantly reduce the complexity of the service resource allocation problem and resolve the curse of dimensionality in a general heterogeneous multi-server queueing model with an infinite state space. Based on these results, we develop efficient algorithms for computing the optimal scheduling policy and thresholds. Numerical examples demonstrate the main results and reveal that the $c/\mu$-rule has good scalability and robustness. A limitation of our model is that the setup and shutdown costs of each server are assumed to be zero; the cost of migrating customers among servers is also neglected. Taking these costs into account is a natural topic for future research. Moreover, we assume linear operating costs in this paper; it would be interesting to extend our results to a more general operating cost structure. Extending our results asymptotically to many-server networked settings under the fluid regime is another possible research direction.
\section*{Acknowledgement}
The authors would like to express their gratitude to Prof. Xi-Ren Cao, Prof. Christos Cassandras, Prof. Jian Chen, and Prof. Leandros Tassiulas for their valuable discussions and comments.